# bayesian learning

Posted by 35068973
 bayesian learning June 03, 2008 03:17PM Registered: 12 years ago Posts: 91 Rating: 0
Could any of you please show me how equation 20.1 was used to get the results shown in figure 20.1(a), and figure 20.1(b) for equation 20.2?

Your help would be greatly appreciated.
 Re: bayesian learning June 03, 2008 09:54PM Registered: 14 years ago Posts: 1,682 Rating: 0
Hmmmm... I haven't yet started on the reading for Assignment 2. When I get there I will see if I understand it and can assist you. But I still have 2 assignments before I get to this material.

```
,= ,-_-. =.
((_/)o o(\_))
`-'(. .)`-'
\_/
```
http://ilanpillemer.com
Entia non sunt multiplicanda praeter necessitatem
 Re: bayesian learning June 04, 2008 08:54AM Registered: 12 years ago Posts: 91 Rating: 0
thanks
 Re: bayesian learning June 18, 2008 06:00PM Registered: 14 years ago Posts: 1 Rating: 0
Suffering with the same problem. Wondering if registering for this course was such a good idea.
 Re: bayesian learning June 30, 2008 08:56AM Registered: 14 years ago Posts: 1,682 Rating: 0
I'm also struggling. I failed to grok it this weekend.

 Re: bayesian learning June 30, 2008 11:24AM Registered: 14 years ago Posts: 1,682 Rating: 0
Note: this is all written monospaced - you can set that in your settings.

Maybe what's getting me is that the alpha value would be different for each of the 10 selections from the bag?

Is that right?

It says on p. 713:
Quote
AIMA
For example, suppose the bag is really an all-lime bag (h5) and the first 10 candies are all lime; then P(d|h3) is 0.5^10, because half the candies in an h3 bag are lime.

I am just guessing desperately aloud, so if anyone can correct me, please do! I'm struggling here, grasping at straws.

If that's right, then:
P(d|h1) is 0.00^10
P(d|h2) is 0.25^10
P(d|h3) is 0.50^10
P(d|h4) is 0.75^10
P(d|h5) is 1.00^10

Thus if
P(hi|d) = alpha * P(d|hi) * P(hi), then
after one lime sweet is picked

P(h1|d) = alpha * 0.00 * 0.1
P(h2|d) = alpha * 0.25 * 0.2
P(h3|d) = alpha * 0.50 * 0.4
P(h4|d) = alpha * 0.75 * 0.2
P(h5|d) = alpha * 1.00 * 0.1

and after 10 then
P(h1|d) = alpha * 0.00^10 * 0.1
P(h2|d) = alpha * 0.25^10 * 0.2
P(h3|d) = alpha * 0.50^10 * 0.4
P(h4|d) = alpha * 0.75^10 * 0.2
P(h5|d) = alpha * 1.00^10 * 0.1

But the graph must be putting in the alpha to get them all to add to 1.0?

So with one sweet, alpha appears to be 2.0 (0.25 * 0.2 = 0.05, and the five unnormalized values then sum to 0.5, so alpha = 1/0.5):
h1:- 0.00 --> 0.00 (read off graph)
h2:- 0.05 --> 0.10 "
h3:- 0.20 --> 0.40 "
h4:- 0.15 --> 0.30 "
h5:- 0.10 --> 0.20 "

And for 10: it looks like alpha is 8.9562784214176628754360370763906
h1:- 0.00000000000000000000 --> 0.0? (read off graph)
h2:- 0.00000019073486328125 --> 0.0? "
h3:- 0.00039062500000000000 --> 0.0? "
h4:- 0.01126270294189453125 --> 0.10 "
h5:- 0.10000000000000000000 --> 0.90 "

I would thus assume that to answer question 20.1 we need to calculate what alpha would be after each of the ten draws, for each of the other 4 hypotheses as the true bag?

That would be 10 draws * 4 bags = 40 values of alpha to solve for, and 5 posterior values for each, so 200 numbers in all, just to plot the graphs. That's a lot. I plan to do this tonight.

This seems like a lot of work. So before I begin... am I right, or have I lost the plot?

Any comments, thoughts, or suggestions? Please don't hesitate to tell me; I will appreciate it, especially before I attack this question tonight.
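For what it's worth, the whole calculation above can be checked in a few lines. A sketch, assuming the priors 0.1/0.2/0.4/0.2/0.1 and lime fractions 0/0.25/0.5/0.75/1.0 from the AIMA candy example, with all draws coming up lime:

```python
# Posterior P(h_i | d) after n all-lime draws. alpha is not solved for
# separately: it is just 1 / (sum of the unnormalized terms).
priors = [0.1, 0.2, 0.4, 0.2, 0.1]    # P(h1) .. P(h5)
lime   = [0.0, 0.25, 0.5, 0.75, 1.0]  # P(lime | h_i)

def posteriors(n):
    unnorm = [p * (l ** n) for p, l in zip(priors, lime)]
    alpha = 1.0 / sum(unnorm)
    return [alpha * u for u in unnorm]

for n in range(11):
    print(n, [round(p, 4) for p in posteriors(n)])
```

After one lime this gives (0, 0.1, 0.4, 0.3, 0.2) with alpha = 2, and after ten limes h5 dominates at about 0.896, which matches the figure.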

 Re: bayesian learning June 30, 2008 12:00PM Registered: 12 years ago Posts: 91 Rating: 0
This is the generic equation (h2, for example):
P(h2|dN) = alpha * 0.2 * (0.25)^N

alpha is worked out separately for each number of sweets; for example, after 10 sweets the probabilities for h1, h2, h3, h4 and h5 must add up to one, and so on...

It would be better to use Excel...

BTW, did you guys get your assignment mark for COS492? I had to submit mine a day late due to the public holiday (I only got internet connected a month ago); I haven't received mine yet, and the lecturer is supposedly not in.
 Re: bayesian learning July 08, 2008 01:48AM Registered: 11 years ago Posts: 22 Rating: 0
Glory hallelujah, thank heavens I am not the only person scratching my head over this question.

I think I have cracked 20.1(a); my graphs are correctly predicting the hypotheses for which I generated data. But I am getting stuck on 20.1(b); I can't seem to make head or tail of this one.

Anyway, it's late and I am sick of this question. I suspect I am going to boot it and move on to the next question, else I might end up circling around this forever.
 Re: bayesian learning July 08, 2008 01:54AM Registered: 11 years ago Posts: 22 Rating: 0
PS. If anyone could give me a quick breakdown of how to calculate the first couple of values in Fig 20.1(b), I will be more than happy to email you my Fig 20.1(a) Excel spreadsheet.

I'm not sure how correct it is, but somehow, whether by hook or by crook, the graphs are coming out beautifully; heck knows how!
 Re: bayesian learning July 08, 2008 08:57AM Registered: 14 years ago Posts: 1,682 Rating: 0
You take all your P(h|d) values from (a).
You then multiply each one by its hypothesis's lime probability.
e.g.

S2=(0.25*O2) where O2 is a P(h|d).
T2=...
..
W2
You then sum them all.
e.g. =SUM(S2:W2)

And this sum is what you graph.
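The spreadsheet step above can also be sketched in Python (a sketch, not the official solution; priors and lime fractions as in the AIMA candy example, posteriors computed as in part (a)):

```python
# P(next candy is lime | d) = sum_i P(lime | h_i) * P(h_i | d)  (equation 20.2)
priors = [0.1, 0.2, 0.4, 0.2, 0.1]
lime = [0.0, 0.25, 0.5, 0.75, 1.0]

def posteriors(n):
    # posterior over the five bags after n all-lime draws (part (a))
    unnorm = [p * (l ** n) for p, l in zip(priors, lime)]
    return [u / sum(unnorm) for u in unnorm]

def p_next_lime(n):
    # weight each hypothesis's lime probability by its posterior, then sum
    return sum(l * p for l, p in zip(lime, posteriors(n)))

for n in range(11):
    print(n, round(p_next_lime(n), 4))
```

It starts at 0.5 before any data and climbs toward 1.0 as the all-lime evidence accumulates, which is the shape of figure 20.1(b).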

 Re: bayesian learning July 08, 2008 10:57PM Registered: 11 years ago Posts: 22 Rating: 0
Thanks a mil, I will give this a try!!
 Re: bayesian learning July 09, 2008 12:14AM Registered: 14 years ago Posts: 1,682 Rating: 0
If you figure out how to work out the MAP stuff in the next question, please assist me. It's confusing me for some reason, and I just left it out and moved on.

 Re: bayesian learning July 10, 2008 09:45PM Registered: 11 years ago Posts: 22 Rating: 0
Will do! I'm giving it a crack tonight; will let you know how far I get.
 Re: bayesian learning July 10, 2008 11:12PM Registered: 11 years ago Posts: 22 Rating: 0
Hmm, I had a look at this, and the best I can come up with is that the ML statistician would just take drug B (the most likely fix), but the Bayesian would take both drugs and wait for more data to reveal the true illness.

In the second scenario, the ML statistician probably wouldn't break disease B down into 2 separate hypotheses (keeping it simple), but the Bayesian would probably split the hypotheses into 3 separate hypotheses.

I'm afraid this is not based on anything more concrete than my very vague understanding of Bayesian statistics, so take anything I say with a pinch of salt!

Anyone else have any better ideas?
 Re: bayesian learning July 14, 2008 10:24AM Registered: 14 years ago Posts: 3,747 Rating: 0
Can someone please post a picture of their two graphs for h3 (20.1)? I just want to see if I am on the right track or if I have made some kind of terrible mistake.

--
"Knowledge has much better uses than self-pity and superiority"
 Re: bayesian learning July 14, 2008 01:30PM Registered: 14 years ago Posts: 1,682 Rating: 0

[attached graph image, not preserved]
 Re: bayesian learning July 14, 2008 03:01PM Registered: 14 years ago Posts: 3,747 Rating: 0
Thanks. As I suspected, I was off...
I think I finally have it right now, though :/

--
"Knowledge has much better uses than self-pity and superiority"
 Re: bayesian learning July 14, 2008 03:40PM Registered: 14 years ago Posts: 3,747 Rating: 0
Hmmm, if my h3 looks like yours, then my h5 is off slightly (same shape as in the book, but some of the plotted values differ).
Does your h5 match up completely with the book's?

--
"Knowledge has much better uses than self-pity and superiority"
 Re: bayesian learning July 14, 2008 09:58PM Registered: 14 years ago Posts: 1,682 Rating: 0
Is yours starting at zero or one? Mine is slightly different because I begin at one.

 Re: bayesian learning July 14, 2008 10:19PM Registered: 14 years ago Posts: 3,747 Rating: 0
zero

--
"Knowledge has much better uses than self-pity and superiority"
 Re: bayesian learning July 14, 2008 11:16PM Registered: 14 years ago Posts: 1,682 Rating: 0
Well, it will also depend on your sample set - the order in which the cherries and limes arrive.

 Re: bayesian learning July 14, 2008 11:35PM Registered: 14 years ago Posts: 3,747 Rating: 0
Yeah, but h5 is all limes, so the sample set will always be the same.
Basically my h5 comes out in a very, very similar shape to the book's, but some of the data points don't quite match what the book claims, so I am wondering if anyone else is seeing the same or if I have yet another mistake.

Although at this point it doesn't matter anyway, having left the other half of the assignment out, etc.

--
"Knowledge has much better uses than self-pity and superiority"
 Re: bayesian learning January 26, 2009 12:01AM Registered: 11 years ago Posts: 22 Rating: 0
I've been going over question 20.1 again this weekend (I remember breaking my head on this last time round as well, aaargh!).

Once again, I find that the only way I can get my graphs to work out to anything sensible is if I "fudge" the value of alpha over and over again to make the sum of all the probabilities add up to one. I must say, though, I don't like this very much; even if the graph comes out fine, it somehow doesn't feel very "mathematical". It's like I am massaging the data to look the way I expect it to look.
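For what it's worth, alpha never needs to be fudged: by definition it is 1 over the sum of the unnormalized terms, recomputed after each draw. A minimal sketch with the candy numbers from the book (10 all-lime draws):

```python
# Normalization constant for the candy example after 10 all-lime draws.
priors = [0.1, 0.2, 0.4, 0.2, 0.1]
lime = [0.0, 0.25, 0.5, 0.75, 1.0]

unnorm = [p * l ** 10 for p, l in zip(priors, lime)]
alpha = 1.0 / sum(unnorm)           # about 8.9563, no trial and error needed
posterior = [alpha * u for u in unnorm]
print(alpha, sum(posterior))        # the posteriors now sum to 1
```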

I don't think this book is very well suited to distance education. It seems very thorough, but a lot of information does seem to arrive "out of thin air", with no background to it, which doesn't make me feel very confident in my understanding. I also wish it had far more examples of how to practically implement all the abstract theory that is thrown around.