I realize you've probably aced this subject already, but I was wondering if you ever figured out 19.4. I'm having similar trouble (yes, it's part of the assignment this year too :) ). To jog your memory a little, part (a) reads as follows:

The resolvent C: True -> P(A,B)

Clause 1 C1: P(x,y) -> Q(x,y)

Clause 2 C2: ???

Now for resolution to apply, I realise that these sentences need to be in CNF, so I converted them to the following:

C: Not(True) V P(A,B), which is equivalent to False V P(A,B), which is equivalent to P(A,B)

C1: Not(P(x,y)) V Q(x,y), which I'll write as !P(x,y) V Q(x,y)

But for C1 and C2 to resolve to C, which has only a single literal, they must have exactly three literals between them, and C1 already has two. Since Q is not in C, !Q(x,y) (or anything that unifies with it) must be C2. However, that leaves us with !P(x,y) or, at best, !P(A,B) as the resolvent - not P(A,B).
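To make sure I wasn't fooling myself, I checked the step mechanically. Here is a rough Python sketch (my own representation, not from the book) that resolves C1 against the candidate C2 = !Q(A,B) and does indeed produce !P(A,B):

```python
# Literals are (negated?, predicate, args); variables lowercase, constants uppercase.
# This is just a toy binary-resolution check for the clauses discussed above.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify(a, b, subst):
    """Unify two terms/arg-tuples; return extended substitution or None."""
    if subst is None:
        return None
    if a == b:
        return subst
    if is_var(a):
        return unify_var(a, b, subst)
    if is_var(b):
        return unify_var(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(v, t, subst):
    if v in subst:
        return unify(subst[v], t, subst)
    subst = dict(subst)
    subst[v] = t
    return subst

def substitute(lit, subst):
    neg, name, args = lit
    return (neg, name, tuple(subst.get(a, a) for a in args))

def resolve(c1, c2):
    """Return all binary resolvents of clauses c1, c2 (lists of literals)."""
    resolvents = []
    for l1 in c1:
        for l2 in c2:
            if l1[0] != l2[0] and l1[1] == l2[1]:  # complementary pair
                s = unify(l1[2], l2[2], {})
                if s is not None:
                    rest = [substitute(l, s) for l in c1 if l is not l1]
                    rest += [substitute(l, s) for l in c2 if l is not l2]
                    resolvents.append(rest)
    return resolvents

C1 = [(True, "P", ("x", "y")), (False, "Q", ("x", "y"))]  # !P(x,y) V Q(x,y)
C2 = [(True, "Q", ("A", "B"))]                            # !Q(A,B)
print(resolve(C1, C2))  # -> [[(True, 'P', ('A', 'B'))]], i.e. !P(A,B), not P(A,B)
```

So the mechanics agree with my hand derivation, which is exactly why I'm stuck on where P(A,B) is supposed to come from.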

What am I doing wrong? Have I converted to clausal form incorrectly? Have I misunderstood the resolution step? Or do I just not fully grok Inverse Resolution yet?

Please help.

Hope it helps someone else too.

I've just managed to register and am still awaiting material for this module. Does anyone know when the first assignment is due?

In the "Inductive Learning with Inverse Deduction" section, the text seems to imply that C1 /\ C2 --> C, yet exercise 19.4 has C as True implying P(A,B), plus other bits and pieces of clauses.

I just don't get it. Is this syntax used anywhere else in the book, does anyone know?

I am basically just preparing myself for a horror of a paper; that way I won't hyperventilate when I see it.

Good luck to all of you in any case :)

The fact that the lecturer is elusive definitely doesn't make me feel any more at ease.

good luck..

Are the assignment questions a good indication of exam questions? (this seems unlikely considering how long most questions took to answer).

Is anyone else just about in a flat panic?

I don't get the impression the stats stuff is that important, but I have made sure I can do the Bayesian calculations if required in the exam. I have sped past most of chapter 20 except 20.1, the neural network stuff and the support vector machines. The other stuff just seems to harp on the same theme, but in more mathematical complexity than I think we are expected to get into.

I am completely not loving chapter 21 - the sample answers are exactly that - there's nothing to connect them with the heavy theory that makes up most of this chapter, and I am left completely in the dark as to where these answers came from.

I did formal logic today, so at least that bit I know - but it's the same story with the math stuff. Also not sure what to do on the swarm bit: I know what I think is the overview of it, but it would have been nice to actually get a model answer of what we need to know.

Once again, I find that the only way I can get my graphs to work out to anything sensible is if I "fudge" the value of alpha over and over again in order to make the sum of all the probabilities add up to one. Must say, though, I don't like this very much; even if the graph comes out fine, it somehow doesn't feel very "mathematical". It's like I am basically massaging the data to look the way I expect it to look.
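For what it's worth, I eventually realised alpha doesn't have to be found by trial and error - it's just the reciprocal of the sum of the unnormalized values, so it can be computed in one step. A tiny Python sketch (the numbers are made up purely for illustration):

```python
# Normalization constant: alpha = 1 / (sum of the unnormalized values),
# so the resulting probabilities sum to 1 by construction - no fudging.
# The input values below are invented just to show the mechanics.

unnormalized = [0.12, 0.08, 0.20]   # e.g. P(e|h) * P(h) for each hypothesis h

alpha = 1.0 / sum(unnormalized)
probabilities = [alpha * v for v in unnormalized]

print(alpha)               # ~2.5 for these numbers
print(probabilities)       # each value scaled by alpha
print(sum(probabilities))  # 1.0, up to floating-point rounding
```

The graph comes out the same, but at least this way alpha is a computed quantity rather than a knob I keep twiddling.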

I don't think this book is very good as a guide for distance education. It seems to be very thorough, but a lot of information does seem to arrive "out of thin air", with no background to it, which doesn't make me feel very confident in my understanding. I also wish it had a heck of a lot more examples of how to practically implement all the abstract theory that is thrown around.

I am planning to work through every exercise in chapters 18 - 21 before the exam as well - is anyone else planning to do this, and do you want to share answers? I have really struggled with 19.4; I just can't seem to understand the semantics here in terms of the trees in figures such as Fig. 19.13. Anyone else have this problem? And do you think p.703 "Inductive learning with inverse deduction" is important, considering the tutorials just concentrated on the FOIL algorithm, aka p.701 "Top-down inductive learning methods"?

Another problem I am having is with the jargon - every single concept, algorithm, or learning method seems to need a name at least 5 - 10 words long, and I just can't keep track of all the concepts I have read. All the names are merging into one that sounds like "Intuitive inductive explanation-based learning attribute decision net tree gooblegook googlegook something something" :(( Really struggling to differentiate between the different concepts, I'm afraid.

I haven't done any maths since 2001, and my formal logic course was 2 years ago - all the differentiation and logarithms are scaring me silly; I can't seem to remember the detail any more. It used to come so easily; now I'm afraid I feel proud of myself if I can just remember what a logarithm actually is!

I am just hoping I pass - this is my 10th and last exam and then I'll be done, with luck!!

I am mainly using the internet to do Assignment 3, so if anyone has managed to track down better sources, it would be a help if you could list them.

Personally, I just feel that it is the only real feedback we get as to how we are progressing, and if we need to hand in assignments on time, we should get them back in a reasonable amount of time too. Two months for 20-odd students seems a very long time.

Although at this point it doesn't matter anyway, having left the other half of the assignment out, etc.