I realize you've probably aced this subject already, but I was wondering if you ever figured out 19.4. I'm having similar trouble (yes, it's part of the assignment this year too :) ). To jog your memory a little, part a reads as follows:

The resolvent C: True -> P(A,B)

Clause 1 C1: P(x,y) -> Q(x,y)

Clause 2 C2: ???

Now for resolution to apply, I realise that these sentences need to be in CNF, so I converted them to the following:

C: Not(True) V P(A,B), which is equivalent to: False V P(A,B), which is equivalent to P(A,B)

C1: Not(P(x,y)) V Q(x,y), which I'll write as !P(x,y) V Q(x,y)

But for C1 and C2 to resolve to C, which has only a single literal, they must have exactly three literals between them, and C1 already has two. Since Q is not in C, !Q(x,y) (or anything that unifies with it) must be C2. However, that leaves us with !P(x,y) or, at best, !P(A,B).
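To make the counting argument concrete, here's a quick sketch I knocked up of the binary resolution step (ground/propositional case only, ignoring unification; the clause representation and names are my own, not from the book):

```python
# Minimal sketch of binary resolution on ground clauses.
# A clause is a frozenset of literal strings; "!" marks negation.
# Real inverse resolution also needs unification over x, y etc.

def negate(lit):
    """Return the complementary literal."""
    return lit[1:] if lit.startswith("!") else "!" + lit

def resolve(c1, c2):
    """Yield every resolvent of clauses c1 and c2."""
    for lit in c1:
        if negate(lit) in c2:
            yield (c1 - {lit}) | (c2 - {negate(lit)})

C1 = frozenset({"!P(A,B)", "Q(A,B)"})   # P -> Q, grounded
C2 = frozenset({"!Q(A,B)"})             # the candidate second clause

for resolvent in resolve(C1, C2):
    print(sorted(resolvent))            # ['!P(A,B)'] -- not P(A,B)
```

Which is exactly the problem: resolving those two gives !P(A,B), so something else must be going on in the exercise.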

What am I doing wrong? Have I converted to clausal form incorrectly? Or have I misunderstood the resolution step? Or do I just not fully grok Inverse Resolution yet?

Please help.

The "Inductive Learning with Inverse Deduction" section seems to imply that C1 /\ C2 --> C, yet exercise 19.4 has C = True implying C1 and other bits and pieces of clauses.

I just don't get it. Is this syntax used anywhere else in the book? Does anyone know?
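For what it's worth, the way I read "C1 /\ C2 --> C" is as entailment (every model of C1 and C2 satisfies C), not as a clause in its own right. A brute-force propositional check of the ground version from the earlier post (my own illustration; the helper name is made up):

```python
# Brute-force propositional entailment check, illustrating the
# "C1 /\ C2 |= C" reading of inverse deduction (ground case only).
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff every model of all premises also satisfies conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(f(model) for f in premises) and not conclusion(model):
            return False
    return True

# Ground clauses: C1 = !P V Q, C2 = !Q, C = !P
C1 = lambda m: (not m["P"]) or m["Q"]
C2 = lambda m: not m["Q"]
C  = lambda m: not m["P"]

print(entails([C1, C2], C, ["P", "Q"]))   # True: C1 /\ C2 |= C
```

So the section's arrow is the forward (deductive) direction; inverse deduction then asks for a C2 that makes the entailment hold, given C and C1.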

I am basically just preparing myself for a horror of a paper, that way I won't hyperventilate when I see it.

Good luck to all of you in any case :)

The fact that the lecturer is elusive definitely doesn't make me feel any more at ease.

good luck..

Are the assignment questions a good indication of exam questions? (This seems unlikely, considering how long most questions took to answer.)

Is anyone else just about in a flat panic?

I don't get the impression the stats stuff is that important, but I have made sure I can do the Bayesian calculations if required in the exam. I have sped past most of chapter 20 except 20.1, the neural network stuff and the support vector machines. The other stuff just seems to harp on the same theme, but with more mathematical complexity than I think we are expected to get into.

I am completely not loving chapter 21 - the sample answers are exactly that: there's nothing to connect them with the heavy theory that makes up most of this chapter, and I am left completely in the dark as to where these answers came from.

I did formal logic today, so at least that bit I know, but it's the same story with the maths stuff. Also not sure what to do on the swarm bit; I know what I think is the overview of it, but it would have been nice to actually get a model answer of what we need to know.

I am planning to work through every exercise in chapters 18 - 21 as well, before the exam. Is anyone else planning to do this, and do you want to share answers? I have really struggled with 19.4; I just can't seem to understand the semantics here in terms of the trees in figures such as Fig. 19.13. Anyone else have this problem? And do you think p.703 "Inductive learning with inverse deduction" is important, considering the tutorials just concentrated on the FOIL algorithm, i.e. p.701 "Top-down inductive learning methods"?

Another problem I am having is with the jargon: every single concept, algorithm, or learning method seems to need a name at least 5 - 10 words long, and I just can't keep track of all the concepts I have read. All the names are merging into one that sounds like "Intuitive inductive explanation-based learning attribute decision net tree gooblegook googlegook something something" :(( Really struggling to differentiate between the different concepts, I'm afraid.

I haven't done any maths since 2001 and my formal logic course was 2 years ago - all the differentiation and logarithms are scaring me silly; I can't seem to remember the details any more. It used to come so easily; now I'm afraid I feel proud of myself if I can just remember what a logarithm actually is!

I am just hoping I pass - this is my 10th and last exam and then I'll be done, with luck!!