Example paper - Solution needed

Posted by Student08 
Example paper - Solution needed
September 25, 2007 03:11AM
Does anyone have a solution to the exam paper TUT103/2007?

I am specifically looking for the neural network question (question 1).

Regards
Re: Example paper - Solution needed
September 25, 2007 11:38AM
I actually believe that the examiners need to provide the solutions to this past paper. It is a difficult subject, and I am having a huge problem trying to understand what needs to be done when the questions are asked.

I've put a lot of my prepping for the upcoming exams into this subject, and when I look at the past paper, I don't know what is to be done. If we had the solutions, at least then we could know whether we are on the right track as to what is requested by the questions.

Regards
Re: Example paper - Solution needed
September 26, 2007 12:49AM
Yea, I feel like I have the wrong handbook when looking at the exam questions.
Re: Example paper - Solution needed
September 26, 2007 03:30PM
I am also struggling with the exam paper, especially question 1.

1.2 (0 1 0) = 0 or (1 and 0) = 0
    (1 0 0) = 1 or (0 and 0) = 1
    (0 0 1) = 0 or (0 and 1) = 0
    (1 1 1) = 1 or (1 and 1) = 1

1.3 W : weight associated with the i-th connection
c : learning rate parameter
d : desired output
f : output of the perceptron on the example
X : input vector

1.4 W1 - 0.0
threshold - 0.0
c - 1.0
d1 - 0 (is this from 1.2 ?)
f1 - or should this be 0 from 1.2
X1 - ?

Does anyone know of any examples that we can work through? I have found the following websites: file:///D:/Study_2007/COS351-D/Notes/20_C/C20_311perc.htm and www.cs.vu.nl/~elena/slides03/nn06_1_2light.ppt.

But I keep walking into the same wall: what are f and x?
Re: Example paper - Solution needed
September 26, 2007 04:35PM
I am desperately looking for Tutorial Letter 103; I have not received it in the post yet and it does not seem to be available on myUnisa either. If somebody has an electronic copy, could they kindly e-mail it to charlvn@charlvn.za.net - much appreciated!!
avatar Re: Example paper - Solution needed
September 27, 2007 07:55AM
I've sent you a copy.
Anonymous User
Re: Example paper - Solution needed
September 27, 2007 01:08PM
Please, anyone, email me a copy of the COS 351 D past exam paper, plus any additional tips for the exam from anywhere. It seems we are almost completely in the dark with this module.

emails: jmakore@gov.bw joshmmakore@yahoo.com
Re: Example paper - Solution needed
September 27, 2007 02:20PM
Reanie: Many thanks! Much appreciated! I owe you cake.

Soco: Sent you a copy as well to both e-mail addresses provided.
Anonymous User
Re: Example paper - Solution needed
September 27, 2007 02:31PM
Thanks Charl, and good luck all.
avatar Re: Example paper - Solution needed
September 28, 2007 05:52PM
Please send me a copy of the past paper too.

ilan.pillemer@gmail.com

I plan to have learnt neural networks by the end of the weekend - I will post my answer if you give me the past paper!

 
  ,= ,-_-. =.
 ((_/)o o(\_))
  `-'(. .)`-'
      \_/
http://ilanpillemer.com
Entia non sunt multiplicanda praeter necessitatem
avatar Re: Example paper - Solution needed
September 28, 2007 06:24PM
Please send me a copy too.

trishennaidoo@gmail.com

Thank you
Re: Example paper - Solution needed
September 29, 2007 03:40PM
ilanpillemer & NinjaMojo: I just sent off the tutorial letter to the e-mail addresses you provided.
avatar Re: Example paper - Solution needed
September 29, 2007 11:04PM
The fixed error increment formula is given in the textbook as formula 20.12. At first it looks different to the formula in the paper, but it makes sense once you read footnote 10 at the bottom of the page: taking the footnote into consideration, the formulas become the same.

Thus
W is the weight of the input
c is learning rate
d is desired outcome
f is actual outcome
(d - f) is the error, so it is 0 if there is no error, and then no change happens.
(d - f) is thus positive or negative depending on the kind of error, i.e. 1 or -1.
x is the actual input (that seems to mean that if the test input is (0,0,0) then no learning can occur, whether there is an error or not?!)

Am I confused?

I will actually apply it tomorrow to train the network. I hope it works and I am not confused.

 
Re: Example paper - Solution needed
September 30, 2007 06:31AM
I just do not get this training of the neural network.

Regarding the learning algorithm (figure 20.21, page 742) and question 1.4 from tut103:

f = x1 or x2 and x3
initial weights = 0.0 I assume this means W1 = 0, W2 = 0 and W3 = 0
initial threshold = 0.0 (I am not sure what this means.) Maybe if g(in) >= 0.0 then output = 1 and if g(in) < 0.0 then output = 0?
constant learning rate = 1

So, if we take the first training example (0 1 0):
in = sum(Wi * xi)
= W1x1 + W2x2 + W3x3
= 0*0 + 0*1 + 0*0
= 0

then g(in): is in >= 0?
= yes
= 1

and y = 0 or (1 and 0)
= 0

then Err = y - g(in)
= 0-1 = -1

Now Wi = Wi + c * Err * xi
W1 = W1 + c * Err * x1
= 0 + 1 * -1 * 0
= 0

So weight vector is now (0 ? ?)

Do we now repeat on the same example for x2 and x3?

When do you adjust the weights?

I am definitely confused.
avatar Re: Example paper - Solution needed
September 30, 2007 01:31PM
[removed because wrong]

 
avatar Re: Example paper - Solution needed
September 30, 2007 02:32PM
[removed because wrong]

 
Re: Example paper - Solution needed
September 30, 2007 03:22PM
I am going to ask a very stupid question:

How do you adjust the weights?
avatar Re: Example paper - Solution needed
September 30, 2007 04:06PM
[removed because wrong]

 
avatar Re: Example paper - Solution needed
October 01, 2007 11:52AM
[removed because wrong]

 
Re: Example paper - Solution needed
October 01, 2007 12:22PM
I am still not sure if I get this, but this is how far I got:

Wi(new) = Wi + c(d - f)xi

c = 1 and threshold = 0

x = (0 1 0) W = (0 0 0)

d1 = 0 v (1 ^ 0) = 0
in = 0*0 + 0*1 + 0*0 = 0
f1 = 1 (because 0 >= threshold) [Is this correct? If input >= threshold then output = 1, else output = 0]

W1 = 0 + 1(0 - 1) 0
= 0
W2 = 0 + 1(0 - 1)1
= -1
W3 = 0 + 1(0 - 1)0
= 0

The new weights are (0 -1 0)

From the above I got to the following:

x        W          new W
(0 1 0)  (0 0 0)    (0 -1 0)
(1 0 0)  (0 -1 0)   (0 -1 0)
(0 0 1)  (0 -1 0)   (0 -1 -1)
(1 1 1)  (0 -1 -1)  (1 0 0)

(0 1 0)  (1 0 0)    (1 -1 0)
(1 0 0)  (1 -1 0)   (1 -1 0)
(0 0 1)  (1 -1 0)   (1 -1 -1)
(1 1 1)  (1 -1 -1)  (2 0 0)

This is where I stopped; I had to get to work. I will carry on tonight.
avatar Re: Example paper - Solution needed
October 01, 2007 01:31PM
I corresponded with the lecturers and have an answer to the problems.

Here is a C++ program that trains the network, together with its output.
The trick is setting an appropriate bias: it does not train properly without a bias. It does train properly with a bias of -1, as demonstrated below. However, for a bias of -2 or a bias of 0, it would not be trained perfectly.

Any thoughts on how to choose a bias?

//Program to illustrate the perceptron training rule

#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>

using namespace std;

int
main()
{
      float w [4]; //array to hold perceptron weights
      float t, o;
      const double eta = 1;
      int i;
      int count = 0;
      bool correct = 0;

      //define and initialize training data
      int train [4] [4] =              {  0, 1, 0, 0,
                                          1, 0, 0, 1,
                                          0, 0, 1, 0,
                                          1, 1, 1, 1 };

      int x0, x1, x2, x3; //will hold the inputs
      float weightsum, deltaw1, deltaw2, deltaw3;

      //initialize random number generator
      srand((unsigned)(time(0)));
      rand();

      //set bias
      x0 = 1;
      //w[0] = fabs((float)(rand())/(32767/2)-1);
      w[0] = -1;
      for ( i = 1; i < 4; ++i)
            //  w[i] = (float)(rand())/(32767/2) - 1;
            w[i] = 0;
 
      cout << "INITIAL WEIGHTS" << endl
             << "---------------" << endl;
      cout << "w0 = " << w[0] << endl;
      cout << "w1 = " << w[1] << endl;
      cout << "w2 = " << w[2] << endl;
      cout << "w3 = " << w[3] << endl;
     

      //implement perceptron training rule as long as training examples are
      //classified incorrectly
      while(!correct)
      {
            correct = 1;
            count++;
            for (i = 0; i < 4; ++i)
            {
                  x1 = train[i][0]; //input for x1
                  x2 = train[i][1]; //input for x2
                  x3 = train[i][2]; //input for x3
                  //find weighted sum of inputs and threshold values
                  weightsum = x0 * w[0] + x1 * w[1] + x2 * w[2] + x3 * w[3];
                  //determine output of perceptron
                  if (weightsum > 0) o = 1;
                  else o = 0;
                  cout << "training set no " << i + 1;
		  cout << " o = " << o;
                  //determine true output
                  t = train[i][3];
                  cout << " d = " << t << endl;
                  //if the output is incorrect, adjust the weights
                  if (o != t)
                  {
                        deltaw1 = eta * (t - o) * x1;
                        w[1] = w[1] + deltaw1;
                        deltaw2 = eta * (t - o) * x2;
                        w[2] = w[2] + deltaw2;
			deltaw3 = eta * (t - o) * x3;
                        w[3] = w[3] + deltaw3;
                        correct = 0;
			cout << endl << endl;
			cout << "NEW WEIGHTS" << endl
			     << "-------------" << endl;
			cout << "w0 = " << w[0] << endl;
			cout << "w1 = " << w[1] << endl;
			cout << "w2 = " << w[2] << endl;
			cout << "w3 = " << w[3] << endl;

                  }
            }
      }

      //OUTPUT
      cout << endl << endl;
      cout << "FINAL WEIGHTS" << endl
             << "-------------" << endl;
      cout << "w0 = " << w[0] << endl;
      cout << "w1 = " << w[1] << endl;
      cout << "w2 = " << w[2] << endl;
      cout << "w3 = " << w[3] << endl;
      cout << "The delta rules were invoked " << count << " times"<<endl;
      return 0;
}

OUTPUT
INITIAL WEIGHTS
---------------
w0 = -1
w1 = 0
w2 = 0
w3 = 0
training set no 1 o = 0 d = 0
training set no 2 o = 0 d = 1


NEW WEIGHTS
-------------
w0 = -1
w1 = 1
w2 = 0
w3 = 0
training set no 3 o = 0 d = 0
training set no 4 o = 0 d = 1


NEW WEIGHTS
-------------
w0 = -1
w1 = 2
w2 = 1
w3 = 1
training set no 1 o = 0 d = 0
training set no 2 o = 1 d = 1
training set no 3 o = 0 d = 0
training set no 4 o = 1 d = 1


FINAL WEIGHTS
-------------
w0 = -1
w1 = 2
w2 = 1
w3 = 1
The delta rules were invoked 2 times

 
Re: Example paper - Solution needed
October 01, 2007 07:32PM
Is there an error here?
The pictures in 20.17 show the weight as positive, but the rules for the perceptron say that the total sum, including this weight, should be positive for a 1 output.

Should the W0,i not be negative in these pictures?

Anonymous User
Re: Example paper - Solution needed
October 01, 2007 08:39PM
I did not get tut letter 103 either. Can someone mail it to me please?
Very nice of Unisa and our lecturers to send something like that out, but not to all students, and then not even have it in the download material here on Osprey or on myUnisa.
Thanks to whoever mails it; I should've given you the bucks I paid Unisa for this course.
avatar Re: Example paper - Solution needed
October 01, 2007 10:21PM
[Edited]
Tut 103 is on Osprey downloads, the COS351D_2007_103.pdf file.
Re: Example paper - Solution needed
October 02, 2007 02:27AM
ilanpillemer - Have you figured out how to get the bias weight?

I found this -
The Learning Rule

The perceptron is trained to respond to each input vector with a corresponding target output of either 0 or 1. The learning rule has been proven to converge on a solution in finite time if a solution exists.

The learning rule can be summarized in the following two equations:
b = b + [ T - A ]
For all inputs i:
W(i) = W(i) + [ T - A ] * P(i)

Where W is the vector of weights, P is the input vector presented to the network, T is the correct result that the neuron should have shown, A is the actual output of the neuron, and b is the bias.

look here http://www.codeproject.com/cs/algorithms/NeuralNetwork_1.asp
Re: Example paper - Solution needed
October 02, 2007 05:35AM
I found something similar on the following website: http://hagan.okstate.edu/4_Perceptron.pdf

w1_new = w1_old + e*p
       = w1_old + (t - a)p.   (4.34)

This rule can be extended to train the bias by noting that a bias is simply
a weight whose input is always 1. We can thus replace the input in Eq.
(4.34) with the input to the bias, which is 1. The result is the perceptron
rule for a bias:

b new = b old + e

e: perceptron error
b: bias
avatar Re: Example paper - Solution needed
October 02, 2007 08:36AM
Hi,

If you do add that bias-changing rule, it does always work for a linearly separable function.

The problem is setting a suitable starting bias.

And the problem with this is that the training set is not complete (notice <0 1 1> is not in the training set).

If you start with a bias of -1 then the resulting network will correctly respond to the input <0 1 1>. If you choose a different starting bias, it won't. (It will still work correctly on the training set.)

I have also realised that changing the threshold is the same as changing the bias.

You could subtract the error from the threshold for the same effect.

 
Re: Example paper - Solution needed
October 02, 2007 08:47AM
I started with an initial bias of 0. The weights I ended up with were (2 -1 0), and just as you say it did not work with x = (0 1 1).

I noticed the question in tut103 said to set the initial threshold to 0.0. I suppose this means that, as you mentioned, it should also change.
avatar Re: Example paper - Solution needed
October 02, 2007 09:23AM
And changing the threshold is the same as changing the bias.

 
Re: Example paper - Solution needed
October 02, 2007 11:15AM
Hey guys,

I wouldn't mind a copy of the answers for 103 if someone would be so kind.

You can email me at peter@neurosys.co.za