Revision

Posted by sheepapple 
Revision
April 24, 2011 01:03PM
How are you guys doing with your revision? Are you going through any past papers? I pretty much have the equation-solving methods down (Newton's, secant, bisection, Gauss-Jordan, Jacobi, Gauss-Seidel) so far.

The one section I'm battling with a little (not battling, just going slow) is solving systems of nonlinear equations using Newton's method. Thoughts/pointers to resources?
Re: Revision
April 25, 2011 06:08PM
Ja, this section is not the greatest piece of mathematical writing ever produced, that's for sure.

What I'm not getting is why f(x,y) and g(x,y) both have to be zero. We're looking at f(x,y) == g(x,y), yes, but why the extra condition? Obviously when we have f = g it follows that f - g = 0 ...

OK so then another way of putting the "intersection condition" would be to say we want f-g = 0...

But surely f = 0 if x = 0 (just looking at the graph), at which point g is at its "maximum and minimum"? In other words you're specifying an inequality if you insist on this. Doesn't quite make it through my skull bone into the brain cavity, I'm afraid...

I suppose I'd better go and think a bit about that and see if I can come back with anything.
Re: Revision
April 25, 2011 09:55PM
Take a look at Question 2 of Jan/Feb 2009 and Question 1c of Jan/Feb 2010...
Re: Revision
April 25, 2011 10:03PM
. .. ... ....
Another momentary lapse of reason there.
You can take any equation whatsoever, and rearrange it so [AllTermsArePushedHere] = 0

Take LHS = RHS, for instance. Subtract RHS from both sides and you have LHS - RHS = 0.

That's why f and g "have to be zero". We simply rearrange them into that form.

OK, so that takes me to just past 1.8 then.

To get to 1.8, write f and g as Taylor series. Remember that these are functions of two variables, so their derivative terms are now sums of partial-derivative terms.

Where the one-variable expansion has f + f'(delta-x), you now use f + fx(delta-x) + fy(delta-y).

Rearrange those and you get the "TS formulas for f and for g" in the box called 1.8

Then comes the application of Newton's Method. Now I still have to go and fiddle around there, but I'd imagine that the formula remains essentially the same. Only now your f' term downstairs is going to be an fx + fy term. This has nothing to do with those TS approxes for f and g that the rearrangement in 1.8 gives. This is from the definition of Newton's method, slightly adapted.

What you'll have is an approximation of e.g. f upstairs (you could cancel the minuses, I think), and this will make use of 1.8.
Then downstairs you have a slightly similar expression that simply got there by fiddling with the derivation.
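
To make the mechanics concrete, here's a rough Python sketch of one step of that two-variable Newton iteration. The f and g below are just placeholders I made up (not the pair from the textbook), with the partial derivatives worked out by hand:

# One Newton step for a system of two nonlinear equations f(x,y) = 0, g(x,y) = 0.
# The Jacobian entries (fx, fy, gx, gy) are supplied as functions of (x, y).
def newton_step_2d(f, g, fx, fy, gx, gy, x0, y0):
    F, G = f(x0, y0), g(x0, y0)
    a, b = fx(x0, y0), fy(x0, y0)
    c, d = gx(x0, y0), gy(x0, y0)
    # Solve the 2x2 linear system (the "box 1.8" form):
    #   a*dx + b*dy = -F
    #   c*dx + d*dy = -G
    det = a * d - b * c
    dx = (-F * d + G * b) / det
    dy = (-G * a + F * c) / det
    return x0 + dx, y0 + dy

# Made-up example system, just to have something runnable:
f  = lambda x, y: x**2 + y**2 - x - 2   # x^2 + y^2 - x = 2, rearranged to = 0
g  = lambda x, y: y - x**2              # y = x^2, rearranged to = 0
fx = lambda x, y: 2*x - 1
fy = lambda x, y: 2*y
gx = lambda x, y: -2*x
gy = lambda x, y: 1.0

x, y = 1.0, 1.0   # arbitrary starting guess
for _ in range(5):
    x, y = newton_step_2d(f, g, fx, fy, gx, gy, x, y)
print(x, y)       # converges towards the intersection of the two curves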

I don't know. Just thought I'd better correct my potentially misleading "phase 1 puzzlement" first. How far am I from the actual problem zone now?
Re: Revision
April 25, 2011 10:14PM
Ja I couldn't understand why you couldn't figure out that you can make f = g = 0 or f - g = 0. Confused me quite a bit for a moment.

Anyways I went through those initial motions of taking df/dx and df/dy for f(x, y). I did the same for g(x, y) and took dg/dx and dg/dy. I then applied eqn 1.8, but got no further, simply because I was still stuck on something in the back of my mind. I needed to know where the book got x0 and y0 to start with, but I now realise this is something that gets supplied, since Q1c of Jan/Feb 2010 supplies similar parameters - in fact the question is almost identical. Q2b of Jan/Feb 2009 is something else though, and no starting points are given!

Would you say x^2 + y^2 - x = 2 is a flattened circle with a y-radius of 2?
Re: Revision
April 26, 2011 12:13PM
Yes, that sounds like a good subtraction.

I've been more or less through it now, and this is my current opinion: In actual fact box 1.8 is a "Newtons method expression". Go back to the "1-D" version of Newton's method, and see that essentially you're doing little more than expressing the tangent as f'(x) = f(x)/(delta-x), and then rearranging that a bit. Now look at 1.8, and you'll see that really you just have a slightly more complicated version of the same thing.

You have partial-WRT-x + partial-WRT-y as your f'(x,y), your delta-x and delta-y are just on the other side of the equation as factors, rather than on this side and downstairs, and your f(x,y) has its very own side of the equation, instead of being divided somehow by the partial derivatives.

In logical terms, the solution of that system will be true in exactly the same conditions for which a logically equivalent expression (like the more "direct" version of Newton's method) is true.
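
Written out side by side, in the same loose notation (my own paraphrase, not a quote from the book):

1-D:    f'(x0) * (x1 - x0) = -f(x0)    which rearranges to    x1 = x0 - f(x0)/f'(x0)

2-D (the box 1.8 system, rearranged the same way):
    fx(x0,y0)*(delta-x) + fy(x0,y0)*(delta-y) = -f(x0,y0)
    gx(x0,y0)*(delta-x) + gy(x0,y0)*(delta-y) = -g(x0,y0)

Same statement either way; the only difference is that in two variables the "division by f'" becomes the solution of a 2x2 linear system.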

As far as the difficulty that remains goes, really there are just a lot of digits flying about in the solutions to the iterations.

You take 1.8 with values plugged into the 3 places you can get values for (leaving just the deltas as unknowns, so each time it's x1-x0 rather than just plain x1 that's unknown -- which turns out not to be such an incredibly big deal on account of that logical equivalence nonsense I was blathering on about up above).

Where do you get the 3 values you can plug into each equation in the system?
Well f and g you get from the original equations, suitably rearranged to equal zero (just for good form, I think).
The partial derivatives you get by taking d/d(whaddever) of the respective expressions.

Ja. And then it's a little bit messy, is what it is. That's all. Just a bit messy. Nothing more to keep straight, nothing new to understand from here on.

Because there are just two equations, instead of doing Gaussian elimination on the system you could use the fact that gy = -1 (just a plain constant) to get a quick solution from equation 2, and then plug that into equation 1. One mess later you'll get the same numbers the textbook has, and you can then wallow into the filth with these new numbers: plug them into the partial derivative expressions, plug them into the original equations, set up the "Newton system" all over again, and expect to work with even messier little numbers next time round.
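
In code, that shortcut looks something like the rough Python sketch below (assuming the second equation really does have gy = -1; I've eliminated delta-y from equation 2 first, but either ordering gives the same numbers; the names are my own):

# Shortcut for the 2x2 Newton system when gy == -1:
#   fx*dx + fy*dy = -f     (equation 1)
#   gx*dx -  1*dy = -g     (equation 2)  =>  dy = gx*dx + g
# Substituting into equation 1:
#   fx*dx + fy*(gx*dx + g) = -f  =>  dx = -(f + fy*g) / (fx + fy*gx)
def shortcut_step(f_val, g_val, fx_val, fy_val, gx_val, x0, y0):
    dx = -(f_val + fy_val * g_val) / (fx_val + fy_val * gx_val)
    dy = gx_val * dx + g_val
    return x0 + dx, y0 + dy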

Um... OK one more thing I got stuck on was the closeness of (1, -1.7) to the true solution. That point is actually just a very good first approximation. It's not an exact solution, it's an estimate.
Re: Revision
April 26, 2011 12:32PM
OK. Let me read what you posted. In the meantime I posted another question regarding Exercise 5.1.11. Check it out.
Re: Revision
April 26, 2011 01:03PM
OK! I saw something like this the other night but I dismissed it, being tired. I just did a full working out of the whole thing since you have almost reaffirmed my suspicions:

In simple terms (everything evaluated at x0):
x1 = x0 - f(x0) / (df/dx)

So:
-f(x0) = (x1 - x0) * (df/dx)
-f(x0) = (delta-x) * (df/dx)

I guess now with a more complicated function you simply tack the additional partial derivatives on the end and bingo! For now it makes perfect sense. It just needs to sink in a bit.
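
That "tacking on" pattern, spelled out in the same loose notation:

one variable:     -f = fx * (delta-x)
two variables:    -f = fx * (delta-x) + fy * (delta-y)
three variables:  -f = fx * (delta-x) + fy * (delta-y) + fz * (delta-z)

with everything evaluated at the current approximation, and one such equation per function in the system (two variables need two equations, three need three, and so on).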
Re: Revision
April 26, 2011 01:17PM
Pretty much so... but don't try to "work them together", so to speak. Just see the similarity and you'll be right. It's just a matter of satisfying oneself that the system in box 1.8 really is also "Newton's method by other means". Logical equivalence is enough for that.

You do get there by a totally different path for the two variable version, but it matches point for point, so who cares about how one gets there "as long as it's the truth".

Yes, for a slightly more complicated one have a squizz at the last thing on the Taylor series in the appendix and you'll see things like fxx, fxy, fyx, fyy there...
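
For reference, that appendix entry amounts to something like this (same loose notation, everything evaluated at (x0, y0)):

f(x0 + delta-x, y0 + delta-y) ≈ f + fx*(delta-x) + fy*(delta-y)
    + (1/2) * [ fxx*(delta-x)^2 + 2*fxy*(delta-x)(delta-y) + fyy*(delta-y)^2 ]

where fxy = fyx whenever the mixed partials are continuous, which is why only one cross term shows up.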