Jan/Feb 2009

Posted by sheepapple 
Jan/Feb 2009
April 30, 2011 12:27PM
OK, I shot through Q1. It's obviously quite straightforward and mostly straight out of the textbook.

f(x, y) = cos^2(x) - y
g(x, y) = x^2 + y^2 - x - 2

Q2 has me a little stumped. I've calculated all the partial derivatives: df/dx, df/dy, dg/dx and dg/dy.

Once again I have no idea how to determine what the starting x and y values should be. I had this very same problem a few weeks ago and ended up assuming they would be given to us.

How the heck are we supposed to know what f(x, y) and g(x, y) look like? I can see f is a squashed cos graph which never goes below zero. I can assume g(x, y) is some kind of oval with a y radius of 2, but how can we figure out where to start?
avatar Re: Jan/Feb 2009
April 30, 2011 12:37PM
OK with iterative methods you can just about start with any value you like.

That overstates it, perhaps, but your starting values are fairly arbitrary. If the series converges from your starting point, even if you start quite far away you're in with a chance of the series walking you along until by magic you arrive at a good estimate of the root.

So then "give yourself your own starting values, according to your current whim".

Of course you can do better than a whim. You can sketch a rough graph and pick some nice values to try, by eye, for instance. This is just a "nice to have" though.

For the shape of the graph you'd plug several values of x into your calculator and make a table of values to plot from. Lots of work, but perhaps that's where the marks are in such a case. In other words, the intercepts, minima and maxima, inflection points etc. often aren't enough on their own.
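
If the calculator table gets tedious, the same idea can be scripted. Here's a rough sketch in Python (the grid range is just a guess on my part, not anything from the paper) that tabulates the f and g quoted above and picks out a point where both are small, which is good enough for a starting value:

import numpy as np

# the f and g quoted in the question above
f = lambda x, y: np.cos(x)**2 - y
g = lambda x, y: x**2 + y**2 - x - 2

# crude grid search: a point where both |f| and |g| are small is a decent starting value
xs = np.linspace(-2.0, 2.0, 81)
ys = np.linspace(-2.0, 2.0, 81)
best = min((abs(f(xv, yv)) + abs(g(xv, yv)), xv, yv) for xv in xs for yv in ys)
print("rough starting point:", best[1], best[2])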
Re: Jan/Feb 2009
April 30, 2011 12:39PM
OK cool. Just regarding picking starting points, it would probably be a bad idea to select ones where f' is zero. This could result in failure, right?
avatar Re: Jan/Feb 2009
April 30, 2011 12:45PM
Eee... ja, I suppose f' = 0 would do something nasty. So I suppose the choice is not utterly arbitrary.
Re: Jan/Feb 2009
May 01, 2011 09:09PM
So how much have you guys got through?

Has nobody correctly completed question 2b?

I've done question 3(a) and (b) - not bad if you've taken linear algebra before. You can cross-check your work here: http://www.math.odu.edu/~bogacki/cgi-bin/lat.cgi

3c-f need some clarity. Can someone suggest some further info on the answers below?
3c) Iterative methods home in on the solution with each successive iteration, whereas direct methods don't iterate: they solve systems of equations by reducing their matrix representations to row-echelon or reduced row-echelon form.
3d) Iterative methods work much better for sparse systems??
3e) Check the textbook. Row-echelon vs reduced row-echelon.
3f) Gauss-Jordan reduces to upper triangular. By substitution one can then quickly solve the entire system....??

Question 4a I got 0.1664424
4b) 6th degree?
4c) I tested for continuity and then 1st derivative equality of the cubic functions at the knot. All seems good, so it appears to be a cubic spline.

Question 5 was a little computationally tricky but I honestly haven't spent time on least squares. My answer was y = -68.125 + 449.625x - 474.15x^2

Question 6a - no clue at all
6b) 505 panels!??! Sounds insane.

Ed, Rotti, please could you guys supply some feedback on these answers.
avatar Re: Jan/Feb 2009
May 01, 2011 09:17PM
OK I see you must supply a graph.

In this case it would obviously be two graphs on one set of axes. The points of intersection are (approximately) where the solutions of the system of equations for Newton's method lie.

With graphs that are nonlinear near zero, you can end up in the kind of trap they show for (I think) the secant method (which is very similar), so Newton's method doesn't always converge. The choice of starting point is not at all arbitrary. I was talking rubbish, it turns out. Ideally you want a first estimate as close to your solution as possible. So the graph needs to be fairly neat near the point where f and g meet.

OK, the graphs... The square of cos. That's going to oscillate between y = 0 and y = 1 ... with half the period of cos, right? Where cos is zero, so is its square. For the other one maybe build a "table" (plotted straight onto the paper to save time), taking say y = sqrt(x - x^2 + 2), and just see what it does?

Where did I get the idea that starting points are arbitrary? I think it's because normally with Jacobi or Gauss-Seidel you begin by assuming your vector is [0 0 0 0 0]^T
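
Since the thread keeps circling this, here's what the Newton step for the system looks like as a minimal Python sketch. The Jacobian is just the four partial derivatives mentioned earlier, and the starting guess (1, 1) is purely illustrative, not something the paper gives:

import numpy as np

def F(v):
    x, y = v
    return np.array([np.cos(x)**2 - y,          # f(x, y)
                     x**2 + y**2 - x - 2.0])    # g(x, y)

def J(v):
    x, y = v
    # Jacobian of partial derivatives: df/dx, df/dy / dg/dx, dg/dy
    return np.array([[-2.0*np.cos(x)*np.sin(x), -1.0],
                     [2.0*x - 1.0,               2.0*y]])

v = np.array([1.0, 1.0])      # illustrative starting guess, not from the paper
for _ in range(12):
    v = v - np.linalg.solve(J(v), F(v))
print(v, F(v))

From (1, 1) it wandered around a bit before converging for me; trying a couple of different starting guesses is a quick way to see the "fairly arbitrary but not utterly arbitrary" point in action.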
Re: Jan/Feb 2009
May 01, 2011 09:26PM
OK, sure. Have you got any workings/answers to share per my previous posting? I need more info / cross-checking on my logic.
avatar Re: Jan/Feb 2009
May 01, 2011 09:34PM
Crossed lines there... I was reporting back on the earlier post.

OK, shooting from the hip as usual ...
3(a) Linear algebra. Reduce to solve for the identity matrix. The only tricky bit could be that "Gaussian" requirement, which doesn't go all the way through to leading 1's. ... Oh hang on, I'm thinking of another paper there. Here you could just use MAT103 and go all the way. No "Gaussian" specified. Phew.

So you'd have to reduce using multipliers as described in the book, and then back substitute?

Also it looks like it's begging to be pivoted, but then we'd have an inverse of some other A'?

3(b) Try to make det(A) = 0? Until I've done the reduction I'll have to hang fire there. Make the solution for A^-1 require a zero along the diagonal..

3(c) Ja, that looks about right. Maybe a bit of digging could expand it or something...

3(d) Yes, sparse matrix rings a bell.
I think the thing to consider would be effects on something like rounding error, too?

3(e) Yep. Reduced row echelon.

3(f) Advantage of Gaussian is that it takes (I think) on average 50% of the number of operations. Gauss-Jordan is expensive. Should be in that section of the book.
Advantage of G-J? ... Certainly nice when working by hand. Get to your 1's and there's no further back substitution. Just read off the answer. .. But that wouldn't do for an answer.

OK let me post and look a bit more closely at the next one.
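
If the online tool is down, a rough machine cross-check of the "reduce with multipliers, then back-substitute" routine might look like this in Python (the 3x3 system is made up for illustration, it's not the matrix from the paper, and there's no pivoting here):

import numpy as np

def gauss_solve(A, b):
    # forward elimination with multipliers (no pivoting), then back substitution
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# made-up 3x3 example, NOT the exam's matrix
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))        # expect [2, 3, -1]
print(np.linalg.solve(A, b))    # cross-check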
avatar Re: Jan/Feb 2009
May 01, 2011 09:46PM
4(b) I think you're right about the number. As for the why part, it would be because your polynomial at that point only involves 7 unknowns? So 7 points, and the theorem says there's a unique polynomial of degree 6 or less through any 6 + 1 = 7 points?

4(c) You also need matching second derivatives, but I'm pretty sure that's the right way, in general.
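
A quick sanity check of that "n + 1 points pin down a degree-n polynomial" idea, with made-up points (nothing from the paper):

import numpy as np

# 7 made-up points -> a unique interpolating polynomial of degree <= 6
xs = np.linspace(0.0, 3.0, 7)
ys = np.sin(xs)
coeffs = np.polyfit(xs, ys, deg=6)               # 7 coefficients, degree 6
print(np.allclose(np.polyval(coeffs, xs), ys))   # True: it passes through every point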
Re: Jan/Feb 2009
May 01, 2011 09:52PM
> 4(c) You also need matching second derivatives,
> but I'm pretty sure that's the right way, in
> general.


I'm not sure how true this is as regards the second derivative. Do you have a resource which confirms it with certainty? I can't find anything in that regard...
avatar Re: Jan/Feb 2009
May 01, 2011 09:55PM
I think in an emergency least squares could be reduced to the algorithm they give 'in grey' at the end of that section of the book. (Or am I forgetting some formulas that still need to be applied, as happens once you've struggled your way through a splines matrix?).

So make up a table of sums of powers of x. Make a matrix out of that with N in the upper left-hand corner and some great big number down in the bottom right corner (here it's what, 4th powers, for a quadratic fit?). Make a vector for the RHS out of the sums of y times those same powers of x: sum of Yi, sum of Xi*Yi, sum of Xi^2*Yi. And then it's just fairly computationally intense, as you say. I've tried to find a simpler solution (did APM113 and there one used the associated normal system, but that strikes me as just more work by hand, so I've hoped not to have to go there). Basically I suspect there's no easy way out. Maybe do a few of the calcs to show you know what you should be doing, and move on to another question.

If you never get enough time to come back to your calculations for the least squares, you've probably done enough to have a fighting chance of surviving. If you do come back, hopefully you can rescue yourself by finishing that job well.. I must stop being pessimistic like this, eh? Stupid mindset to be in just before exams.
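
Anyway, here's roughly what that sums-of-powers recipe looks like scripted (Python; the data points are made up, and I'm assuming the quadratic model y = a0 + a1*x + a2*x^2 that the earlier answer used):

import numpy as np

def quadratic_least_squares(x, y):
    # Normal equations for y ~ a0 + a1*x + a2*x^2:
    # the matrix holds sums of powers of x (N top-left, sum of x^4 bottom-right),
    # the RHS holds sum(y), sum(x*y), sum(x^2 * y).
    S = [np.sum(x**k) for k in range(5)]
    A = np.array([[S[0], S[1], S[2]],
                  [S[1], S[2], S[3]],
                  [S[2], S[3], S[4]]])
    rhs = np.array([np.sum(y), np.sum(x*y), np.sum(x**2 * y)])
    return np.linalg.solve(A, rhs)      # [a0, a1, a2]

# made-up data, just to exercise the recipe
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([1.1, 1.9, 2.4, 2.3, 1.6])
print(quadratic_least_squares(x, y))
print(np.polyfit(x, y, 2)[::-1])        # cross-check (polyfit lists the highest power first)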
avatar Re: Jan/Feb 2009
May 01, 2011 10:02PM
The section on cubic splines (p 170, equation xxx.d) gives four conditions a spline must meet at the knots.

This piece of spline has a function g_i .... o ..(a knot)... g_(i+1) is next door.

The x coordinate of their knot is x_(i+1) (g_i runs from x_i to x_(i+1)).

The equations given on p170 mostly have the form g_i(x_(i+1)) = g_(i+1)(x_(i+1)).

They must match in position, in "slope" and in curvature, as I read the equations.
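
And for 4(c), the same three conditions can be checked numerically at a knot. A minimal sketch (Python; the two cubic pieces are invented purely to show the test, they're not the exam's spline):

import numpy as np

# two invented cubic pieces g_i and g_{i+1} meeting at the knot x = 1
gi  = np.poly1d([ 1.0, -2.0,  1.0, 0.0])    # g_i(x)     =  x^3 - 2x^2 + x
gi1 = np.poly1d([-2.0,  7.0, -8.0, 3.0])    # g_{i+1}(x) = -2x^3 + 7x^2 - 8x + 3

knot = 1.0
for order in range(3):                       # 0: position, 1: slope, 2: curvature
    left  = np.polyder(gi, order)(knot)
    right = np.polyder(gi1, order)(knot)
    print(order, left, right, np.isclose(left, right))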
avatar Re: Jan/Feb 2009
May 01, 2011 10:14PM
Least squares error?

I'm hitting a complete blank on this one. Better go check.

And then? Number 6. TS would be as per the derivatives section that has rusted solid in my head...

It looks a bit like a backward series for some reason I can't adequately account for.

As for the error (6.b) perhaps you made a mistake differentiating. Not the friendliest derivative in town there. Product rule, power rule .. chain rule? No, at least you don't seem to initially have any chain rules to contend with, but a product rule is sure to expand that to a clumsy string of symbols, eh?

I'd better go and do a few properly then. I'll start with a brave attempt at the impossible one, and work backwards from there (since you seem to have the less evil ones pretty well covered).
avatar Re: Jan/Feb 2009
May 01, 2011 10:53PM
Just so's you don't feel lonely being bewildered by 6.a, I'll just confirm that I'm joining that club.

I think the answer begins somewhere around the very bottom of p259, but one can never tell with these things.

If my assumption is correct, you're taking the Taylor series for f(x+h) and solving it for f''(x), in a similar fashion to the way they solve for f'(x) there. A straightforward solution, though, is going to have an f'(x) term in it, because you have:

f(x+h) = f(x) + f'(x)h + f''(x)h^2/2! + f'''(xi)h^3/3!, implying that:
f''(x) = (2/h^2)[f(x+h) - all those other terms, including f']

Now if somehow that produced the desired expression in the terms they give, one could simply charge off and proclaim the f''' term the error (which is what is asked), but it doesn't work out.

The 2f(x+h) term looks suspiciously like some other equation got added or subtracted there... So perhaps we're adding up two TS like they do for the central difference approximation on p260? That would probably mean that if the f' term is being cancelled out, the f''' term is too, and that the error is the f^(4) term. Then I think you're looking at O(h^(4-2)) = O(h^2) error? All just speculation until someone delivers the required equation/workings.

I think my strategy for a similar question in October was to stare coldly at it, and try to make it chicken out. Didn't work, I'm afraid.
avatar Re: Jan/Feb 2009
May 02, 2011 12:59PM
Well at least I've managed to find a match for the expression here:

http://en.wikipedia.org/wiki/Finite_difference

(About 1/3 of the way down, under "Higher Differences".) I seem to recall imagining it may somehow be a backward difference? Well, it's the second-order forward difference.

That should be enough to get the error term from? It'll be a forward Taylor Series, solved for f'' (I think), or a system of these, perhaps.
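
A quick numerical poke at the expression seems to back this up. The test function below (exp) is just an arbitrary choice of mine, not from the paper; the point is that (f(x) - 2f(x+h) + f(x+2h))/h^2 behaves like f''(x) with an error that shrinks roughly in proportion to h:

import numpy as np

f   = np.exp        # arbitrary test function (my choice, not the paper's)
fpp = np.exp        # its exact second derivative
x = 1.0

for h in [0.1, 0.05, 0.025, 0.0125]:
    approx = (f(x) - 2.0*f(x + h) + f(x + 2.0*h)) / h**2   # the 6(a) expression
    err = abs(approx - fpp(x))
    print(h, err, err / h)    # err/h stays roughly constant, so the error behaves like O(h) here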
avatar Re: Jan/Feb 2009
May 03, 2011 06:30PM
I think 4b would be accurate to only the second difference?

(So it would fall under my separate "silly mistake" post, so I'll leave it)

If you were dead in the middle, would you have 6 degrees? Not sure.
Re: Jan/Feb 2009
May 04, 2011 10:04AM
Too much rambling :P

I've mailed the lecturer - let's see what she says.
avatar Re: Jan/Feb 2009
May 04, 2011 04:02PM
There might be something wrong with the "maximise f'' first" approach to these error expressions for integrals. You mention up there that for 6b you had 505 panels. Now I've just done the corresponding question (on error) in the Jun 2010 paper and I end up with 775 panels on [1,2] ...

That just has to be wrong. If you used 775 panels of just plain "fevals" you'd do no worse for much less work, I reckon. And the function in this case was x ln x, which is not the roughest on Earth...

Unless I'm making a calculation error. One always hopes that it's just a calculation error...

But hang on a moment. Look at the degree of accuracy that's being asked for: 10^-5. That's a lot of digits, yes? So then perhaps one must expect to do a lot of work to get that kind of accuracy? Maybe we're wrong to be alarmed by the sheer number of panels required? If you were looking for a lower accuracy, yes, you'd use few panels; but as soon as you look for high accuracy you need more panels -- which is why you're using a computer to do this job in the first place.

I think that until I hear of some fundamental principle that makes this method bad, I'm going to stick to my guns with it. It makes sense to me. I can't see another way that makes better sense. If the only problem is seemingly large numbers of panels, I think I'll only be alarmed where low accuracy is required.
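
To put a number on that intuition: from the composite trapezoidal bound |E| <= (b-a) h^2 max|f''| / 12, the required panel count grows like 1/sqrt(tolerance). The interval and the bound M below are placeholders, not values from either paper:

import math

a, b = 0.0, 1.0     # placeholder interval
M = 10.0            # placeholder bound on |f''| over [a, b]

for tol in [1e-2, 1e-3, 1e-4, 1e-5]:
    # |E| <= (b - a) * h^2 * M / 12 <= tol, with h = (b - a) / n
    n = math.ceil(math.sqrt((b - a)**3 * M / (12.0 * tol)))
    print(tol, n)   # n grows like 1/sqrt(tol): ten times the accuracy costs about 3.2x the panels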
Re: Jan/Feb 2009
May 05, 2011 09:04AM
I am SICK of UNISA now. I phone Dr Rapoo's office all the time and she is NEVER there. Her secretary now tells me that UNISA is an ODL institute and we must figure out our own problems. This really pisses me off! I've escalated the matter to Prof Labuschagne and his acting Director in his absence.
Re: Jan/Feb 2009
May 05, 2011 09:10AM
Regarding 6b, here's some working from my end:

f''(x) = 2 cos(x) - 2x sin(x) - 2 sin(x) - 2x cos(x)

Using the error term formula:

-((b-a)/12) * h^2 * f''(E) < 0.0001

I get 504.991489. Please can you show me your workings - for now don't stress with a lengthy explanation, I end up getting confused :P I prefer to figure it out from the workings myself.

Cheers!
avatar Re: Jan/Feb 2009
May 05, 2011 12:29PM
Hmm.. OK look the workings are in some mess of papers on the desk right now. If they're not there they're already in the dustbin. Main thing is we're getting answers of the same order of magnitude here. Hundreds over halfway to a thousand.

You're missing out how you maximise f'' there. I think that might be where we'd differ in the details.

(Yes, I'm also missing out my choice of x value to maximise f'' ... because right now it's "lost", as I said. If there were an x^2 term one could just ignore sin and cos. If x > 1 is possible, an ordinary x term will also be dominant... But the big thing is your workings look perfectly good to me, and to get your E to the required precision you're needing lots of panels.)

If you require great precision you're going to have to slice up the interval finely. I don't think big values for n are necessarily cause for alarm in themselves.

Will look for this or redo it and try to remember to report back.
Re: Jan/Feb 2009
May 05, 2011 12:43PM
OK, so it's just E that I'm messing up. Will run through the calcs again.
avatar Re: Jan/Feb 2009
May 08, 2011 08:07PM
Don't know if this is a bit late, but it looks like the 2x cos x term should be x^2 cos x?

And then when deciding how to maximise, observe that sin and cos are bounded between -1 and 1, so x^2 is the dominant term (or even get fancy and say the expression is O(x^2)), so we just plug in the biggest x value, which is pi.

2 cos(pi) - 2 pi sin(pi) + 2(pi) sin(pi) + 2pi cos pi = -2 -0 + 0 - 2pi = 10.82497783

Feed that into the solution for h.

How does that look? Are we allowed to work exactly like that, I wonder? Presumably our computing system doesn't know niceties like sin(pi) = 0, exactly? No, I suppose it should. Why not?
Re: Jan/Feb 2009
May 08, 2011 08:38PM
Ja, I made a complete hash of the differentiation. The answer should be 2 cos(x) - 2x sin(x) - 2x sin(x) - x^2 cos(x),

which simplifies to 2 cos(x) - 4x sin(x) - x^2 cos(x).

I've double-checked with MATLAB.

I used pi (whether this is correct is anyone's guess) for xi, and my new answer is 451 panels.

[Edit] E-mails from the lecturer agree with 451. :)
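
For anyone redoing this, here's the whole calculation scripted (Python with sympy/numpy). It assumes, going by the numbers in this thread, that the integrand is x^2 cos(x) on [0, pi], the rule is the composite trapezoidal rule, and the tolerance is 0.0001; none of that is confirmed from the paper itself:

import numpy as np
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.cos(x)                    # assumed integrand
fpp = sp.diff(f, x, 2)                  # should equal 2*cos(x) - 4*x*sin(x) - x**2*cos(x)
print(fpp)

a, b, tol = 0.0, float(sp.pi), 1e-4     # assumed interval [0, pi] and tolerance
fpp_num = sp.lambdify(x, fpp, 'numpy')
M = np.max(np.abs(fpp_num(np.linspace(a, b, 2001))))   # max |f''| on [a, b], about 7.87 at x = pi

# composite trapezoidal error bound: |E| <= (b - a) * h^2 * M / 12, with h = (b - a) / n
n = int(np.ceil(np.sqrt((b - a)**3 * M / (12.0 * tol))))
print(M, n)                             # comes out at n = 451 under these assumptions

Under those assumptions it lands on 451, matching the lecturer's figure; swap in the real interval or rule if the paper says otherwise.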
avatar Re: Jan/Feb 2009
May 08, 2011 08:59PM
I'm messing up somewhere. I get 186.67, so 188 to make it even. I'd better rerun my "plugging in" to the derivative a bit more carefully. ... [edit] I think I see it already: -2 - (pi^2)(-1) ... and I also imagined I'd inverted pi/12 on the other side. I didn't forget. I actually remembered - but didn't do it. ...

But now I get n = 57.3.....

So let me set out my calculations in slightly greater detail.

h = sqrt(7.8696 * 0.0001 * 12 / pi) = 0.05483

n = pi/h = 57.3......

Any guesses where the wrong turn is?