Richardson Extrapolation

Posted by sheepapple 
Richardson Extrapolation
March 26, 2011 11:54AM
OK, so this Richardson Extrapolation is yet another recipe. I can't imagine they expect us to learn 12 formulas on pages 271/272. It's rather ridiculous. And derivation from first principles is largely undocumented in the textbook.

Anyways, I'm using the first formula at the top of page 272 to do Question 2 of assignment 3. How do you find O(h²)?
Re: Richardson Extrapolation
March 26, 2011 01:49PM
Vague memory says you go back to your Taylor series, substitute, and after the cancellation (and dividing through by h) your error term comes out as Big-Oh of the lowest power of h that survives.

So it's one power less than the h attached to the lowest derivative in the part of the expression that ends up as the error term.
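Written out for the plain forward difference (this is from memory, so check it against the textbook):

    f(x+h) = f(x) + h f'(x) + (h²/2) f''(ξ)
    D(h)f(x) = [f(x+h) - f(x)] / h = f'(x) + (h/2) f''(ξ)

The f(x) terms cancel, dividing by h knocks the remainder down one power, and the error term is O(h). With the central difference the f'' terms cancel too, and you come out at O(h²).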
Re: Richardson Extrapolation
March 27, 2011 02:36PM
Hmm. So just take the Taylor series and use that as Big-Oh?

I've been drawing up the Richardson extrapolation table. It's f#cking tedious.

All this computationally intensive stuff done by hand is driving me nuts. I keep writing Matlab programs to do it, but in the exam we won't have that luxury. I'm worried.
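For what it's worth, this is roughly the sort of thing I keep coding up (a rough sketch only; f, x and h0 are made-up stand-ins, not the assignment's numbers):

    f  = @(x) exp(x);     % made-up example function
    x  = 1;               % point where we want f'(x)
    h0 = 0.1;             % starting step size
    n  = 4;               % rows in the table
    R  = zeros(n);
    for i = 1:n
        h = h0 / 2^(i-1);
        R(i,1) = (f(x+h) - f(x-h)) / (2*h);   % central difference, O(h^2)
    end
    for j = 2:n
        for i = j:n
            % each new column cancels the next even power of h
            R(i,j) = R(i,j-1) + (R(i,j-1) - R(i-1,j-1)) / (4^(j-1) - 1);
        end
    end
    disp(R)   % R(n,n) is the most refined estimate of f'(x)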
Re: Richardson Extrapolation
March 27, 2011 03:05PM
The term that will have the biggest effect on the value of the infinite series that continues past the point of certainty (the beginning of the unknown) is the one with the lowest power of h. Your higher derivatives keep getting divided down by factorials to vanishing point. (I mean, if you're dividing by 15!, what kind of number is that? 20! is millions of millions of millions... or millionths, here.)

h is some proper fraction, so h¹⁵ is an even smaller fraction (1¹⁵ over {something}). The h's vanish, too.
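A quick Matlab check of the sizes involved (h = 0.5 is just a made-up example):

    h = 0.5;
    h^15 / factorial(15)   % ans is about 2.3e-17 -- utterly negligible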

Ah! OK, so our Big-Oh term is with respect to a parameter, not the variable. One adjusts h, tunes it; it's all you can tune.

Obviously I need to do some reading here...
Re: Richardson Extrapolation
March 27, 2011 08:23PM
I don't think there's any getting away from drawing up a divided-differences table as shown on page 270. You keep recursing, like you do with divided differences, until the error up to n digits disappears. I will phone the lecturer in the morning.
Re: Richardson Extrapolation
March 27, 2011 08:56PM
Something else I noticed is that the book rounds off rather than truncating.
Re: Richardson Extrapolation
March 27, 2011 08:56PM
Yes. I need to do a huge amount of revision. In the broadest possible terms, you either use an approximation of f and take the exact derivative of that, or you use the exact f and take an approximate derivative. (Somewhere or other this is handy, but now I see it's not, here?)

If you're using an approximating polynomial with any method, then in principle e = f - p.

If you're using an approximation "further up", you need to talk in terms of E = True - Approx.
... or This - Prev if you're iterating. Maybe you need to be thinking in terms of relative error?
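In Matlab terms, just to pin the definitions down (made-up numbers):

    true_val = exp(1);                 % pretend this is the True value
    approx   = 2.7228;                 % current approximation
    E        = true_val - approx;      % E = True - Approx
    prev     = 2.70;                   % previous iterate
    E_iter   = approx - prev;          % This - Prev, if you're iterating
    rel_err  = abs(E) / abs(true_val)  % relative error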
Re: Richardson Extrapolation
April 02, 2011 03:13PM
I think I'm on the right track here with that "TRUE - APPR" basic principles version of error.

Look at this page: www.mathcs.emory.edu/ccs/ccs215/integral/node7.html

Your estimate of f(x) is a truncated Taylor series. The simplest case assumes it has just two derivatives. Now if you do D(h)f(x) simply by substituting into a first-principles "limit definition" (actually another approximation, eh?), your f(x) terms cancel out. So do your h's (easy to see on the page in question). You're left with f'(x) + ½f''(ξ)h.

Now if you take that D(h)f(x) and remember that it's your current approximation, you can apply the basic-principles definition of error (as above). Note that the f'(x) term in D(h)f(x) stands for the exact value of f'(x).

OK so True - Approx = f'(x) - D(h)f(x)

right?

Now see that your f'(x) terms subtract away, so you're just left with minus the ½f''(ξ)h term.

Now the only thing that can vary in that expression is the h, right? So you can do a Big-Oh on it and simplify it to hell and gone. Forget all about the constant ½f''(ξ), and what you have left is O(h).
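You can watch that O(h) behave in Matlab (my own sanity check, with sin as a convenient stand-in for f): halve h and the error roughly halves.

    f = @(x) sin(x);
    x = 1;
    for h = [0.1 0.05 0.025]
        D   = (f(x+h) - f(x)) / h;    % forward difference D(h)f(x)
        err = cos(x) - D;             % True - Approx = f'(x) - D(h)f(x)
        fprintf('h = %5.3f   error = %.6f\n', h, err)
    end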

I've just pointed out the things I felt I needed to notice when I read that page in order to start getting this properly. Read the page to fill in the gaps I've left here. You'll see it's slightly more involved when you take your Taylor series approximation to next-level-sh1t, but the same basic skeleton underlies it.

In simplest terms error is just True Value - Approximate Value.
In the next-simplest terms (more detailed version), it turns out to be a fairly simple Big-Oh order of error if you "ignore" all the Greek constants.

And of course in less simple terms it's the result of subtracting, canceling, and ending up with a whole lot of Greek.

No, I'm not on the Richardson Extrapolation part yet, so this ain't much of an answer.
Re: Richardson Extrapolation
April 03, 2011 11:13AM
I basically did my Richardson stuff exactly like numericalmethodsguy.

In a nutshell, I picked the formula from pages 271/272 based on the derivative that was needed. The assignment wanted the second derivative, so I took the last formula on page 271. First use h = 0.1 as instructed. Plug the numbers in (ignore the O(h) term for now). Then do another calculation with h = h/2.

Then for O(h) I used equation 5.15 on page 268.
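In Matlab it comes down to something like this (a sketch only: f and x are stand-ins for the assignment's actual data, and the divide-by-3 assumes the formula's error starts at O(h²)):

    f  = @(x) x .* exp(x);                    % made-up stand-in function
    x  = 2;
    h  = 0.1;
    D1 = (f(x+h) - 2*f(x) + f(x-h)) / h^2;    % second-derivative formula, step h
    h  = h / 2;
    D2 = (f(x+h) - 2*f(x) + f(x-h)) / h^2;    % same formula, step h/2
    better = D2 + (D2 - D1) / 3               % one Richardson step: O(h^2) -> O(h^4)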

My final answer was 1.79781

Let me know if I have the wrong end of the stick.
Re: Richardson Extrapolation
April 03, 2011 02:18PM
Well, for Big-Oh I don't think you should be using equations. The idea of Big-Oh is that you simplify your error term down to its order. So if your h was 0.1, then your error for an ordinary forward-difference approximation is O(0.1). Nice and simple. (And if you were using the central difference, your error is simply O(h²) = O(0.1²) = O(0.01).) Obviously, if your method has given you even better accuracy, you simply express that in a similar manner, plugging in your actual h value.

However, it's likeliest that they wanted actual error, and not Big-Oh, isn't it?

Off at a tangent: a good estimate for f doesn't give you a good estimate for f'. If your polynomial stays close to f but oscillates through its line, so to speak, the derivatives can vary quite widely even with a close fit between f and p. Quite a lot of the online resources show a graph (they've all copied it off one another) that makes the point more tersely than words can. Just for interest's sake, I mention it.