A natural question to ask is how many iterations of the fixed-point iteration method are necessary to achieve a certain accuracy. We list the results of fixed-point iteration as applied to $g(x) = \ln(3x + 1)$, using the following piece of code written in Python:
import math

x_0 = 1
iterations = range(0, 40)
for i in iterations:
    x_1 = math.log(3*x_0 + 1)
    print(x_1)
    x_0 = x_1
Between the 15th and 21st iterations, the printed values all agree to four decimal places, settling at 1.9038….
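The relevant slice of the output is easy to regenerate; a minimal sketch using the same recurrence as the code above, keeping only iterations 15 through 21 (the variable names here are my own):

```python
import math

# Regenerate the iterates x_{n+1} = ln(3*x_n + 1) and keep numbers 15-21.
x = 1.0
values = []
for n in range(1, 22):          # n counts completed iterations
    x = math.log(3*x + 1)
    if 15 <= n <= 21:
        values.append((n, x))

for n, x in values:
    print(n, round(x, 6))
```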
It seems as if our solution is accurate to four decimal digits after 14 iterations. How many iterations are necessary so that our solution is accurate to six decimal digits?
We let $\varepsilon$ denote the wanted accuracy. In this specific case we may let $\varepsilon = 10^{-6}$, because subtracting two numbers with six decimal digits, where the first five decimals are equal, gives all zeros except for the last digit, which ranges from 1 to 9, e.g., $0.123456 - 0.123451 = 0.000005$. Since we want the same result whether we subtract the larger or the smaller value, we take the absolute value of the difference;

$$|x_n - p| \le \varepsilon, \qquad (1)$$

where $n$ denotes the iteration, and $p$ denotes the fixed point of $g$.
From the earlier posting on Fixed-Point Iteration, we know that

$$|x_n - p| \le k^n \max\{x_0 - a,\; b - x_0\},$$

where $k \in (0, 1)$ is a bound on $|g'|$ over an interval $[a, b]$ containing the iterates.
We now do a bit of algebra. Writing $M = \max\{x_0 - a,\; b - x_0\}$ and demanding that the error bound itself be at most $\varepsilon$ gives

$$k^n M \le \varepsilon.$$

Substituting the last result into (1), this is a sufficient condition for the wanted accuracy; taking logarithms yields

$$n \ln k + \ln M \le \ln \varepsilon.$$

Solving for $n$ gives

$$n \ln k \le \ln \varepsilon - \ln M = \ln\!\left(\frac{\varepsilon}{M}\right).$$

Multiplying by negative one turns the inequality around, such that

$$-n \ln k \ge \ln\!\left(\frac{M}{\varepsilon}\right).$$

Our final result is obtained by rewriting the last inequality as

$$n \ge \frac{\ln(M/\varepsilon)}{\ln(1/k)} =: N.$$
$N$ is now the maximum number of iterations required such that $|x_n - p| \le \varepsilon$. It turns out to be a somewhat pessimistic estimate.
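The final inequality can be wrapped in a small helper; a minimal sketch, where the function name `max_iterations` and its parameters are names of my choosing:

```python
import math

def max_iterations(eps, k, m):
    """Upper bound N on the iteration count so that k**n * m <= eps.

    eps: desired accuracy, k: contraction constant in (0, 1),
    m: the quantity max{x_0 - a, b - x_0} from the error bound.
    """
    return math.ceil(math.log(m / eps) / math.log(1 / k))
```

For instance, `max_iterations(1e-6, 3/4, 1)` evaluates the bound for the constants this problem turns out to have.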
We continue working with our initial problem, $g(x) = \ln(3x + 1)$ with $x_0 = 1$ on the interval $[1, 2]$. Note that $g$ is increasing with $g(1) = \ln 4 \approx 1.386$ and $g(2) = \ln 7 \approx 1.946$, so $g$ maps $[1, 2]$ into itself and the iterates stay in this interval.
By the Mean Value Theorem we know that for any $x, y \in [1, 2]$ there exists a number $\xi$ between $x$ and $y$ such that

$$g(x) - g(y) = g'(\xi)(x - y).$$
Since $g(x) = \ln(3x + 1)$, we have that $g'(x) = \frac{3}{3x + 1}$ and $g'(x) > 0$ on $[1, 2]$, and so

$$|g(x) - g(y)| = |g'(\xi)|\,|x - y| \le \left(\max_{\xi \in [1, 2]} g'(\xi)\right)|x - y|.$$

Since $g'$ is decreasing and $x \ge 1$, we have that $g'(x) \le g'(1) = \frac{3}{4}$ for all $x \in [1, 2]$. We therefore conclude that $k = \frac{3}{4}$.
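As a quick numerical sanity check (not part of the derivation), one can scan $g'(x) = 3/(3x + 1)$ over a grid on $[1, 2]$ and confirm the maximum sits at the left endpoint:

```python
# g'(x) = 3/(3x + 1) is positive and decreasing on [1, 2],
# so its maximum is attained at x = 1.
def g_prime(x):
    return 3 / (3*x + 1)

xs = [1 + i/1000 for i in range(1001)]   # grid over [1, 2]
k = max(g_prime(x) for x in xs)
print(k)                                  # 0.75, attained at x = 1
```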
With $M = \max\{x_0 - 1,\; 2 - x_0\} = 1$ and $\varepsilon = 10^{-6}$, calculating we get that $N = \ln(10^6)/\ln(4/3) \approx 48.02$, so at most $49$ iterations are necessary. This is a serious overestimate, as our code gives us that the solution is accurate to six decimal digits after about 18 iterations.
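The overestimate is easy to observe directly; a rough empirical check, using agreement of successive iterates to within $10^{-6}$ as the stopping criterion (my assumption here: this is a practical proxy for, not the same as, $|x_n - p| \le \varepsilon$):

```python
import math

# Count iterations of x_{n+1} = ln(3*x_n + 1) until successive
# iterates agree to within eps = 1e-6 (the six-decimal criterion).
eps = 1e-6
x_old, n = 1.0, 0
while True:
    x_new = math.log(3*x_old + 1)
    n += 1
    if abs(x_new - x_old) <= eps:
        break
    x_old = x_new
print(n)   # far below the worst-case bound of 49
```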