

Error Estimate for the Newton-Raphson Method

In this section we estimate how the error varies from one iteration to the next. This gives us an idea of the speed of convergence of the method. Using the definition of absolute error in (1.6), we have the following relation between the exact value of the root $r$, the $n^{th}$ iterate $x_{n}$ and the error after $n$ iterations, $\epsilon_{n}$:

\begin{displaymath}
r = x_{n} + \epsilon_{n}, \qquad x_{n} = r - \epsilon_{n}.
\end{displaymath}

Similarly, after $n+1$ iterations we have

\begin{displaymath}
r = x_{n+1} + \epsilon_{n+1}, \qquad x_{n+1} = r - \epsilon_{n+1}.
\end{displaymath}

Substituting these expressions into the Newton-Raphson method (1.11) gives

\begin{displaymath}
x_{n+1} = r - \epsilon_{n+1} = r - \epsilon_{n} - {F(r -
\epsilon_{n})\over F^{\prime}(r - \epsilon_{n})}.
\end{displaymath}

This may be rearranged to give
\begin{displaymath}
\epsilon_{n+1} = \epsilon_{n} + {F(r - \epsilon_{n})\over
F^{\prime}(r - \epsilon_{n})}.
\end{displaymath} (1.12)
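As a quick sanity check (not part of the original notes), relation (1.12) can be verified numerically. The function $F(x) = x^{2} - 2$, with root $r = \sqrt{2}$, is an assumed example:

```python
import math

# Assumed example (not from the notes): check relation (1.12),
# eps_{n+1} = eps_n + F(r - eps_n)/F'(r - eps_n), for F(x) = x**2 - 2.
F = lambda x: x**2 - 2.0
dF = lambda x: 2.0 * x

r = math.sqrt(2.0)
x_n = 1.3                                    # some iterate, so eps_n = r - x_n
eps_n = r - x_n
x_np1 = x_n - F(x_n) / dF(x_n)               # one Newton-Raphson step (1.11)
eps_np1 = r - x_np1                          # error measured directly
rhs = eps_n + F(r - eps_n) / dF(r - eps_n)   # right-hand side of (1.12)
print(eps_np1, rhs)                          # the two agree to rounding
```

Since $r - \epsilon_{n} = x_{n}$, the right-hand side is just $\epsilon_{n} + F(x_{n})/F^{\prime}(x_{n})$, which is exactly the error of the next iterate.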

To proceed, we assume that the error at each step is small. If $\epsilon_{n}$ is small, we may expand $F(r - \epsilon_{n})$ and $F^{\prime}(r - \epsilon_{n})$ in Taylor series about $r$. The trick is to keep the first two non-zero terms. Thus,

\begin{displaymath}
F(r - \epsilon_{n}) = F(r) - \epsilon_{n}F^{\prime}(r) +
{\epsilon_{n}^{2}\over 2}F^{\prime\prime}(r) + \cdots.
\end{displaymath}

The first term on the right-hand side is zero since $r$ is a root. Similarly, we have

\begin{displaymath}
F^{\prime}(r - \epsilon_{n}) = F^{\prime}(r) -
\epsilon_{n}F^{\prime\prime}(r) + {\epsilon_{n}^{2}\over
2}F^{\prime\prime\prime}(r) + \cdots.
\end{displaymath}

Here we need only the first two terms, since $F^{\prime}(r)$ is assumed to be non-zero (the root is simple). Hence, substituting into (1.12) we obtain

\begin{eqnarray*}
\epsilon_{n+1} & = & \epsilon_{n} + {-\epsilon_{n}F^{\prime}(r) +
{\epsilon_{n}^{2}\over 2}F^{\prime\prime}(r) + \cdots \over
F^{\prime}(r) - \epsilon_{n}F^{\prime\prime}(r) + \cdots} \\
& = & \epsilon_{n} - \epsilon_{n}\left(1 -
{\epsilon_{n}F^{\prime\prime}(r)\over 2F^{\prime}(r)} +
\cdots\right)\left(1 - {\epsilon_{n}F^{\prime\prime}(r)\over
F^{\prime}(r)} + \cdots\right)^{-1} \\
& = & - {\epsilon_{n}^{2}F^{\prime\prime}(r)\over 2
F^{\prime}(r)} + \cdots
\end{eqnarray*}



To obtain the last line we expand the denominator using the binomial expansion and then neglect all terms that have a higher power of $\epsilon_{n}$ than the leading term; that is, we neglect $\epsilon_{n}^{3}$ and all higher powers. Thus,
\begin{displaymath}
\epsilon_{n+1} \approx - {\epsilon_{n}^{2}F^{\prime\prime}(r)\over 2
F^{\prime}(r)}.
\end{displaymath} (1.13)
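The predicted ratio $\epsilon_{n+1}/\epsilon_{n}^{2} \approx -F^{\prime\prime}(r)/2F^{\prime}(r)$ can be checked numerically. This sketch (an assumed example, not from the notes) again uses $F(x) = x^{2} - 2$, for which the predicted constant is $-1/2\sqrt{2} \approx -0.354$:

```python
import math

# Assumed example: numerically check the error relation (1.13)
# for F(x) = x**2 - 2, whose root is r = sqrt(2).
F = lambda x: x**2 - 2.0
dF = lambda x: 2.0 * x
d2F = lambda x: 2.0

r = math.sqrt(2.0)
C = -d2F(r) / (2.0 * dF(r))    # predicted ratio eps_{n+1}/eps_n^2

x = 1.0                        # starting guess
for n in range(4):
    eps = r - x                # error after n iterations
    x_next = x - F(x) / dF(x)  # Newton-Raphson step (1.11)
    eps_next = r - x_next
    print(n, eps_next / eps**2, C)
    x = x_next
```

The measured ratio approaches the predicted constant $C$ as the error becomes small, as (1.13) requires.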

The error after $n+1$ iterations is proportional to the square of the error after $n$ iterations. Once the relationship between $\epsilon_{n+1}$ and $\epsilon_{n}$ is known, the order of the iterative scheme (which is basically the speed of convergence) is the power of $\epsilon_{n}$ that appears. Thus, Newton-Raphson is a second order scheme and we have fast convergence.

To illustrate the speed of convergence of the Newton-Raphson method, assume that the error at a particular iteration is $10^{-2}$. After the next step the error is of order $10^{-4}$, and after one further iteration it is of order $10^{-8}$ (assuming the constant of proportionality in (1.13) is of order unity). Thus, the error reduces extremely quickly.
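This squaring of the error can be seen directly. The following sketch (an assumed example, not from the notes) starts Newton-Raphson a distance $10^{-2}$ from the root of $F(x) = x^{2} - 2$, matching the figure quoted in the text:

```python
import math

# Assumed example: start a distance 1e-2 from the root of
# F(x) = x**2 - 2 and watch the error roughly square at each step.
F = lambda x: x**2 - 2.0
dF = lambda x: 2.0 * x

r = math.sqrt(2.0)
x = r + 1e-2           # initial error of 10**-2, as in the text
errors = []
for _ in range(3):
    x = x - F(x) / dF(x)       # Newton-Raphson step (1.11)
    errors.append(abs(r - x))
print(errors)          # errors of order 1e-5, 1e-10, then rounding noise
```

The exponent of the error roughly doubles each iteration until the limit of machine precision is reached.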


Prof. Alan Hood
2000-02-01