
Newton's method for non-linear equations

We want to find the solution $ x$ of the set of non-linear equations:

$\displaystyle r(x)= \left[ \begin{array}{c} r_1(x) \\ \vdots \\ r_n(x) \end{array} \right] = 0$ (13.31)

The algorithm is the following:
  1. Choose $ x_0$
  2. Calculate a solution $ \delta_k$ to the Newton equation:

    $\displaystyle J(x_k) \delta_k = - r(x_k)$ (13.32)

  3. Set $ x_{k+1}=x_k+\delta_k$, increment $ k$, and return to step 2 until convergence.
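The three steps above can be sketched as follows. This is a minimal illustration, not code from the text: the function names (`newton_system`, `r`, `J`), the example system, the tolerance, and the iteration cap are all assumptions chosen for the demonstration.

```python
import numpy as np

def newton_system(r, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for the non-linear system r(x) = 0.

    r: callable returning the residual vector r(x_k).
    J: callable returning the Jacobian matrix J(x_k).
    """
    x = np.asarray(x0, dtype=float)  # step 1: choose x_0
    for _ in range(max_iter):
        rk = r(x)
        if np.linalg.norm(rk) < tol:
            break
        # Step 2: solve the Newton equation J(x_k) delta_k = -r(x_k)
        delta = np.linalg.solve(J(x), -rk)
        # Step 3: x_{k+1} = x_k + delta_k
        x = x + delta
    return x

# Hypothetical example system: x0^2 + x1^2 = 4 and x0 = x1,
# whose positive root is (sqrt(2), sqrt(2)).
def r(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

root = newton_system(r, J, x0=[1.0, 1.0])
```

Note that the linear system in step 2 is solved directly rather than forming $ J^{-1}$ explicitly, which is both cheaper and numerically safer.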
We use a linear model to derive the Newton step (rather than a quadratic model as in unconstrained optimization) because the linear model normally has a solution and yields an algorithm with fast convergence properties: Newton's method has superlinear convergence when the Jacobian $ J$ is a continuous function and local quadratic convergence when $ J$ is Lipschitz continuous. Newton's method for unconstrained optimization can be derived by applying Equation 13.32 to the set of non-linear equations $ \nabla f(x)=0$.
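Concretely, taking $ r(x)=\nabla f(x)$ in Equation 13.32, the Jacobian of the gradient is the Hessian, so the Newton equation becomes the familiar optimization step:

$\displaystyle \nabla^2 f(x_k)\, \delta_k = -\nabla f(x_k)$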

Frank Vanden Berghen 2004-04-19