where $`\mathbf{F}(\mathbf{x})`$ is a (nonlinear) function from $`R^N`$ to $`R^N`$.
|
|
Given an initial guess $`\mathbf{x}_0`$, Newton's method consists of computing
the successive iterates

```math
\mathbf{x}_k = \mathbf{x}_{k-1} - \mathbf{J}^{-1}(\mathbf{x}_{k-1}) \mathbf{F}(\mathbf{x}_{k-1}),
\quad k = 1, 2, ...
```
|
|
|
where

```math
\mathbf{J}(\mathbf{x}) = \left[ \begin{array}{ccc}
\frac{\partial \mathbf{F}(\mathbf{x})_1}{\partial \mathbf{x}_1} & \cdots & \frac{\partial \mathbf{F}(\mathbf{x})_1}{\partial \mathbf{x}_N} \\
\vdots & \ddots & \vdots \\
\frac{\partial \mathbf{F}(\mathbf{x})_N}{\partial \mathbf{x}_1} & \cdots & \frac{\partial \mathbf{F}(\mathbf{x})_N}{\partial \mathbf{x}_N}
\end{array} \right]
```
|
|
|
is the Jacobian matrix, i.e. $`\mathbf{J}(\mathbf{x})_{ij} = \frac{\partial\mathbf{F}(\mathbf{x})_i}
{\partial\mathbf{x}_j}`$. In practice the Jacobian matrix $`\mathbf{J}(\mathbf{x}_{k-1})`$ is not
inverted; instead, at each iteration the following linear system is solved in
terms of the unknown $`\mathbf{\delta x}_k := (\mathbf{x}_k - \mathbf{x}_{k-1})`$:

```math
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{\delta x}_k = -\mathbf{F}(\mathbf{x}_{k-1}).
```
|
|
|
Equivalently, one can solve

```math
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_k
= -\mathbf{F}(\mathbf{x}_{k-1}) + \mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1} ,
```

in terms of the original unknown $`\mathbf{x}_k`$. A relaxation factor
$`\gamma_k`$ can be introduced at each iteration, leading to a modified
(relaxed) iterate $`\tilde{\mathbf{x}}_k := \mathbf{x}_{k-1} + \gamma_k
\mathbf{\delta x}_k`$.
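As a concrete illustration, the relaxed Newton iteration above can be sketched in a few lines of NumPy. This is a minimal sketch, not part of the original text: the helper `newton` and the test problem (`F`, `J`) are hypothetical, and a fixed relaxation factor `gamma` stands in for the per-iteration $`\gamma_k`$.

```python
import numpy as np

def newton(F, J, x0, gamma=1.0, tol=1e-10, max_iter=50):
    """Relaxed Newton iteration: solve J(x_{k-1}) dx_k = -F(x_{k-1}),
    then update x_k = x_{k-1} + gamma * dx_k (gamma = 1 is plain Newton)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Linear solve instead of forming J^{-1} explicitly.
        dx = np.linalg.solve(J(x), -F(x))
        x = x + gamma * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical test problem: F(x) = [x0^2 + x1^2 - 4, x0 - x1],
# which has a root at (sqrt(2), sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                        [1.0, -1.0]])
root = newton(F, J, x0=[1.0, 2.0])
```

A smaller `gamma` slows convergence but can keep the iteration stable when the initial guess is far from the root.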
|
|
|
|
|
|
When the nonlinear function $`\mathbf{F}(\mathbf{x})`$ has the particular form
|
|
|
$`\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b}`$ (i.e.
|
|
|
involves a square $`N\times N`$ matrix $`\mathbf{A}`$ whose entries depend on $`\mathbf{x}`$),
|
|
|
the Newton-Raphson iteration becomes
|
|
|
```math
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{\delta x}_k
= \mathbf{b} - \mathbf{A}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1}
```
|
|
|
with $`\mathbf{J}(\mathbf{x})_{ij} = \frac{\partial(\mathbf{A}(\mathbf{x})\mathbf{x})_i}{\partial\mathbf{x}_j}`$. Equivalently, one can solve
|
|
|
```math
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_k
= \mathbf{b} - \mathbf{A}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1} + \mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1} .
```
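To make the $`\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b}`$ case concrete, here is a sketch for an assumed toy model $`\mathbf{A}(\mathbf{x}) = \mathbf{I} + \mathrm{diag}(\mathbf{x})^2`$ (not from the text): then $`(\mathbf{A}(\mathbf{x})\mathbf{x})_i = (1 + x_i^2)\,x_i`$ and the Jacobian of $`\mathbf{A}(\mathbf{x})\mathbf{x}`$ is diagonal with entries $`1 + 3x_i^2`$.

```python
import numpy as np

# Assumed toy model: A(x) = I + diag(x)^2, so A(x) x is componentwise
# (1 + x_i^2) x_i and d/dx_i [(1 + x_i^2) x_i] = 1 + 3 x_i^2.
def A(x):
    return np.diag(1.0 + x**2)

def J(x):
    return np.diag(1.0 + 3.0 * x**2)

b = np.array([2.0, 10.0])
x = np.zeros_like(b)
for _ in range(50):
    # Newton step: J(x_{k-1}) dx_k = b - A(x_{k-1}) x_{k-1}
    dx = np.linalg.solve(J(x), b - A(x) @ x)
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break
# At convergence x satisfies (1 + x_i^2) x_i = b_i componentwise,
# e.g. x = 2 for b = 10 since (1 + 4) * 2 = 10.
```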
|
|
|
|
|
|
### Picard method
|
|
|
|
|
|
The Picard method is a simple fixed-point iteration applied to the equation $`\mathbf{F}(\mathbf{x})=0`$ when the nonlinear function is of the form $`\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b}`$. Given an initial guess $`\mathbf{x}_0`$, Picard's method consists of computing the successive iterates $`\mathbf{x}_k`$ such that

```math
\mathbf{A}(\mathbf{x}_{k-1}) \mathbf{x}_k = \mathbf{b},
\quad k = 1, 2, ...
```
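The Picard iteration can be sketched as follows, for an assumed toy model $`\mathbf{A}(\mathbf{x}) = 2\mathbf{I} + \mathrm{diag}(\sin\mathbf{x})`$ (hypothetical, chosen so the fixed-point map is contractive): the matrix is frozen at the previous iterate and a plain linear system is solved, with no Jacobian required.

```python
import numpy as np

# Assumed toy model: A(x) = 2 I + diag(sin(x)), i.e. componentwise
# the fixed point satisfies (2 + sin(x_i)) x_i = b_i.
def A(x):
    return np.diag(2.0 + np.sin(x))

b = np.array([1.0, 0.5])
x = np.zeros_like(b)
for _ in range(200):
    # Picard step: freeze A at x_{k-1}, solve A(x_{k-1}) x_k = b.
    x_new = np.linalg.solve(A(x), b)
    if np.linalg.norm(x_new - x) < 1e-12:
        x = x_new
        break
    x = x_new
```

Each Picard step is cheaper than a Newton step (no Jacobian assembly), but convergence is only linear and requires the underlying fixed-point map to be contractive.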
|
|
|
|
|
|
### Convergence assessment
|