Nonlinear problems can be solved in a variety of ways in GetDP:

- By explicitly writing the nonlinear solver in the `Resolution` operations, using the built-in `While` function
- By using the built-in `IterativeLoop` function in `Resolution`
- By using the built-in `IterativeLoopN` function in `Resolution`

All the techniques are based on a linearization of the original nonlinear problem, leading to an iterative algorithm where a linear system is solved at each step.

The explicit specification of the iterative algorithm in `Resolution` operations is the most flexible solution, but is also the most involved. `IterativeLoop` and `IterativeLoopN` hide some of the complexity by automatically assessing the convergence: `IterativeLoop` uses an empirical algorithm for calculating the error, whereas `IterativeLoopN` allows the error calculation and the allowed tolerances to be specified in detail.

## Basic concepts

For a linear problem, the finite element method leads to the solution of a linear system of algebraic equations of the form:

`\mathbf{A} \mathbf{x} = \mathbf{b},`

where `\mathbf{A}` is a square matrix of size `N\times N` (e.g. the stiffness matrix of the problem), `\mathbf{b}` takes into account source terms and `\mathbf{x}` is the vector of unknowns to be computed.

For a general nonlinear problem on the other hand, the goal is to find `\mathbf{x}` solution to

`\mathbf{F}(\mathbf{x}) = 0,`

where `\mathbf{F}(\mathbf{x})` is a (nonlinear) function from `R^N` to `R^N`.

### Newton-Raphson method

Given an initial guess `\mathbf{x}_0`, Newton's method consists in computing the successive iterates

```
\mathbf{x}_k = \mathbf{x}_{k-1} - \mathbf{J}^{-1}(\mathbf{x}_{k-1}) \mathbf{F}(\mathbf{x}_{k-1}),
\quad k = 1, 2, ...
```

where

```
\mathbf{J}(\mathbf{x})
:= \left[ \frac{\partial\mathbf{F}(\mathbf{x})}{\partial x_1} \cdots
\frac{\partial\mathbf{F}(\mathbf{x})}{\partial x_N} \right]
= \left[ \begin{array}{ccc}
\frac{\partial\mathbf{F}(\mathbf{x})_1}{\partial x_1} & \cdots &
\frac{\partial\mathbf{F}(\mathbf{x})_1}{\partial x_N} \\
\vdots & \ddots & \vdots \\
\frac{\partial\mathbf{F}(\mathbf{x})_N}{\partial x_1} & \cdots &
\frac{\partial\mathbf{F}(\mathbf{x})_N}{\partial x_N}
\end{array} \right]
```

is the Jacobian matrix, i.e. `\mathbf{J}(\mathbf{x})_{ij} = \frac{\partial\mathbf{F}(\mathbf{x})_i}{\partial x_j}`. In practice the Jacobian matrix `\mathbf{J}(\mathbf{x}_{k-1})` is not inverted; at each iteration the following linear system is solved instead, in terms of the unknown `\mathbf{\delta x}_k := \mathbf{x}_k - \mathbf{x}_{k-1}`:

`\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{\delta x}_k = -\mathbf{F}(\mathbf{x}_{k-1}).`

Equivalently, one can solve

```
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_k
= -\mathbf{F}(\mathbf{x}_{k-1}) + \mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1} ,
```

in terms of the original unknown `\mathbf{x}_k`.
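The iteration can be sketched in a few lines of Python (a minimal scalar illustration, not GetDP code; the test function `F(x) = x^2 - 2`, the initial guess and the tolerance are arbitrary choices for demonstration):

```python
# Newton-Raphson on a scalar example: F(x) = x^2 - 2, J(x) = F'(x) = 2x.
# Each step solves J(x_{k-1}) * dx_k = -F(x_{k-1}) and updates x.
def newton(F, J, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        dx = -F(x) / J(x)          # scalar analogue of solving J dx = -F
        x = x + dx
        if abs(F(x)) < tol:        # absolute residual stopping criterion
            break
    return x

x = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.0)
```

In the scalar case the linear solve degenerates to a division; for a system, `dx` would be obtained from a linear solver applied to `J` and `-F`.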

The convergence of the Newton-Raphson method is by no means guaranteed: it depends on the regularity of `\mathbf{F}` and on the choice of the initial guess `\mathbf{x}_0`. In case of strong nonlinearity and/or unfavorable initialization conditions, divergence is not unlikely. Sufficiently close to the solution, however, the convergence is quadratic. In order to ensure or accelerate convergence, relaxation techniques may be applied: a relaxation factor `\gamma_k` can be introduced at each iteration, leading to a modified new (relaxed) iterate `\tilde{\mathbf{x}}_k := \mathbf{x}_{k-1} + \gamma_k \mathbf{\delta x}_k`. The relaxation factor is usually chosen in `]0,1]`.
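The relaxed update can be sketched as follows (again a scalar toy problem; `\gamma = 0.5` and `F(x) = x^2 - 2` are arbitrary illustrative choices; with relaxation, quadratic convergence is traded for robustness):

```python
# Relaxed Newton on F(x) = x^2 - 2: the increment dx is scaled by a
# relaxation factor gamma in ]0, 1] before being applied.
def newton_relaxed(F, J, x0, gamma=0.5, tol=1e-13, max_iter=200):
    x = x0
    for _ in range(max_iter):
        dx = -F(x) / J(x)
        x = x + gamma * dx       # relaxed iterate: x_k = x_{k-1} + gamma*dx_k
        if abs(F(x)) < tol:
            break
    return x

x = newton_relaxed(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.0)
```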

When the nonlinear function `\mathbf{F}(\mathbf{x})` has the particular form `\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b}(\mathbf{x})` (i.e. involves a square `N\times N` matrix `\mathbf{A}` whose entries depend on `\mathbf{x}`), the Newton-Raphson iteration becomes

```
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{\delta x}_k
= \mathbf{b}(\mathbf{x}_{k-1}) - \mathbf{A}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1}
```

with

```
\mathbf{J}(\mathbf{x})
= \frac{\partial(\mathbf{A}(\mathbf{x})\mathbf{x})}{\partial\mathbf{x}}
- \frac{\partial\mathbf{b}(\mathbf{x})}{\partial\mathbf{x}}
= \frac{\partial\mathbf{A}(\mathbf{x})}{\partial\mathbf{x}} \mathbf{x}
+ \mathbf{A}(\mathbf{x})
- \frac{\partial\mathbf{b}(\mathbf{x})}{\partial\mathbf{x}}.
```

Equivalently, one can solve

```
\mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_k
= \mathbf{b}(\mathbf{x}_{k-1}) - \mathbf{A}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1} + \mathbf{J}(\mathbf{x}_{k-1}) \mathbf{x}_{k-1} .
```
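For illustration, a scalar instance of this form (with the arbitrary choices `A(x) = 1 + x^2` and constant `b = 1`, so that `J(x) = A'(x)x + A(x) = 3x^2 + 1`) can be sketched as:

```python
# Newton-Raphson for F(x) = A(x)*x - b, scalar case with A(x) = 1 + x^2, b = 1.
# The Jacobian is J(x) = A'(x)*x + A(x) = 2x*x + (1 + x^2) = 3x^2 + 1
# (the term db/dx vanishes since b is constant here).
def A(x):
    return 1.0 + x*x

b = 1.0
x = 0.0                           # initial guess x_0
for _ in range(50):
    J = 3.0*x*x + 1.0
    dx = (b - A(x)*x) / J         # scalar analogue of J dx = b - A x
    x += dx
    if abs(A(x)*x - b) < 1e-12:   # absolute residual stopping criterion
        break
```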

### Picard method

When the nonlinear function is of the form `\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b}(\mathbf{x})`, given an initial guess `\mathbf{x}_0`, Picard's method consists in computing the successive iterates `\mathbf{x}_k` such that

```
\mathbf{A}(\mathbf{x}_{k-1}) \mathbf{x}_k = \mathbf{b}(\mathbf{x}_{k-1}),
\quad k = 1, 2, ...
```
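A scalar sketch of the Picard iteration (using the same arbitrary example `A(x) = 1 + x^2`, `b = 1`; note that Picard convergence is at best linear, in contrast with Newton's quadratic rate near the solution):

```python
# Picard iteration for A(x)*x = b, scalar case with A(x) = 1 + x^2, b = 1:
# each step solves the *linear* problem A(x_{k-1}) * x_k = b.
def A(x):
    return 1.0 + x*x

b = 1.0
x = 0.0                           # initial guess x_0
for _ in range(200):
    x_new = b / A(x)              # scalar analogue of solving A(x_{k-1}) x_k = b
    if abs(x_new - x) < 1e-12:    # stop when the increment is small enough
        x = x_new
        break
    x = x_new
```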

### Convergence

For the exact solution, `\mathbf{F}(\mathbf{x})` is zero. In practice, the iterations are stopped if after `p` iterations a sufficiently small value of the residual in some norm is obtained, e.g.

```
||\mathbf{F}(\mathbf{x}_p)|| < \varepsilon_\text{abs} \quad\text{or}\quad
\frac{||\mathbf{F}(\mathbf{x}_p)||}{||\mathbf{F}(\mathbf{x}_0)||} < \varepsilon_\text{rel} ,
```

where `\varepsilon_\text{abs}` and `\varepsilon_\text{rel}` are small numbers (e.g. `10^{-6}`).

Another stopping criterion can be defined on the `p^\text{th}` increment `\mathbf{\delta x}_p=\mathbf{x}_p-\mathbf{x}_{p-1}`. For example, it could be:

`\frac{||\mathbf{\delta x}_p||}{||\mathbf{x}_p||} < \varepsilon.`
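These stopping tests can be expressed compactly in Python (a sketch; the Euclidean norm and the default tolerance values are arbitrary illustrative choices):

```python
# Sketch of the residual- and increment-based stopping criteria above.
import math

def norm2(v):
    # Euclidean norm of a vector given as a list of floats
    return math.sqrt(sum(vi*vi for vi in v))

def converged(F_p, F_0, dx_p, x_p, eps_abs=1e-6, eps_rel=1e-6, eps_inc=1e-6):
    # absolute or relative residual criterion
    residual_ok = norm2(F_p) < eps_abs or norm2(F_p) / norm2(F_0) < eps_rel
    # relative increment criterion
    increment_ok = norm2(dx_p) / norm2(x_p) < eps_inc
    return residual_ok or increment_ok
```

In practice one would typically pick either the residual test or the increment test, rather than accepting whichever triggers first.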

## User-defined nonlinear solver

The Newton-Raphson or Picard iterations can be written directly in the `Operation` block of a `Resolution`, using a `While` loop. Convergence can be assessed with the `GetResidual` function.

Examples:

- Nonlinear magnetic law in b- and h-conforming magnetostatics
- Nonlinear magnetic law for magnetodynamics
- Nonlinear electric law for magnetodynamics

## Built-in `IterativeLoop` function

This section will explain the `GenerateJac` and `SolveJac` functions, which automate parts of the implementation of Newton-Raphson and Picard iterations, in conjunction with the `JacNL` equation term specifier and the `IterativeLoop` resolution function.

## Built-in `IterativeLoopN` function

This section will explain the `IterativeLoopN` resolution function. A typical usage of `IterativeLoopN` is shown in the following piece of code:

```
Resolution {
  { Name ElectroThermal;
    System {
      { Name SystemT; NameOfFormulation FormulationT; }
      { Name SystemV; NameOfFormulation FormulationV; }
    }
    Operation {
      InitSolution[SystemT];
      InitSolution[SystemV];
      IterativeLoopN[NbrMaxIter, NL_Relax,
        System { { SystemT, ReltolT, AbstolT, Solution LinfNorm }
                 { SystemV, ReltolV, AbstolV, Solution MeanL2Norm } }
        PostOperation { { V_Top, ReltolP, AbstolP, MeanL1Norm } } ]
      {
        Generate[SystemV]; Solve[SystemV];
        Generate[SystemT]; Solve[SystemT];
      }
      SaveSolution[SystemV];
      SaveSolution[SystemT];
    }
  }
}
```

The parameters are:

- Maximum iterations: Maximum number of iterations.
- NL_Relax: Relaxation factor (multiplies the iterative correction `\mathbf{\delta x}` for Newton's method).
- System / PostOperation: The keyword `System` indicates that the quantities of interest are all DOFs of the given system. In addition, or as an alternative, you can specify post-operation results as quantities of interest with the keyword `PostOperation`.
- System / post-operation name: The name of the system / post-operation of interest.
- Relative tolerance: Relative tolerance for the quantity of interest.
- Absolute tolerance: Absolute tolerance for the quantity of interest.
- Assessed object: Possible choices in case of a specified system: `Solution`, `Residual`, `RecalcResidual`. For specified post-operations this has to be omitted. `Residual` and `RecalcResidual` can only be used with Newton's method. `Residual` assesses the residual from the last iteration, whereas `RecalcResidual` recalculates the residual after each iteration. This means that with `Residual` usually one extra iteration is performed, while `RecalcResidual` causes higher computational effort per iteration.
- Error norm: Type of the error norm. Possible choices are `L1Norm`, `MeanL1Norm`, `L2Norm`, `MeanL2Norm`, `LinfNorm` (see definitions below).

### Comparison of system vs. post-operation

- When a system is specified, all DOFs of this system are assessed, but they do not necessarily correspond directly to the values of the quantities of the formulation (e.g. in case of higher-order basis functions).
- If a formulation contains several quantities, they can be assessed individually via a specified post-operation.
- With post-operations it is also possible to assess different regions individually.
- Derived quantities can also be assessed with a specified post-operation.
- Performing a post-operation at each iteration can be time consuming for large problems.

### Error Norm

For each quantity of interest `x_i` an error ratio `q_i` is calculated:

`q_i = \frac{|\Delta x_i|}{\varepsilon_\text{abs} + \varepsilon_\text{rel}|x_i|}`

As the error ratio is calculated for each quantity of interest (e.g. one `q_i` per DOF of a given system), a norm is needed to assess the overall error ratio. Several norm types are available for `IterativeLoopN`:

- L1 Norm:
`\|q\|_{L1} = \sum_{i=1}^n |q_i|`

- Mean L1 Norm:
`\|q\|_{Mean L1} = \frac 1n \sum_{i=1}^n |q_i|`

- L2 Norm:
`\|q\|_{L2} = \sqrt{\sum_{i=1}^n \left( q_i \right)^2}`

- Mean L2 Norm:
`\|q\|_{Mean L2} = \sqrt{\frac 1n \sum_{i=1}^n \left( q_i \right)^2}`

- Linfinity Norm:
`\|q\|_\infty = \max \left( |q_i| \right)`
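The error ratio and the norms above can be written out in plain Python (the dictionary keys mirror the GetDP keywords; the sample values are arbitrary):

```python
# Error ratios q_i and the norms listed above, in plain Python.
import math

def error_ratios(dx, x, eps_abs, eps_rel):
    # q_i = |dx_i| / (eps_abs + eps_rel * |x_i|)
    return [abs(d) / (eps_abs + eps_rel*abs(xi)) for d, xi in zip(dx, x)]

def norms(q):
    n = len(q)
    return {
        "L1Norm":     sum(abs(qi) for qi in q),
        "MeanL1Norm": sum(abs(qi) for qi in q) / n,
        "L2Norm":     math.sqrt(sum(qi*qi for qi in q)),
        "MeanL2Norm": math.sqrt(sum(qi*qi for qi in q) / n),
        "LinfNorm":   max(abs(qi) for qi in q),
    }

# arbitrary sample: increments [3, 4] around x = [0, 0], eps_abs = 1, eps_rel = 0.5
q = error_ratios([3.0, 4.0], [0.0, 0.0], eps_abs=1.0, eps_rel=0.5)
```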

### General Hints

- If you specify several systems and/or post-operations, usually only the largest `\|q\|` is printed. If the verbosity level is set higher than 5, `\|q\|` is printed for each system or post-operation.
- If you start GetDP within Gmsh, the largest `\|q\|` (ILmaxErrorRatio) can be plotted nicely.

### Complete Example

Here a complete example is given for an electro-thermal problem. We have an object consisting of two materials. On top, a uniform current source is applied. The dissipated power heats the body. The bottom of the body is kept at a fixed temperature. The electrical conductivity is temperature dependent.

- Geometry: ElectroThermal_Sim.geo
- Problem definition: ElectroThermal_Sim.pro