Nonlinear problems can be solved in a variety of ways in GetDP:

* By explicitly writing the nonlinear iterative loop in the `Resolution` operations with the built-in `While` function
* By using the built-in `IterativeLoop` function in `Resolution`
* By using the built-in `IterativeLoopN` function in `Resolution`

The explicit specification of the iterative loop in `Resolution` operations is the most flexible solution, but is also the most involved. `IterativeLoop` and `IterativeLoopN` hide some of the complexity by exploiting the `JacNL` terms in formulations and by automatically assessing the convergence: `IterativeLoop` uses an empirical algorithm for calculating the error, whereas `IterativeLoopN` allows one to specify in detail the error calculation and the allowed tolerances.
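As an illustration, a `Resolution` using `IterativeLoop` could look as follows. This is a minimal sketch, not taken from this document: the system name `A`, the formulation name `MagSta_a` and the numeric parameters are placeholder assumptions.

```
Resolution {
  { Name NonLinear;
    System {
      { Name A; NameOfFormulation MagSta_a; } // hypothetical formulation name
    }
    Operation {
      InitSolution[A]; // provides the initial guess x_0
      // at most 50 iterations, stop when the error estimate drops below 1e-6,
      // relaxation factor 1 (no relaxation):
      IterativeLoop[50, 1e-6, 1]{
        GenerateJac[A]; SolveJac[A]; // assemble and solve the linearized system
      }
      SaveSolution[A];
    }
  }
}
```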

## Nonlinear solvers

For the nonlinear system $`\mathbf{A}(\mathbf{x})\,\mathbf{x} = \mathbf{b}`$, the Newton-Raphson iteration becomes

```math
\mathbf{J}(\mathbf{x}_k) \, (\mathbf{x}_{k+1} - \mathbf{x}_k) = \mathbf{b} - \mathbf{A}(\mathbf{x}_k) \, \mathbf{x}_k,
\quad k = 0, 1, 2, ...
```
with $`\mathbf{J}(\mathbf{x})_{ij} = \frac{\partial(\mathbf{A}(\mathbf{x})\mathbf{x})_i}{\partial\mathbf{x}_j}`$.

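In a `Formulation`, the additional Jacobian contributions needed by this scheme are provided by `JacNL` terms, which are only assembled by `GenerateJac`/`SolveJac`. A hypothetical sketch for a nonlinear magnetostatic vector potential formulation, where the material function `dhdb_NL` and the names `DomainNL`, `Vol` and `I1` are placeholder assumptions:

```
// standard nonlinear term, evaluated at the current iterate:
Galerkin { [ nu[{d a}] * Dof{d a} , {d a} ];
  In DomainNL; Jacobian Vol; Integration I1; }
// extra Jacobian term (derivative of h with respect to b), assembled
// only when GenerateJac/SolveJac are used:
Galerkin { JacNL [ dhdb_NL[{d a}] * Dof{d a} , {d a} ];
  In DomainNL; Jacobian Vol; Integration I1; }
```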
### Picard method

The Picard method is a simple fixed-point method applied to the nonlinear function $`\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b}`$. Given an initial guess $`\mathbf{x}_0`$, Picard's method consists of computing the successive iterates $`\mathbf{x}_{k+1}`$ such that

```math
\mathbf{A}(\mathbf{x}_{k}) \, \mathbf{x}_{k+1} = \mathbf{b},
\quad k = 0, 1, 2, ...
```
<!--
In the presence of material nonlinearities, the matrix $\mat{A}$ depends on the unknown field $\vec{x}$, and the system of equations becomes nonlinear:
\begin{equation}
\mat{A}(\vec{x}) \; \vec{x}=\vec{b}.
\label{Sys}
\end{equation}
Therefore, the system must necessarily be solved iteratively. Starting from an initial guess vector $\vec{x}_0$ (e.g. a zero vector), the successively calculated values $\vec{x}_1$, $\vec{x}_2$, ... are hoped to converge to the correct solution. For the exact solution, the residual defined by $\vec{r}(\vec{x})=\mat{A}(\vec{x}) \; \vec{x}-\vec{b}$ is zero. If after $p$ iterations a satisfactory convergence is obtained, the iterative process is stopped. The convergence criterion could be based on some norm of the residual $\vec{r}(\vec{x}_p)$ or on the $p^\text{th}$ increment $\vec{\delta x}_p=\vec{x}_p-\vec{x}_{p-1}$. For example, it could be:
\begin{equation}
\frac{||\vec{\delta x}_p||_\infty}{||\vec{x}_p||_\infty} < \varepsilon,
\end{equation}

with $\varepsilon$ a small dimensionless number (e.g. $10^{-6}$).

### Picard's method

Picard's iteration provides an easy way to handle the nonlinearity. In the Picard iterative process, a new approximation $\vec{x}_i$ is calculated by using a known, previous solution $\vec{x}_{i-1}$ in the nonlinear terms so that these terms become linear in the unknown $\vec{x}_i$. Therefore, the problem becomes:
\begin{equation}
\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i} = \vec{b}.
\label{Picard}
\end{equation}

The following iterative process summarizes the Picard method:
$\vec{x}_0=\vec{0}$; // Initialization
for $i=1,2,3,...$ {
  $\vec{x}_{i} = \Big( \mat{A}(\vec{x}_{i-1}) \Big)^{-1} \vec{b}$; // Find the new $\vec{x}$
}

### Newton-Raphson method

Usually, the Newton-Raphson method (NR-method) is used. In that case, a new approximation $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$ is obtained through the linearization of the residual vector $\vec{r}(\vec{x}_i)$ around the previous approximated value $\vec{x}_{i-1}$: