|
|
|
|
|
Nonlinear problems can be solved in a variety of ways in GetDP:
|
|
|
* By explicitly writing the nonlinear iterations in the `Resolution` operations
|
|
|
* By using the built-in `IterativeLoop` function in `Resolution`
|
|
|
* By using the built-in `IterativeLoopN` function in `Resolution`
|
|
|
|
|
|
The explicit specification in `Resolution` operations is the most flexible
solution, but it is also the most involved. `IterativeLoop` and `IterativeLoopN`
hide some of this complexity by exploiting the `JacNL` terms in formulations and
by automatically assessing convergence from the change in the solution between
iterations. `IterativeLoop` uses an empirical algorithm to estimate the error,
whereas `IterativeLoopN` lets you specify the error calculation and the allowed
tolerances in detail.
|
|
|
|
|
|
## Nonlinear solvers
|
|
|
|
|
|
For linear problems, the finite element method leads to the solution of linear
|
|
|
systems of algebraic equations of the form:
|
|
|
```math
|
|
|
\mathbf{A} \mathbf{x} = \mathbf{b},
|
|
|
```
|
|
|
where $`\mathbf{A}`$ is a square matrix of size $`N`$ (e.g. the stiffness matrix
of the problem), $`\mathbf{b}`$ accounts for the source terms and
$`\mathbf{x}`$ is the vector of unknowns to be computed.
|
|
|
|
|
|
For a general nonlinear problem, we want to find the solution $`\mathbf{x}`$ of
|
|
|
```math
|
|
|
\mathbf{F}(\mathbf{x}) = 0,
|
|
|
```
|
|
|
where $`\mathbf{F}`$ is a (nonlinear) function from $`\mathbb{R}^N`$ to $`\mathbb{R}^N`$.
|
|
|
|
|
|
### Newton-Raphson method
|
|
|
|
|
|
Given an initial guess $`\mathbf{x}_0`$, Newton's method consists in computing
|
|
|
the successive iterates
|
|
|
```math
|
|
|
\mathbf{x}_{k+1} = \mathbf{x}_{k} - \mathbf{J}^{-1}(\mathbf{x}_k) \mathbf{F}(\mathbf{x}_k),
\quad k = 0, 1, 2, \dots
|
|
|
```
|
|
|
where
|
|
|
```math
|
|
|
\mathbf{J}=\left[ \frac{\partial\mathbf{F}}{\partial x_1} \cdots
\frac{\partial\mathbf{F}}{\partial x_N} \right]
=\left[ \begin{array}{ccc}
\frac{\partial F_1}{\partial x_1} & \cdots &
\frac{\partial F_1}{\partial x_N} \\
\vdots & \ddots & \vdots \\
\frac{\partial F_N}{\partial x_1} & \cdots &
\frac{\partial F_N}{\partial x_N}
\end{array} \right]
|
|
|
```
|
|
|
is the Jacobian matrix, i.e. $`\mathbf{J}_{ij} = \frac{\partial F_i}
{\partial x_j}`$. In practice the Jacobian matrix $`\mathbf{J}`$ is not
inverted; at each iteration the following linear system is solved instead:
|
|
|
```math
|
|
|
\mathbf{J}(\mathbf{x}_k) ( \mathbf{x}_{k+1} - \mathbf{x}_{k} ) = -\mathbf{F}(\mathbf{x}_k),
|
|
|
```
|
|
|
in terms of the unknown $`(\mathbf{x}_{k+1} - \mathbf{x}_{k})`$. Equivalently, we
can solve
|
|
|
```math
|
|
|
\mathbf{J}(\mathbf{x}_k) \mathbf{x}_{k+1} = -\mathbf{F}(\mathbf{x}_k) +
|
|
|
\mathbf{J}(\mathbf{x}_k) \mathbf{x}_{k} .
|
|
|
```
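As a concrete numerical illustration (outside GetDP), the following Python sketch applies this Newton iteration, solving the linear system at each step instead of inverting the Jacobian. The test system and its Jacobian are made up for the demonstration, not taken from a GetDP model:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, maxit=50):
    """Newton-Raphson: solve J(x_k) dx = -F(x_k), then set x_{k+1} = x_k + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        dx = np.linalg.solve(J(x), -F(x))  # linear solve, no explicit inverse
        x = x + dx
        if np.linalg.norm(dx) < tol * max(1.0, np.linalg.norm(x)):
            return x
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical example system F(x) = 0 with two unknowns:
#   x0^2 + x1^2 - 4 = 0,   x0 - x1 = 0
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                        [1.0, -1.0]])

x = newton(F, J, x0=[1.0, 0.5])
```

Starting sufficiently close to a root, the number of correct digits roughly doubles at each iteration, which is the quadratic convergence mentioned below.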
|
|
|
|
|
|
When the nonlinear function $`\mathbf{F}(\mathbf{x})`$ has the particular form
$`\mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x})\,\mathbf{x} - \mathbf{b}`$
(a material nonlinearity, where the system matrix depends on the unknowns), the
Jacobian matrix $`\mathbf{J}`$ reads:
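Applying the product rule componentwise (a standard derivation, stated here for completeness):

```math
\mathbf{J}(\mathbf{x}) = \mathbf{A}(\mathbf{x})
+ \frac{\partial \mathbf{A}}{\partial \mathbf{x}}(\mathbf{x})\,\mathbf{x},
\quad \text{i.e.} \quad
\mathbf{J}_{ij} = \mathbf{A}_{ij}
+ \sum_{k=1}^{N} \frac{\partial \mathbf{A}_{ik}}{\partial x_j}\, x_k .
```

The first term is the matrix of the linearized problem; the second term gathers the contributions of the material nonlinearity, which GetDP assembles from the `JacNL` terms of the formulation.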
|
|
|
|
|
|
<!--
|
|
|
|
|
|
In the presence of material nonlinearities, the matrix $\mat{A}$ depends on the unknown field $\vec{x}$, and the system of equations becomes nonlinear:
|
|
|
\begin{equation}
|
|
|
\mat{A}(\vec{x}) \; \vec{x}=\vec{b}.
|
|
|
\label{Sys}
|
|
|
\end{equation}
|
|
|
Therefore, the system must necessarily be solved iteratively. Starting from an initial guess vector $\vec{x}_0$ (e.g. a zero vector), the following calculated values $\vec{x}_1$, $\vec{x}_2$, ... are hoped to converge to the correct solution. For the exact solution, the residual defined by $\vec{r}(\vec{x})=\mat{A}(\vec{x}) \; \vec{x}-\vec{b}$ is zero. If after $p$ iterations, a satisfactory convergence is obtained, the iterative process is stopped. The convergence criterion could be based on some norm of the residual $\vec{r}(\vec{x}_p)$ or on the $p^\text{th}$ increment $\vec{\delta x}_p=\vec{x}_p-\vec{x}_{p-1}$. For example, it could be :
|
|
|
\begin{equation}
|
|
|
\frac{||\vec{\delta x}_p||_\infty}{||\vec{x}_p||_\infty} < \varepsilon,
|
|
|
\end{equation}
|
|
|
with $\varepsilon$ a small dimensionless number (e.g. $10^{-6}$).
|
|
|
|
|
|
### Picard's method
|
|
|
|
|
|
Picard's iteration provides an easy way to handle the nonlinearity. In the Picard iterative process, a new approximation $\vec{x}_i$ is calculated by using a known, previous solution $\vec{x}_{i-1}$ in the nonlinear terms so that these terms become linear in the unknown $\vec{x}_i$. Therefore, the problem becomes:
|
|
|
\begin{equation}
|
|
|
\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i} = \vec{b}.
|
|
|
\label{Picard}
|
|
|
\end{equation}
|
|
|
|
|
|
The following iterative process summarizes the Picard method:
|
|
|
$\vec{x}_0=\vec{0}$; // Initialization
|
|
|
for $i=1,2,3,...$ {
|
|
|
$\vec{x}_{i} = \Big( \mat{A}(\vec{x}_{i-1}) \Big)^{-1} \vec{b}$; // Find the new $\vec{x}$
|
|
|
}
|
|
|
|
|
|
### Newton-Raphson method
|
|
|
|
|
|
Usually, the Newton-Raphson method (NR-method) is used. In that case, a new approximation $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$ is obtained through the linearization of the residual vector $\vec{r}(\vec{x}_i)$ around the previous approximated value $\vec{x}_{i-1}$:
|
|
|
\begin{equation}
|
|
|
\vec{r}(\vec{x}_i)=\vec{r}(\vec{x}_{i-1}+\vec{\delta x}_i)=0,
|
|
|
\end{equation}
|
|
|
\begin{equation}
|
|
|
\vec{r}(\vec{x}_{i-1})+(\frac{\partial \vec{r}}{\partial \vec{x}})_{i-1} \vec{\delta x}_i + o(\vec{\delta \vec{x}}_i^2)= 0.
|
|
|
\end{equation}
|
|
|
By neglecting the higher-order terms $o(\vec{\delta \vec{x}}_i^2)$, a system of linear algebraic equations in $\vec{\delta x}_i$ is deduced:
|
|
|
\begin{equation}
|
|
|
(\frac{\partial \vec{r}}{\partial \vec{x}})_{i-1} \vec{\delta x}_i = -\vec{r}(\vec{x}_{i-1}).
|
|
|
\label{NR}
|
|
|
\end{equation}
|
|
|
|
|
|
Remark: The derivative of an $m \times 1$-column vector $\vec{y}$ with respect to an $n \times 1$-column vector $\vec{x}$ is an $m \times n$-matrix whose $(j,k)^\text{th}$ element is given by:
|
|
|
\begin{equation*}
|
|
|
(\frac{\partial \vec{y}}{\partial \vec{x}})_{j,k}=\frac{\partial \vec{y}_j}{\partial \vec{x}_k}.
|
|
|
\end{equation*}
|
|
|
|
|
|
For the system written in \eqref{Sys}, with the right-hand side $\vec{b}$ assumed known, the formulation \eqref{NR} becomes:
|
|
|
\begin{equation}
|
|
|
(\frac{\partial \big( \mat{A}(\vec{x}) \; \vec{x} \big)}{\partial \vec{x}})_{i-1} \vec{\delta x}_i = \vec{b}-\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i-1}.
|
|
|
\label{SysNR}
|
|
|
\end{equation}
|
|
|
|
|
|
Each NR-iteration provides a unique increment solution $\vec{\delta x}_i$ which depends on values taken from the previous iteration step $i-1$. Nevertheless, convergence is by no means guaranteed: in case of strong nonlinearity and/or an unfavorable initial guess, divergence is not unlikely. Sufficiently close to the solution, the convergence of the NR-method is quadratic. In order to ensure or accelerate convergence, relaxation techniques may be applied:
|
|
|
\begin{equation}
|
|
|
\vec{x}_i=\vec{x}_{i-1}+\gamma_i \; \vec{\delta x}_i,
|
|
|
\end{equation}
|
|
|
with $\gamma_i$, the so-called ''relaxation factor'', which should be chosen judiciously (typically between $0$ and $1$).
|
|
|
|
|
|
The matrix which appears in the system \eqref{SysNR} is called the ''Jacobian matrix'' (or ''Tangent stiffness matrix'') of the system and is noted $\Ja$:
|
|
|
\begin{equation}
|
|
|
\Ja=(\frac{\partial \big( \mat{A}(\vec{x}) \; \vec{x} \big)}{\partial \vec{x}}).
|
|
|
\label{Jac}
|
|
|
\end{equation}
|
|
|
|
|
|
The NR-method can be summarized by the following iterative loop:
|
|
|
$\vec{x}_0=\vec{0}$; // Initialization
|
|
|
for $i=1,2,3,...$ {
|
|
|
$\vec{\delta x}_i = \Big( \Ja(\vec{x}_{i-1}) \Big)^{-1} \big( \vec{b}-\mat{A}(\vec{x}_{i-1}) \; \vec{x}_{i-1} \big)$; // Find the new $\vec{\delta x}$
|
|
|
$\vec{x}_i=\vec{x}_{i-1}+\gamma_i \; \vec{\delta x}_i$; // Update the value of $\vec{x}$
|
|
|
}
|
|
|
|
|
|
== Implementation of nonlinear problems in GetDP ==
|
|
|
|
|
|
As illustrated above, iterative loops are essential for nonlinear analysis. In GetDP, such a loop is launched with the <code>IterativeLoop</code> command, which has to be placed at the appropriate level of the recursive resolution operations:
|
|
|
<pre>
|
|
|
IterativeLoop [exp-cst,exp,exp-cst<,exp-cst>] { resolution-op }
|
|
|
</pre>
|
|
|
|
|
|
The parameters are: the maximum number of iterations $i_{max}$ (if no convergence), the relative error $\varepsilon$ to achieve and the relaxation factor $\gamma_i$ that multiplies the iterative correction. (The optional parameter is a flag for testing purposes.) The <code>resolution-op</code> field contains the operations that have to be performed at each iteration.
|
|
|
|
|
|
During a Newton-Raphson iteration, the system \eqref{SysNR} has to be generated and then solved. These steps are performed by the <code>GenerateJac</code> and <code>SolveJac</code> functions inside the <code>resolution-op</code> field of an <code>IterativeLoop</code> command. These two functions are briefly described hereafter:
|
|
|
|
|
|
<pre>
|
|
|
GenerateJac [system-id]
|
|
|
</pre>
|
|
|
|
|
|
Generate the system of equations <code>system-id</code> (see \eqref{SysNR}) using a Jacobian matrix $\Ja$ (of which the unknowns are corrections $\vec{\delta x}$ of the current solution $\vec{x}$).
|
|
|
|
|
|
<pre>
|
|
|
SolveJac [system-id]
|
|
|
</pre>
|
|
|
|
|
|
Solve the system of equations <code>system-id</code> (see \eqref{SysNR}) using the Jacobian matrix $\Ja$ (the unknowns of this system are the corrections $\vec{\delta x}$ of the current solution $\vec{x}$). Then increment the solution ($\vec{x}=\vec{x}+\vec{\delta x}$) and compute the relative error $\vec{\delta x}/\vec{x}$.
|
|
|
|
|
|
In GetDP, the system of equations that has to be generated and solved is expressed in an "Equation"-block of a `.pro' file. When calling <code>GenerateJac</code>, the Jacobian matrix $\Ja$ is built by assembling the matrix $\mat{A}$ with the additional terms incorporated in a <code>JacNL</code> function in the formulation of a nonlinear problem. In other words, by definition:
|
|
|
\begin{equation}
|
|
|
\Ja:= \mat{A} + \JacNL,
|
|
|
\label{J=A+JacNL}
|
|
|
\end{equation}
|
|
|
where $\JacNL$ stands for the elements included in a <code>JacNL</code> equation block while $\mat{A}$ gathers the classical terms that are not attached to a <code>JacNL</code> equation block. In case the problem is linear, i.e. when the $\mat{A}$-matrix does not depend on the unknowns $\vec{x}$, the $\JacNL$ part vanishes and the Jacobian matrix reduces to $\Ja = \mat{A}$. In other words, $\JacNL$ represents the nonlinear part of the Jacobian matrix.
|
|
|
|
|
|
By supposing that all the terms in the formulation are integrated on the complete domain $\Omega$ and knowing that $\vec{x}_i=\vec{x}_{i-1}+\vec{\delta x}_i$, the system \eqref{SysNR} combined with the notations \eqref{Jac}, \eqref{J=A+JacNL} can be rewritten as :
|
|
|
\begin{equation*}
|
|
|
\Big( \mat{A}(\vec{x}_{i-1}) +
|
|
|
\JacNL (\vec{x}_{i-1}) \Big)
|
|
|
.\;\vec{\delta x}_i
|
|
|
= \vec{b}-\mat{A}(\vec{x}_{i-1})\;.\;\vec{x}_{i-1},
|
|
|
\end{equation*}
|
|
|
\begin{equation}
|
|
|
\mat{A}(\vec{x}_{i-1}).\;\vec{x}_{i} + \JacNL (\vec{x}_{i-1}) \;.\; \vec{\delta x}_i = \vec{b},
|
|
|
\label{Formu}
|
|
|
\end{equation}
|
|
|
where $\vec{x}_i$ and $\vec{\delta x}_i$ are both discrete unknown quantities while $\vec{x}_{i-1}$ is already computed from the previous iteration.
|
|
|
|
|
|
Thanks to this last rewriting of the system, it is possible to give an interpretation on the way the degrees of freedom have to be understood in an "Equation"-block of a `.pro' file. In GetDP, a <code>Dof</code> symbol in front of a discrete quantity indicates that this quantity is an unknown quantity, and should therefore not be considered as already computed. When calling <code>GenerateJac</code>, the generated unknowns are the increments $\vec{\delta x}_i$ of the <code>Dof</code> elements. However, as the reformulation of the system \eqref{Formu} highlights, the <code>Dof</code> terms that are not in a <code>JacNL</code> function ("$\mat{A}(\vec{x}_{i-1}).\; \vec{\delta x}_i$") can be grouped with the right hand side previous values ("$\mat{A}(\vec{x}_{i-1}).\; \vec{x}_{i-1}$") built by <code>GenerateJac</code>, so that the unknowns linked to these terms can be seen as classical <code>Dof</code> quantities :
|
|
|
\begin{equation*}
|
|
|
\verb|Dof{x}| \rightarrow \vec{x}_i.
|
|
|
\end{equation*}
|
|
|
On the contrary, for the elements inside a <code>JacNL</code> equation block, when using <code>GenerateJac</code>, no additional suitable previous terms are generated on the right hand side, so that a <code>Dof</code> quantity will be understood by GetDP as being the unknown increment of this quantity at the current iteration. As it is visible on the last rewriting of the system \eqref{Formu}, the $\JacNL$ operator is acting specifically on the increment $\vec{\delta x}_i$. According to this interpretation, it comes:
|
|
|
\begin{equation*}
|
|
|
\verb|JacNL[Dof{x}...]| \rightarrow \vec{\delta x}_i.
|
|
|
\end{equation*}
|
|
|
Finally, if a quantity is not distinguished by a <code>Dof</code> symbol, GetDP will use the last computed value (results of the last iteration):
|
|
|
\begin{equation*}
|
|
|
\verb|{x}| \rightarrow \vec{x}_{i-1}.
|
|
|
\end{equation*}
|
|
|
|
|
|
Thanks to these interpretations, any system in the form of system \eqref{Formu} can be naturally expressed in an "Equation"-block of a `.pro' file and be resolved by the NR-method.
|
|
|
|
|
|
At this point, only the implementation of the NR-method in GetDP has been discussed. Actually, the only difference between the NR-method and Picard's method is the presence or absence of the $\JacNL$ operator: if the $\JacNL$ term is removed from the Newton-Raphson formulation \eqref{Formu}, the Picard system \eqref{Picard} is retrieved. So from the implementation point of view, simply removing the \verb|JacNL| content in the "Equation"-block of a `.pro' file transforms a Newton-Raphson iteration into a Picard iteration.
|
|
|
|
|
|
The next section presents in concrete terms how to write nonlinear partial differential equations in GetDP using a practical example.
|
|
|
|
|
|
== An example ==
|
|
|
|
|
|
Let us consider the following PDE in strong form, describing a magnetodynamic problem in terms of the magnetic field $\vec{h}$, where for clarity we omitted the boundary conditions and the source terms:
|
|
|
\begin{equation} \curl \left(\frac{1}{\sigma} \curl \vec{h} \right) + \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right) = 0. \end{equation}
|
|
|
|
|
|
Using a Galerkin approach, the weak form of this PDE reads: find $\vec{h}$ in a suitable function space such that
|
|
|
\begin{equation} \left( \frac{1}{\sigma} \curl \vec{h} \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right)\;.\; \vec{h'} \right)_\Omega = 0
|
|
|
\end{equation}
|
|
|
holds for all test functions $\vec{h'}$ in the same space.
|
|
|
|
|
|
Let us analyse the nonlinear implementation when a backward Euler scheme is used for the time discretization:
|
|
|
\begin{equation} \frac{\partial}{\partial t} \left( \mu(\vec{h})\;.\;\vec{h} \right) \quad \rightarrow \quad \frac{\mu\left(\vec{h}^n\right)\;.\;\vec{h}^n - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t} ,
|
|
|
\end{equation}
|
|
|
where the upper index $n$ denotes the current time step (and $n-1$ the previous time step). In what follows the lower index $i$ will denote the current iteration number (and the index $i-1$ the previous, known one).
|
|
|
|
|
|
{| class="wikitable"
|
|
|
|-
|
|
|
|
|
|
|
|In GetDP
|
|
|
|-
|
|
|
|$\vec{h}^n_i \rightarrow$ This we are looking for
|
|
|
|$\verb|Dof{H}|$
|
|
|
|-
|
|
|
|$\vec{h}^n_{i-1} \rightarrow$ Actual time point, but value from last iteration
|
|
|
|$\verb|{H}|$
|
|
|
|-
|
|
|
|$\vec{h}^{n-1} \rightarrow$ Value from last time step
|
|
|
|$\verb|{H}[1]|$
|
|
|
|-
|
|
|
|$\Delta t \rightarrow$ Time step size
|
|
|
|$\verb|DTime|$
|
|
|
|-
|
|
|
|}
|
|
|
|
|
|
=== Picard's Method: $\mu$-version ===
|
|
|
|
|
|
For the nonlinear part ($\mu$ in our case), use $\vec{h}$ from the last iteration:
|
|
|
\begin{equation} \boxed{ \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_i - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0. }\end{equation}
|
|
|
|
|
|
Formulation in GetDP:
|
|
|
|
|
|
<pre>
|
|
|
Formulation{
|
|
|
{ Name Eddy_Formulation_H_Pic; Type FemEquation ;
|
|
|
Quantity {
|
|
|
{ Name H; Type Local; NameOfSpace Hcurl_hphi; }
|
|
|
}
|
|
|
Equation {
|
|
|
Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { [ 1.0/\$DTime * mu_NL[{H}] * Dof{H} , {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { [-1.0/\$DTime * mu_NL[{H}[1]] * {H}[1], {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
|
|
|
Resolution{
|
|
|
{ Name Solution_Nonlinear_H_Pic;
|
|
|
System {
|
|
|
{ Name A; NameOfFormulation Eddy_Formulation_H_Pic;}
|
|
|
}
|
|
|
Operation {
|
|
|
InitSolution[A];
|
|
|
TimeLoopTheta[T_Time0,T_TimeMax,T_DTime[], T_Theta[]]{
|
|
|
IterativeLoop[maxit, eps, relax] {
|
|
|
GenerateJac[A] ; SolveJac[A] ;
|
|
|
}
|
|
|
SaveSolution[A];
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
</pre>
|
|
|
|
|
|
=== Newton-Raphson Method: $\mu$-version ===
|
|
|
|
|
|
We want to set the following function $f$ equal to zero:
|
|
|
\begin{equation}
|
|
|
f\left(\vec{h}^n\right)=\left( \frac{1}{\sigma} \curl \vec{h}^n \;.\; \curl \vec{h'} \right)_\Omega + \left( \frac{\mu\left(\vec{h}^n\right)\;.\;\vec{h}^n - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0.
|
|
|
\end{equation}
|
|
|
|
|
|
Algorithm:
|
|
|
\begin{equation}
|
|
|
\vec{h}^n_i=\vec{h}^n_{i-1}-\frac{f(\vec{h}^n_{i-1})}{\frac{\partial}{\partial \; \vec{h}^n_{i-1}} f(\vec{h}^n_{i-1})}.
|
|
|
\label{Algo}
|
|
|
\end{equation}
|
|
|
|
|
|
Defining:
|
|
|
\begin{equation}
|
|
|
\delta\vec{h}=\vec{h}^n_i-\vec{h}^n_{i-1} = \text{change of } \vec{h} \text{ per iteration},
|
|
|
\label{def}
|
|
|
\end{equation}
|
|
|
equation \eqref{Algo} can be rewritten as
|
|
|
\begin{equation}
|
|
|
\boxed{ \frac{\partial}{\partial \; \vec{h}^n_{i-1}} f(\vec{h}^n_{i-1}) \;.\; \delta\vec{h} + f(\vec{h}^n_{i-1}) = 0. }
|
|
|
\label{Algo2}
|
|
|
\end{equation}
|
|
|
Note: as explained above, in GetDP <code>JacNL</code> implies that <code>Dof{H}</code> here means the change of $\vec{h}$ in the iteration:
|
|
|
\begin{equation*}
|
|
|
\delta\vec{h} \; \rightarrow \; \verb|JacNL[Dof{H}...]|. \;
|
|
|
\end{equation*}
|
|
|
|
|
|
Equation \eqref{Algo2} has to be implemented in the "Equation"-block
|
|
|
\begin{equation}
|
|
|
\begin{split}
|
|
|
f\left(\vec{h}^n_{i-1}\right) = &
|
|
|
\; \left( \frac{1}{\sigma} \curl \vec{h}^n_{i-1} \;.\; \curl \vec{h'} \right)_\Omega \\
|
|
|
&+ ( \frac{\mu\left(\vec{h}^n_{i-1}\right)\;.\;\vec{h}^n_{i-1} - \mu\left(\vec{h}^{n-1}\right)\;.\;\vec{h}^{n-1} }{\Delta t}\;.\; \vec{h'} )_\Omega = 0,
|
|
|
\end{split}
|
|
|
\label{f}
|
|
|
\end{equation}
|
|
|
|
|
|
\begin{equation}
|
|
|
\begin{split}
|
|
|
\frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} = &
|
|
|
\; \left( \frac{1}{\sigma} \curl \delta\vec{h} \;.\; \curl \vec{h'} \right)_\Omega \\
|
|
|
&+ ( \frac{1}{\Delta t} \Bigg( \mu ( \vec{h}^n_{i-1}) + \frac{\partial \mu}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\vec{h}^n_{i-1}\Bigg)\;.\;\delta\vec{h}\;.\;\vec{h'} )_\Omega = 0.
|
|
|
\end{split}
|
|
|
\label{df}
|
|
|
\end{equation}
|
|
|
|
|
|
Inserting \eqref{f} and \eqref{df} in \eqref{Algo2} and doing a small simplification with \eqref{def} yields:
|
|
|
\begin{equation}
|
|
|
\boxed{
|
|
|
\begin{split}
|
|
|
\frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} + f&\left(\vec{h}^n_{i-1}\right) =
|
|
|
\; \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega \\
|
|
|
&+ ( \frac{1}{\Delta t} \Bigg( \mu ( \vec{h}^n_{i-1})\;.\;\vec{h}^n_i - \mu(\vec{h}^{n-1})\;.\;\vec{h}^{n-1}\Bigg)\;.\;\vec{h'} )_\Omega \\
|
|
|
&+ ( \frac{1}{\Delta t} \;.\; \frac{\partial \mu}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\vec{h}^n_{i-1} \;.\; \delta\vec{h} \;.\; \vec{h'})_\Omega = 0.
|
|
|
\end{split}
|
|
|
}
|
|
|
\end{equation}
|
|
|
|
|
|
Formulation in GetDP:
|
|
|
|
|
|
<pre>
|
|
|
Formulation{
|
|
|
{ Name Eddy_Formulation_H_NR; Type FemEquation ;
|
|
|
Quantity {
|
|
|
{ Name H; Type Local; NameOfSpace Hcurl_hphi; }
|
|
|
}
|
|
|
Equation {
|
|
|
Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { [ 1.0/\$DTime * mu_NL[{H}] * Dof{H} , {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { [-1.0/\$DTime * mu_NL[{H}[1]] * {H}[1], {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { JacNL [1.0/\$DTime * dmudH[{H}] * Norm[{H}] * Dof{H}, {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
|
|
|
Resolution{
|
|
|
{ Name Solution_Nonlinear_H_NR;
|
|
|
System {
|
|
|
{ Name A; NameOfFormulation Eddy_Formulation_H_NR;}
|
|
|
}
|
|
|
Operation {
|
|
|
InitSolution[A];
|
|
|
TimeLoopTheta[T_Time0,T_TimeMax,T_DTime[], T_Theta[]]{
|
|
|
IterativeLoop[maxit, eps, relax] {
|
|
|
GenerateJac[A] ; SolveJac[A] ;
|
|
|
}
|
|
|
SaveSolution[A];
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
</pre>
|
|
|
|
|
|
=== Newton-Raphson Method: $b$-version ===
|
|
|
|
|
|
Following the same procedure with
|
|
|
\begin{equation*}
|
|
|
f\left(\vec{h}^n\right)=
|
|
|
\left( \frac{1}{\sigma} \curl \vec{h}^n \;.\; \curl \vec{h'} \right)_\Omega
|
|
|
+ \left( \frac{\B(\vec{h}^n)-\B(\vec{h}^{n-1} ) }
|
|
|
{\Delta t}\;.\; \vec{h'} \right)_\Omega = 0,
|
|
|
\end{equation*}
|
|
|
|
|
|
gives
|
|
|
\begin{equation*}
|
|
|
f\left(\vec{h}^n_{i-1}\right) =
|
|
|
\; \left( \frac{1}{\sigma} \curl \vec{h}^n_{i-1} \;.\; \curl \vec{h'} \right)_\Omega
|
|
|
+ ( \frac{\B(\vec{h}^n_{i-1})-\B(\vec{h}^{n-1}) }{\Delta t}\;.\; \vec{h'} )_\Omega = 0,
|
|
|
\end{equation*}
|
|
|
|
|
|
\begin{equation*}
|
|
|
\frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} =
|
|
|
\; \left( \frac{1}{\sigma} \curl \delta\vec{h} \;.\; \curl \vec{h'} \right)_\Omega
|
|
|
+ ( \frac{1}{\Delta t} \frac{\partial \B}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\;\delta\vec{h}\;.\;\vec{h'} )_\Omega = 0.
|
|
|
\end{equation*}
|
|
|
|
|
|
\begin{equation*}
|
|
|
\boxed{
|
|
|
\begin{split}
|
|
|
\frac{\partial}{\partial \vec{h}^n_{i-1} } f\left(\vec{h}^n_{i-1}\right)\;.\;\delta\vec{h} + f\left(\vec{h}^n_{i-1}\right) &=
|
|
|
\; \left( \frac{1}{\sigma} \curl \vec{h}^n_i \;.\; \curl \vec{h'} \right)_\Omega \\
|
|
|
&+ ( \frac{1}{\Delta t} \Bigg( \B(\vec{h}^n_{i-1} ) - \B(\vec{h}^{n-1}) \Bigg)\;.\;\vec{h'} )_\Omega \\
|
|
|
&+ ( \frac{1}{\Delta t} \;.\; \frac{\partial \B}{\partial \vec{h}}\Bigg|_{\vec{h}^n_{i-1}}\;.\; \delta\vec{h} \;.\; \vec{h'})_\Omega = 0.
|
|
|
\end{split}
|
|
|
}
|
|
|
\end{equation*}
|
|
|
|
|
|
Formulation in GetDP:
|
|
|
|
|
|
<pre>
|
|
|
Formulation{
|
|
|
{ Name Eddy_Formulation_B_NR; Type FemEquation ;
|
|
|
Quantity {
|
|
|
{ Name H; Type Local; NameOfSpace Hcurl_hphi; }
|
|
|
}
|
|
|
Equation {
|
|
|
Galerkin { [ 1/sig[] * Dof{Curl H} , {Curl H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { [ 1.0/\$DTime * B[{H}] , {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { [-1.0/\$DTime * B[{H}[1]], {H}[1]] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
Galerkin { JacNL [1.0/\$DTime * dBdH[{H}] * Dof{H}, {H}] ;
|
|
|
In Domain ; Jacobian J ; Integration I ; }
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
|
|
|
Resolution{
|
|
|
{ Name Solution_Nonlinear_B_NR;
|
|
|
System {
|
|
|
{ Name A; NameOfFormulation Eddy_Formulation_B_NR;}
|
|
|
}
|
|
|
Operation {
|
|
|
InitSolution[A];
|
|
|
TimeLoopTheta[T_Time0,T_TimeMax,T_DTime[], T_Theta[]]{
|
|
|
IterativeLoop[maxit, eps, relax] {
|
|
|
GenerateJac[A] ; SolveJac[A] ;
|
|
|
}
|
|
|
SaveSolution[A];
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
}
|
|
|
</pre>
|
|
|
|
|
|
-->
|
|
|
|
|
|
|
|
|
|
|
|
## Explicit writing of nonlinear iterations
|
|
|
|
|
|
See the
|
|
|
[`helix.pro`](https://gitlab.onelab.info/doc/models/blob/master/Superconductors/helix.pro)
|
|
|
high-temperature superconducting cable model for an example.
|
|
|
|
|
|
To be written.
|
|
|
|
|
|
## `IterativeLoop`
|
... | ... | |