# GMSH DG solver Lua API
The solver employs the Lua programming language for its **configuration**. Lua was chosen first of all for simplicity: it is extremely lightweight and carries almost no dependencies, which keeps the solver small and compact. Secondly, Lua was chosen to *deliberately* limit what the user can do in the configuration files: if configurations become full-fledged programs, something is missing in the solver core or the solver is being used in the wrong way. Third, Lua is *fast*: evaluating user-defined Lua functions is not much slower than native C.
This file documents the Lua API available in the solver. The API has a *general* part and a *problem-specific* part.
The *general* part deals with configuration not related to a specific problem (e.g. timestep, geometry file), whereas the *problem-specific* part configures all the parameters that make sense only for a given problem (e.g. materials and sources). This separation is also reflected in how the configuration is handled internally.
## API version
This document describes the API available in version v0.4 of the solver.
## General interface
### Simulation variables
- `sim.name` (string): name of the simulation. Also used as the name of the output directory.
- `sim.dt` (real): timestep duration.
- `sim.timesteps` (integer): number of timesteps to perform.
- `sim.gmsh_model` (string): name of the file containing the GMSH model.
- `sim.use_gpu` (0/1): enable/disable GPU usage.
- `sim.approx_order` (integer): method approximation order.
- `sim.geom_order` (integer): geometry order. Only order 1 supported for now.
- `sim.time_integrator` (string): `"rk4"`, `"leapfrog"` or `"euler"`. Euler is for testing purposes only, as it is numerically unstable.
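Putting the variables above together, a minimal sketch of this part of a configuration could look as follows (all names and values are illustrative):
```
sim.name = "waveguide_test"      -- illustrative simulation name
sim.dt = 1e-12                   -- timestep duration
sim.timesteps = 10000
sim.gmsh_model = "waveguide.msh" -- hypothetical model file
sim.use_gpu = 0
sim.approx_order = 3
sim.geom_order = 1               -- only order 1 supported for now
sim.time_integrator = "rk4"
```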
### Postprocessing general variables
- `postpro.silo_output_rate` (integer): rate at which Silo files are produced (for example, 100 gives you a Silo file every 100 timesteps).
- `postpro.cycle_print_rate` (integer): rate at which the simulation progress is printed (for example, 100 gives you an informational message every 100 timesteps).
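For example:
```
postpro.silo_output_rate = 100 -- a Silo file every 100 timesteps
postpro.cycle_print_rate = 10  -- a progress message every 10 timesteps
```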
### Mesh variables
- `mesh.scalefactor` (real): apply a scaling factor to the mesh.
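For example, if the mesh was drawn in millimeters while the simulation works in meters, one could set:
```
mesh.scalefactor = 1e-3
```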
### Parallel execution information (available only if MPI support is enabled)
The parallel solver runs as a set of separate MPI processes; as such, each process loads and executes its own instance of the configuration script. This means that you must beware of global variables and other shared state, because modifications are **not** reflected across processes. Although the configuration script should in general not depend on the rank of the current process, the following variables are exposed:
- `parallel.comm_rank` (integer): The MPI rank of the current solver process
- `parallel.comm_size` (integer): The MPI communicator size
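One benign use of these variables is rank-gated printing, so that informational messages are not duplicated across processes; a minimal sketch:
```
if parallel.comm_rank == 0 then
    print("running on " .. parallel.comm_size .. " MPI processes")
end
```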
### Callable functions
- `enable_boundary_sources()`: takes a boolean parameter specifying whether boundary sources should be enabled.
- `enable_interface_sources()`: takes a boolean parameter specifying whether the sources applied on internal interfaces should be enabled.
- `enable_volume_sources()`: takes a boolean parameter specifying whether volumetric sources should be enabled.
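For example:
```
enable_boundary_sources(true)   -- keep boundary sources active
enable_interface_sources(false) -- disable sources on internal interfaces
enable_volume_sources(true)     -- keep volumetric sources active
```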
#### Handling of GMSH physical groups
- `volume_physical_group_entities(pg)`: takes the tag of a volume physical group (`dim == 3` in GMSH parlance) and returns the tags of all the entities included in the physical group. This function is available only after mesh loading, therefore it is usually called inside some callback (see the sketch after this list).
- `surface_physical_group_entities(pg)`: same as above, but for surface physical groups (`dim == 2` in GMSH parlance).
- `on_exit()`: this callback is called at the exit of the program, just before the internal Lua context gets destroyed.
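Since the physical-group queries are available only after mesh loading, a natural place to call them is the `on_mesh_loaded()` callback documented below. A minimal sketch, assuming the tags are returned as a Lua array and using a hypothetical physical group tag `100`:
```
function on_mesh_loaded()
    local ents = volume_physical_group_entities(100)
    for _, ent in ipairs(ents) do
        print("entity in physical group 100: " .. ent)
    end
end
```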
## Maxwell solver interface
### Materials
Materials are specified by populating the `materials` table exposed by the solver. The material parameters are the relative electric permittivity `epsilon`, the relative magnetic permeability `mu` and the conductivity `sigma`. The free-space parameters are stored in `const.eps0` and `const.mu0`.
It is possible either to provide a function describing the material properties in the whole domain, or to specify the materials subdomain by subdomain. In the first case, the appropriate callbacks have to be installed in the `materials` table; in the second case, the appropriate variables must be set.
If, for example, the model has two subdomains with tags `1` and `2`, the electric permittivity can be specified as
```
function epsilon_callback(tag, x, y, z)
    if tag == 1 then
        return 1
    end
    return 2
end

materials.epsilon = epsilon_callback
```
or by setting the subdomain-specific variables as
```
materials[1] = {}
materials[1].epsilon = 1
materials[2] = {}
materials[2].epsilon = 2
```
Note that in this second case the subdomain-specific tables must be initialized with `{}` before setting any variable.
The two approaches can be mixed, and the subdomain-specific variables take priority over the whole-domain function.
The solver expects that the callbacks for materials have 4 parameters:
- `tag`: the current subdomain tag
- `x`, `y` and `z`: the current coordinates at which the material needs to be evaluated. Note that currently the materials are assumed to be piecewise constant and that the material is evaluated in the barycenter of the element.
### Volumetric sources
Current sources in volumes can be specified by providing a function that evaluates the current density field in a given subdomain. For example, the following code defines a volumetric source in the subdomain with tag `4`:
```
local freq = 300e6 -- source frequency (illustrative value)

function J_source(tag, x, y, z, t)
    local Jx = 0
    local Jy = 0
    local Jz = math.sin(2*math.pi*freq*t)
    return Jx, Jy, Jz
end

sources[4] = J_source
```
### Boundary conditions
Boundary conditions are specified per surface via the `bndconds` table, indexed by the surface tag. If a surface does not have an entry in `bndconds`, it is assumed to be PEC.
For example, to specify an impedance boundary condition on the surface with tag `4`, one proceeds as follows:
```
bndconds[4] = {}
bndconds[4].kind = "impedance"
```
There are also nonhomogeneous boundary conditions, such as the plane wave condition. A plane wave described in terms of the electric field is configured as
```
function pw_callback(tag, x, y, z, t)
--- Computation of the field components
return Ex, Ey, Ez
end
bndconds[4] = {}
bndconds[4].kind = "plane_wave_E"
bndconds[4].source = pw_callback
```
The available homogeneous conditions are:
- `"pec"`: Perfect Electric Conductor.
- `"pmc"`: Perfect Magnetic Conductor.
- `"impedance"`: Impedance condition, actual impedance value is computed using the material parameters of the adjacent element.
The available nonhomogeneous conditions are:
- `"E_field"`: electric field. Not implemented yet.
- `"H_field"`: magnetic field. Not implemented yet.
- `"surface_current"`: surface current.
- `"plane_wave_E"`: plane wave specified by the electric field. Normal incidence is assumed.
- `"plane_wave_H"`: plane wave specified by the magnetic field. Not implemented yet.
### Interface conditions
The interface conditions follow the exact same logic as boundary conditions, except that they are applied on internal interfaces. The table in this case is `ifaceconds`.
Currently, only a surface current condition is available:
- `"surface_current"`: surface current.
### Callable functions
None.
### Callbacks
- `before_start()`: if defined, the solver calls this function just before beginning the timestepping.
- `on_timestep(ts)`: if defined, the solver calls this function at every timestep. The solver passes the current timestep number in the parameter `ts`. The function is called **before** the solver state is advanced.
- `on_mesh_loaded()`: if defined, the solver calls this function just after loading the mesh.
- `electric_initial_condition(x, y, z)`: if defined, the solver calls this function at the beginning of the simulation in order to initialize the electric field. The parameters `x`, `y` and `z` represent the point at which the field needs to be evaluated. The function has to return a triple specifying the three field components, i.e. `return Ex, Ey, Ez`.
- `magnetic_initial_condition(x, y, z)`: if defined, the solver calls this function at the beginning of the simulation in order to initialize the magnetic field. The parameters `x`, `y` and `z` represent the point at which the field needs to be evaluated. The function has to return a triple specifying the three field components, i.e. `return Hx, Hy, Hz`.
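For example, the two initial conditions could be sketched as follows (the Gaussian width is illustrative):
```
function electric_initial_condition(x, y, z)
    -- Gaussian pulse in the z component of the electric field
    local r2 = x*x + y*y + z*z
    return 0, 0, math.exp(-r2/0.01)
end

function magnetic_initial_condition(x, y, z)
    -- start from a zero magnetic field
    return 0, 0, 0
end
```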
### Data types
- `point`: a three-dimensional point.
### Variables
- `internal.model`: opaque reference to the internal representation of the GMSH model. Valid only after the solver is fully initialized, i.e. inside the callback functions.
- `internal.state`: opaque reference to the internal representation of the solver state. Valid only after the solver is fully initialized, i.e. inside the callback functions.
### Debug and validation
The solver provides some facilities for validation and comparison with analytical solutions. These facilities are accessed via the `debug` table. In the current implementation the `debug` table is undefined by default, so it must be initialized explicitly in the configuration with the statement `debug = {}`. Possible members of the `debug` table are:
- `analytical_solution`: a function `f(tag, x, y, z, t)` defining the analytical solution of the problem. If defined, it can be used to compare the numerical solution with the analytical solution during the timestepping. The function has to return 6 values, namely `Ex`, `Ey`, `Ez`, `Hx`, `Hy` and `Hz`.
- `dump_cell_ranks` (bool): if true, and if the solver has MPI support compiled in, add to the Silo output a variable named `cell_ranks` showing the cell-to-rank mapping.
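A minimal sketch using both members (the field expressions are placeholders, not a meaningful solution):
```
debug = {}
debug.dump_cell_ranks = true
debug.analytical_solution = function(tag, x, y, z, t)
    -- illustrative plane wave propagating along z
    local Ex = math.cos(2*math.pi*1e9*(t - z/3e8))
    return Ex, 0, 0, 0, Ex/377, 0
end
```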
The analytical solution is compared with the current numerical solution via the `compute_error()` function (available only on CPU and CPU/MPI for now). For example (see previous section for the description of `on_timestep`)
```
function on_timestep(ts)
e = compute_error()
end
```
computes the error at each timestep and places it in the variable `e`. This variable provides 6 members named `Ex`, `Ey`, `Ez`, `Hx`, `Hy` and `Hz` so, for example, `e.Ex` gives access to the error of the `x` component of the electric field.
### Postprocessing
The export of the fields E, H and J to Silo is controlled using the appropriate sections in the `postpro` table.
- `postpro["E"].silo_mode` controls the export of the electric field E. Possible values are `"none"` (no export), `"nodal"` (export as a Silo nodal variable) and `"zonal"` (export as a Silo zonal variable)
- `postpro["H"].silo_mode` controls the export of the magnetic field H. Possible values are `"none"` (no export), `"nodal"` (export as a Silo nodal variable) and `"zonal"` (export as a Silo zonal variable)
- `postpro["J"].silo_mode` controls the export of the current density field J. Possible values are `"none"` (no export), `"nodal"` (export as a Silo nodal variable) and `"zonal"` (export as a Silo zonal variable)
Nodal variables represent the field at the mesh nodes. The nodal value is reconstructed by taking the average of the solution evaluated at the node in all the elements adjacent to that node.
Zonal variables represent the field as a piecewise-constant field evaluated in the barycenter of the element.
Default mode is `"nodal"`.
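For example, assuming the solver pre-creates the `postpro["E"]`, `postpro["H"]` and `postpro["J"]` subtables:
```
postpro["E"].silo_mode = "nodal" -- the default
postpro["H"].silo_mode = "zonal"
postpro["J"].silo_mode = "none"  -- do not export J
```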
- `compute_energy()`: this function provides the energy stored in the six field components at the current time (available only on CPU and CPU/MPI for now). For example (see previous section for the description of `on_timestep`)
```
function on_timestep(ts)
e = compute_energy()
end
```
computes the energy at each timestep and places it in the variable `e`. This variable provides 6 members named `Ex`, `Ey`, `Ez`, `Hx`, `Hy` and `Hz` so, for example, `e.Ex` gives access to the energy associated with the `x` component of the electric field.
### Considerations on the evaluation of sources and the computation on GPU
Currently, sources for timestep `t+1` are evaluated asynchronously on the CPU while the GPU is computing timestep `t`. The sources at `t+1` are then uploaded asynchronously from the CPU to the GPU. This usually works relatively well, especially on big domains with high approximation order; on small domains at low approximation order, however, you may notice slowdowns, especially in the GPU code. For this reason, if the sources do not need to be evaluated for the whole simulation, it is suggested to turn them off at the appropriate timestep by using the `on_timestep()` callback and the appropriate functions described above.
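For example, if the volumetric sources are known to be negligible after the first 1000 timesteps (the threshold is illustrative):
```
function on_timestep(ts)
    if ts == 1000 then
        -- stop evaluating and uploading volumetric sources from here on
        enable_volume_sources(false)
    end
end
```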