DL_MG Documentation

DL_MG - a hybrid parallel (MPI+OpenMP), high-order finite difference multigrid solver for the Poisson-Boltzmann Equation on 3D cuboid domains

Version
3.0.0 (13/06/2021)
Authors
Lucian Anton, James Womack, Jacek Dziedzic and others

Overview

DL_MG solves the Poisson-Boltzmann Equation defined by the following general expression

\[ \nabla\cdot[\epsilon(\vec r)\nabla\phi(\vec r)] = \alpha \rho(\vec r) + \lambda \sum_i c_i q_i \exp[-\beta (q_i \phi(\vec r) + V(\vec r))] \ , \]

in 3D over a cuboid domain with periodic, Dirichlet or mixed boundary conditions. In the above equation \(\epsilon(\vec r)\) is the relative permittivity, \(\phi(\vec r)\) is the electric potential, \(\rho(\vec r)\) is the charge density, \(q_i\) and \(c_i\) are the electric charge and the average bulk concentration (\(N_i/V\)) of ion type \(i\) of the electrolyte, respectively, \(\beta=1/k_BT\) is the inverse temperature, \(V(\vec r)\) is the steric potential, which accounts for the short-range repulsion between electrolyte ions and the solute, and \(\alpha\) and \(\lambda\) are constants which depend on the system of units used.

Specialised algorithms are used to solve the Poisson Equation

\[ \nabla\cdot[\epsilon(\vec r)\nabla\phi(\vec r)] = \alpha \rho(\vec r) \ , \]

and for the linearised Poisson-Boltzmann Equation,

\[ \nabla\cdot[\epsilon(\vec r)\nabla\phi(\vec r)] +\lambda \beta \sum_i c_i q_{i}^2 \exp[-\beta V(\vec r)]\phi(\vec r)= \alpha \rho(\vec r) \ . \]

More information on the algorithms used in the solver is available in Implementation Details.

Build

The source code is available at dlmg.org.

The easiest way to use DL_MG is to build it as a library and link it to your application. A few make variables must be defined in a platforms/<name>.inc file. For guidance, the user is advised to inspect the file platforms/archer.inc, which targets the ARCHER system (which offers several compilers via the module environment), or platforms/parallel_laptop.inc and platforms/serial_gnu.inc, which were used on workstations.

The following make variables control the build:

  • FC : Fortran compile command
  • USE_OPENMP : 'yes' enables OpenMP, 'no' disables it
  • USE_MPI : if defined, enables the MPI build
  • BUILD : must be set to opt or debug.
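
As an illustration, a minimal platforms/<name>.inc using the variables above might look like the following. The file name and the specific compiler command and values are assumptions; adapt them to your system and toolchain.

```makefile
# platforms/example_gnu.inc -- illustrative sketch only.
# Variable names are those documented above; the values are assumptions.
FC         = mpif90   # Fortran compile command (here, an assumed MPI wrapper)
USE_MPI    = yes      # any definition enables the MPI build
USE_OPENMP = yes      # 'yes' enables OpenMP, 'no' disables it
BUILD      = opt      # 'opt' or 'debug'
```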

Usage

The calling application can access DL_MG's public subroutines and constants via a use dl_mg statement.

Solver initialisation is done with dl_mg_init. The solution is computed by calling the generic interface dl_mg_solver, which selects the Poisson or the Poisson-Boltzmann solver according to the input parameters.

If the application needs to solve another problem (e.g. with new grid sizes or boundary conditions), the internal data structures must be cleaned up with dl_mg_free before calling dl_mg_init again.

A set of public constants is available; the error codes are described in Error Codes.

The message associated with an error code can be obtained with dl_mg_error_string.

The DL_MG version can be obtained with dl_mg_version.
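
The call sequence described above can be sketched as follows. This is an illustrative outline, not a complete program: the argument lists of dl_mg_init and dl_mg_solver are elided (shown as ...), and the success constant DL_MG_SUCCESS is an assumption; consult the interface documentation for the exact signatures and constant names.

```fortran
! Sketch of the DL_MG call sequence; argument lists are elided and
! DL_MG_SUCCESS is an assumed constant name -- see the interface docs.
program pb_example
  use dl_mg
  implicit none
  integer :: ierr

  ! ... allocate and fill permittivity, charge density, potential arrays ...

  call dl_mg_init(...)                  ! set up grids and internal data
  call dl_mg_solver(...)                ! Poisson or PB solver, selected by inputs
  if (ierr /= DL_MG_SUCCESS) then
    write (*,*) dl_mg_error_string(ierr)  ! translate error code to a message
  end if
  call dl_mg_free()                     ! clean up before solving a new problem
end program pb_example
```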

Note on embedded grid data

The input arrays do not need to provide any halo space. However, if in the calling routine the problem's grid is embedded in a larger array, the calling routine must pass to the solver only the array section which contains the problem domain grid data, using modern Fortran array section syntax. For example, if the potential array pot contains grid data starting from indices sx, sy, sz, the solver must be called as follows:

call dl_mg_solver(..., pot(sx:,sy:,sz:), ...)

There is no need to provide the endpoints because they are derived from the initialisation data passed to dl_mg::dl_mg_init().

Restrictions and Limitations

  • The multigrid solver is efficient only if the grid sizes comply with the constraints described in Grid Size Constraints.
  • This code is not thread safe! It is designed to use threads internally, but it must be called outside of OpenMP parallel regions.
  • Single precision is not implemented.
  • Each MPI rank used by the solver must be associated with a local grid that contains a non-zero number of global grid inner points, i.e. not only boundary values, in the case of Dirichlet boundary conditions.

Known Problems

Executables generated by the Intel compiler (versions 16.x-17.x) with debug flags crash if run with more than one OpenMP thread.