INSTALL the different projects of cm3 lab
update 06-2017
This file describes how to install the cm3 projects on your computer. On the cm3 clusters (e.g. cm3011, cm3012) all packages are already installed and the installation procedure is described later (see B).
There is now a special cm3apps project that rules them all, described herein, but you can still install a single project as before, directly from that project's folder.
A) The packages needed to install gmsh (gmsh has to be installed to use cm3apps):
# For Ubuntu use sudo apt-get install
- git
- libblas-dev
- libopenmpi-dev
- openmpi-bin
- liblapack-dev
- g++
- gfortran
- cmake-curses-gui
- swig
- libfltk1.3-dev
- libpng-dev
- libjpeg-dev
- petsc-dev
- slepc-dev
- optional: python3-scipy; otherwise you might need libpython3-dev, python3-numpy, python3-pandas, etc.
- optional: libmumps-dev (should already be provided by petsc)
- all: sudo apt-get install libblas-dev libopenmpi-dev openmpi-bin liblapack-dev g++ gfortran cmake-curses-gui swig libfltk1.3-dev libpng-dev libjpeg-dev petsc-dev slepc-dev python3-scipy libpython3-dev python3-numpy python3-pandas libmumps-dev
# To use vnc
a) launch the drivers tool and select the most recent proprietary driver
b) install vnc4server
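A minimal sketch of step b) on Ubuntu (the display number :1 is an arbitrary example):
sudo apt-get install vnc4server
vncserver :1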
# To use TensorFlow
sudo apt-get install python3-pip
sudo pip3 install --upgrade tensorflow==2.0.0a0
# For openSUSE add
- freeglut-devel
- libopenssl-devel
- libcryptopp-devel
- cairo-devel
- gmp-devel
- libSM-devel
- bison (for mumps)
- flex (for mumps)
° For openSUSE the following packages are available via yast2
- make, cmake, gcc, gcc-c++, gcc-fortran, python3 (+ python3-devel) and git
- fltk
- GLU (for openSUSE the package name is freeglut; install freeglut-devel too)
- blas (advanced: you can install GotoBLAS, see the instructions later; it can be installed after PETSc)
for openSUSE 12.2 or higher the package is blas-devel (and not blas)
- lapack: for openSUSE 12.2 or higher the package is lapack-devel (and not lapack)
- openmpi (don't forget to select openmpi-devel)
Under openSUSE 12.3 with OpenMPI 1.6.3 there is a wrong link: the RTE searches for a file in /usr/lib64/mpi/gcc/openmpi/etc while this file is in /etc, so the solution consists in creating a symbolic link to /etc in /usr/lib64/mpi/gcc/openmpi with the following command as root:
ln -s /etc /usr/lib64/mpi/gcc/openmpi/etc
- libpng and libjpeg (with devel and libpng-compat-devel)
- PCRE (devel and tools) for swig (can be installed directly by swig, see later)
° CodeBlocks (optional; it is just to edit the source in a friendly cross-platform IDE)
1) On opensuse go to yast
2) select software repository
3) select add
4) choose http and next
5) Give a name to the repository, e.g. codeblocks, and
give the following url: http://download.opensuse.org/repositories/devel%3a/tools%3a/ide/openSUSE_11.3
be aware that if you use openSUSE 11.4 instead of 11.3 you must change the 3 into a 4 at the end of the url
6) Install CodeBlocks via yast2. The package CodeBlocks Contrib has to be removed (otherwise it'll use 100% of your CPU all the time, even after you close the window!)
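The repository can also be added from the command line; a sketch with zypper (the repository alias "codeblocks" and the package name codeblocks are assumptions, and the url must match your openSUSE version as explained above):
sudo zypper addrepo http://download.opensuse.org/repositories/devel%3a/tools%3a/ide/openSUSE_11.3 codeblocks
sudo zypper refresh
sudo zypper install codeblocks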
° SWIG > 2.0 (can be installed from the package repository for openSUSE >= 12.1)
1) Download the sources from http://www.swig.org/download.html Be aware that 2.0.4 seems not to work on Lemaitre2; use the 2.0.2 version instead
2) untar the archive (by default swig is installed in /usr/lib, so perform the installation as superuser; the place of extraction is not important). If you cannot log in as root (on clusters for example), add --prefix=$HOME/swig to the ./configure command; it will install swig locally in the folder swig in your home directory
3) go in the extracted folder
4) Type:
./configure (or ./configure --prefix=$HOME/swig)
if configure reports a problem with pcre, follow the instructions provided: download pcre from http://www.pcre.org, place the archive in swig's configuration folder, then type Tools/pcre-build.sh (for more help type Tools/pcre-build.sh --help) and redo ./configure in swig
make
make install
° CMake (if your version is too old, e.g. 2.6.4)
1) Download the source from cmake website http://www.cmake.org/cmake/resources/software.html
2) Untar the archive
tar -xvf blabla.tar.gz
3) Create in your home (or somewhere else) a cmake directory
mkdir cmake
4) Go in the extracted folder and do
./configure --prefix=$HOME/cmake
gmake
gmake install
5) in your .bashrc add the cmake/bin folder to your PATH (put it first so that this version is the one picked up)
export PATH=$HOME/cmake/bin:$PATH
6) bash; cmake --version
should report the version you just installed
° PETSc (has to be installed manually)
1) Download sources : http://www.mcs.anl.gov/petsc/petsc-as/documentation/installation.html
or use the tar archive from the distpetsc folder
2) untar the archive where you want to install it
3) edit your .bashrc file (located in your home folder) and add the lines:
export PETSC_DIR=<the absolute path where you installed petsc>
export PETSC_ARCH=linux-gnu-c-opt (if petsc is configured with --with-debugging=0; otherwise it is linux-gnu-c-debug)
4) close your terminal and open it again (to reload your .bashrc)
5) Go to your petsc installation folder and type the configure command matching your machine and petsc version (cluster-specific variants are given below; the hmem server requires its dedicated configuration below):
For 3.13 on dragon1, dragon2:
./configure --with-debugging=0 --download-fblaslapack=yes --download-mumps=yes --download-scalapack=yes --download-blacs=yes --with-mpi-dir=$MPI_HOME --with-pic --with-fpic --with-shared-libraries=1 --with-clanguage=cxx --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
For 3.10, without superlu:
./configure -configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-debugging=0 --download-fblaslapack=yes --download-mumps=yes --download-scalapack=yes --download-mpich=yes --with-shared-libraries=yes --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes
In case of a problem with fblaslapack or an fc error, use --with-fc=gfortran
For 3.7 or 3.8 on the ceci cluster zenobe, do not use --with-batch:
./configure --with-debugging=0 --download-fblaslapack=yes --download-mumps=yes --download-scalapack=yes --download-mpich=yes --with-shared-libraries=yes --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
For 3.4 on the ceci clusters (dragon1, lemaitre2, hercules; for hmem see below), after loading all the required modules (see 5c):
./config/configure.py --with-debugging=0 --with-blas-lib=$BLASDIR/lib$BLASLIB.so --with-lapack-lib=$LAPACKDIR/lib$LAPACKLIB.so --with-mpi-dir=$MPI_HOME --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0
for the hmem cluster the blas and lapack modules do not exist, so use
./configure --with-debugging=0 --download-fblaslapack=1 --download-mumps=yes --download-scalapack=yes --download-blacs=yes --download-mpich=yes --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
for the nic4 cluster (see the export variables for the .bashrc below; this requires mpich, not openmpi):
./configure --with-debugging=0 --download-fblaslapack=1 --download-mumps=yes --download-scalapack=yes --download-blacs=yes --download-mpich=yes --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
for vega, openmpi and lapack do not work; use
./configure --with-debugging=0 --download-fblaslapack=1 --download-mumps=yes --download-scalapack=yes --download-blacs=yes --download-mpich=yes --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
or, if it cannot download the packages:
./configure --with-debugging=0 --with-blas-lib=/cm/shared/apps/blas/gcc/current/lib64/libblas.a --with-lapack-lib=/cm/shared/apps/lapack/gcc/64/3.5.0/liblapack.so --download-mumps=yes --download-scalapack=yes --download-blacs=yes --with-mpi-dir=/cm/shared/apps/mpich/ge/gcc/64/3.1.4 --with-pic --with-shared-libraries=0 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
but then add the variables in your .bashrc (see below)
put --with-debugging=1 for debug
for more petsc configure options type: ./configure --help
On clusters (6/12 or 4/8 cpus) add --with-batch --known-mpi-shared=0
in case of the error "MPI wrappers do not work", use --with-mpi-compilers=0
on hydra (there is a conflict with metis if it is downloaded by petsc):
- petsc 3.8:
./configure --with-debugging=0 --with-scalapack-lib=$SCALAPACKDIR/lib$SCALAPACKLIB.so --download-mumps=yes --with-blas-lib=$BLASDIR/lib$BLASLIB.so --with-lapack-lib=$LAPACKDIR/lib$LAPACKLIB.so --with-mpi-dir=$MPI_HOME --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=1
- petsc 3.6:
./configure --with-debugging=0 --download-fblaslapack=1 --download-mumps=yes --download-scalapack=yes --download-blacs=yes --download-mpich=yes --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes
5b) On the ceci clusters (nic4, hmem, vega, dragon1, hercules and lemaitre2) the configure option creates a file that you have to submit to the job manager. To this end type (on vega the script is different):
cp gmsh/projects/NonLinearSolver/clusterScript/ceci_petsc.sh $PETSC_DIR or, on vega, cp gmsh/projects/NonLinearSolver/clusterScript/vega_petsc.sh $PETSC_DIR
sbatch ceci_petsc.sh (vega_petsc.sh) (or nic4_petsc.sh on nic4 easybuild only)
Then wait until the job ends (the last line of the created file "petsc-install.out" has to be "Finish"). The file ./reconfigure-linux-gnu-c-opt.py is created, which you have to launch by typing:
./reconfigure-linux-gnu-c-opt.py
for Leap 42.2 with petsc-3.7.4
./configure --with-debugging=0 --with-blas-lapack-dir=/usr/lib64 --with-mpi-dir=/usr/lib64/mpi/gcc/openmpi --with-pic --with-shared-libraries=1 --with-clanguage=cxx --download-mumps --download-scalapack --download-parmetis --download-metis --download-ptscotch --download-superlu_dist
for petsc-3.4.3 there is a mistake in petscsys.h: before the definition #undef __FUNCT__ #define __FUNCT__ "PetscMemcpy" you should add the following include:
#include <string.h>
5c) This is a macro that loads all the required modules on the ceci clusters. Be aware that the macro is cluster dependent, as the versions of the packages vary from one cluster to another. "module avail" lists all the available modules (including different compiler and MPI versions), so you may change this macro to fit your needs. You can put it in your .bashrc; then bash; load_module will load the required modules.
- dragon1:
function load_module()
{
module purge
module load GCC/7.3.0-2.30
module load CMake/3.11.4-GCCcore-7.3.0
module load OpenMPI/3.1.1-GCC-7.3.0-2.30
module load Python/2.7.15-GCCcore-7.3.0-bare
export MPI_HOME=/usr/local/Software/.local/easybuild/software/OpenMPI/3.1.1-GCC-7.3.0-2.30/
export MPI_RUN=$MPI_HOME/bin/mpirun
echo "List of loaded modules:"
module list
}
load_module
export PETSC_DIR=$HOME/local/petsc-3.13.2
export PETSC_ARCH=arch-linux-cxx-opt
export SLEPC_DIR=$HOME/local/slepc-3.13.3
export SLEPC_ARCH=linux-gnu-c-opt
export BLASDIR=$PETSC_DIR/$PETSC_ARCH/lib
export BLASLIB=fblas
export LAPACKDIR=$PETSC_DIR/$PETSC_ARCH/lib
export LAPACKLIB=flapack
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$HOME/local/swig/bin:$HOME/local/bin:$PATH
export PATH=$PATH:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh
export PYTHONPATH=$PYTHONPATH:$HOME/cm3Libraries/dG3D/release:$HOME/cm3Libraries/dG3D/debug:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh/utils/wrappers/gmshpy:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers/gmshpy
unset SSH_ASKPASS
- dragon2:
function load_module()
{
module purge
module load GCC/7.3.0-2.30
module load CMake/3.9.6
module load OpenMPI/3.1.1-GCC-7.3.0-2.30
module load Python/2.7.15-GCCcore-7.3.0-bare
export MPI_HOME=/opt/cecisw/arch/easybuild/2018b/software/OpenMPI/3.1.1-GCC-7.3.0-2.30
export MPI_RUN=$MPI_HOME/bin/mpirun
echo "List of loaded modules:"
module list
}
load_module
export PETSC_DIR=$HOME/local/petsc-3.13.2
export PETSC_ARCH=arch-linux-cxx-opt
export SLEPC_DIR=$HOME/local/slepc-3.13.3
export SLEPC_ARCH=arch-linux-cxx-opt
export BLASDIR=$PETSC_DIR/$PETSC_ARCH/lib
export BLASLIB=fblas
export LAPACKDIR=$PETSC_DIR/$PETSC_ARCH/lib
export LAPACKLIB=flapack
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$HOME/local/swig/bin:$PATH
export PATH=$HOME/local/bin:$PATH
export PATH=$PATH:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh
export PYTHONPATH=$PYTHONPATH:$HOME/cm3Libraries/dG3D/release:$HOME/cm3Libraries/dG3D/debug:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh/utils/wrappers/gmshpy:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers/gmshpy
unset SSH_ASKPASS
with dragon1 and dragon2, when doing ccmake .., you need to specify the locations /home/ulg/cmmm/lnoels/local/petsc-3.13.2/arch-linux-cxx-opt/lib/libfblas.a and /home/ulg/cmmm/lnoels/local/petsc-3.13.2/arch-linux-cxx-opt/lib/libflapack.a using toggle mode;
you also need to compile without gmm, fltk, numpy and without metis
#load_module #do not load it if you want to use scratchcopy etc
- hmem:
function load_module()
{
module purge
module load CMake/3.3.1
module load python/2.7.3
module load GCC/4.8.2
#module load OpenMPI/1.7.3-GCC-4.8.2
#export MPI_HOME=/usr/local/Software/OpenMPI/1.7.3-GCC-4.8.2/
#export LD_PRELOAD=$LD_PRELOAD:/cvos/shared/apps/openmpi/gcc/64/1.6.2/lib64/libmpi.so
export PETSC_DIR=$HOME/local/petsc-3.6.4
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=$HOME/local/slepc-3.6.3
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
export PATH=$HOME/local/bin:$PATH
export MPI_HOME=$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
}
- lemaitre2:
function load_module() # as initadd seems not to work
{
module purge
# hack as LAPACK uses BLASDIR BLASLIB
module add lapack/gcc/3.2.1
export LAPACKDIR=$BLASDIR
export LAPACKLIB=$BLASLIB
# then load blas
module add blas/gcc/3.2.1
module add cmake/2.8.11.2
module add gcc/4.7.2
module add openmpi/1.6.5/gcc-4.7.2
module add python/2.7.3
}
- hercules:
function load_module()
{
module load GCCcore/5.4.0
module load ifort/2016.3.210-GCC-5.4.0-2.26
module list
}
in .bashrc:
export PATH=$HOME/local/cvs/bin:$HOME/local/cmake/bin:$HOME/local/swig/bin:$PATH
export PETSC_DIR=$HOME/local/petsc-3.10.5
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=$HOME/local/slepc-3.10.2
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
export MPI_HOME=$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
export PATH=$PATH:$HOME/cm3Libraries/cm3apps/release/NonLinearSolver/gmsh
export PATH=$PATH:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh
export PATH=$PATH:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh
export PYTHONPATH=$PYTHONPATH:$HOME/cm3Libraries/dG3D/release:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers/gmshpy
- vega: (this cluster loads common modules in the user .bashrc, so I do not define a function but add the modules directly in the .bashrc):
already in your .bashrc:
module load GCC/4.8.3
module load SWIG/3.0.2-intel-2014b-Python-2.7.8
module load Python/2.7.8-intel-2014b
module load CMake/3.0.0-intel-2014b
export PETSC_DIR=$HOME/local/petsc-3.6.4
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=$HOME/local/slepc-3.6.3
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
export MPI_HOME=$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
export PATH=$HOME/local/bin:$PATH
export PATH=$PATH:$HOME/gmsh/projects/dG3D/release
export PATH=$PATH:$HOME/gmsh/projects/dG3D/release/NonLinearSolver/gmsh
export PYTHONPATH=$PYTHONPATH:$HOME/gmsh/projects/dG3D/release:$HOME/gmsh/projects/dG3D/release/NonLinearSolver/gmsh/utils/wrappers:$HOME/gmsh/projects/dG3D/release/NonLinearSolver/gmsh/utils/wrappers/gmshpy
!!!! USE ENABLE_CXX11=OFF; for old versions: in the CMakeLists of dG3D and NonLinearSolver, replace std=c++11 by std=c++0x to get set(CMAKE_CXX_FLAGS " ${CMAKE_CXX_FLAGS} -DNONLOCALGMSH -std=c++0x")
!!!! if gmsh gives a run-time error related to metis, compile gmsh without METIS
- nic4:
module load slurm
function load_module()
{
module purge
module load slurm
module load cmake/2.8.12.1
module load gcc/4.8.1
#module load openmpi/qlc/gcc/64/1.6.4
module load python/2.7.6
module list
}
load_module
export PETSC_DIR=$HOME/local/petsc-3.6.4
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=$HOME/local/slepc-3.6.3
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
export MPI_HOME=$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
export PATH=$HOME/local/bin:$PATH
export PATH=$PATH:$HOME/gmsh/projects/dG3D/release
export PATH=$PATH:$HOME/gmsh/projects/dG3D/release/NonLinearSolver/gmsh
export PYTHONPATH=$PYTHONPATH:$HOME/gmsh/projects/dG3D/release:$HOME/gmsh/projects/dG3D/release/NonLinearSolver/gmsh/utils/wrappers
!!!! USE ENABLE_CXX11=OFF; for old versions: in the CMakeLists of dG3D and NonLinearSolver, replace std=c++11 by std=c++0x to get set(CMAKE_CXX_FLAGS " ${CMAKE_CXX_FLAGS} -DNONLOCALGMSH -std=c++0x")
- hydra:
function load_module()
{
module load GCC/6.4.0-2.28
module load OpenMPI/2.1.1-GCC-6.4.0-2.28
module load OpenBLAS/0.2.20-GCC-6.4.0-2.28
module load ScaLAPACK/2.0.2-gompi-2017b-OpenBLAS-0.2.20
module load CMake/3.9.1-GCCcore-6.4.0
module load METIS/5.1.0-GCCcore-6.4.0
export PETSC_DIR=$HOME/local/petsc-3.8.3
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=$HOME/local/slepc-3.8.2
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
export MPI_HOME=/apps/brussel/CO7/magnycours-ib/software/OpenMPI/2.1.1-GCC-6.4.0-2.28 #$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
export BLASDIR=/apps/brussel/CO7/magnycours-ib/software/OpenBLAS/0.2.20-GCC-6.4.0-2.28/lib/
export BLASLIB=openblas
export LAPACKDIR=$BLASDIR #/apps/brussel/CO7/magnycours-ib/software/OpenBLAS/0.2.19-GCC-6.3.0-2.27-LAPACK-3.7.0/lib/
export LAPACKLIB=openblas
export SCALAPACK=/apps/brussel/CO7/magnycours-ib/software/ScaLAPACK/2.0.2-gompi-2017b-OpenBLAS-0.2.20/lib
export SCALAPACKLIB=scalapack
}
module purge
load_module
- zenobe:
module purge
function load_module()
{
#module purge
module load cmake/3.6.1/64/gcc/4.4.7
module load python/2.7-RH6
module load compiler/gcc/4.4.7
export SCRATCH=/SCRATCH/ulg-cmmm/lnoels
export PETSC_DIR=$HOME/local/petsc-3.7.6
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=$HOME/local/slepc-3.7.4
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
export MPI_HOME=$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
module load swig/3.0.10/64/gcc/4.4.7
}
load_module
!!!! in the CMakeLists of dG3D and NonLinearSolver, replace std=c++11 by std=c++0x to get set(CMAKE_CXX_FLAGS " ${CMAKE_CXX_FLAGS} -DNONLOCALGMSH -std=c++0x")
5d) Since PETSc 3.4 there seems to be an issue when you compile gmsh: you get error messages telling that memcpy, memset, ... are not defined. To solve this problem you have to add
#include<cstring>
at the first line of the file $PETSC_DIR/include/petscsys.h
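For example, one way to do this (a sketch assuming GNU sed; it simply prepends the include to the header):
sed -i '1i #include<cstring>' $PETSC_DIR/include/petscsys.h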
6) If the configuration failed, ask or search... Otherwise type the following command (if it is suggested by configure, copy and paste it):
make PETSC_DIR=<installation folder> PETSC_ARCH=linux-gnu-c-opt all
7) Perform the tests
8) Go to your home folder. Create a file ".petscrc", which allows you to pass options to petsc, and add the two lines:
-ksp_type preonly
-pc_type lu
If these options are set before the tests, one test (MPI on 2 cpus) can fail, but everything is OK (seems OK for petsc 3.3)
8b) To perform parallel computations with a direct solver, add the following lines to your .petscrc:
-pc_factor_mat_solver_type mumps
-mat_mumps_icntl_14 1000 # increase 1000 if the memory is not enough
-mat_mumps_icntl_13 1 # to avoid nan
or -mat_mumps_icntl_23 5000 # allocate 5000MB per proc, increase if the memory is not enough
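Putting the above together, a possible ~/.petscrc could look like this (the icntl values are just the examples from above; use -mat_mumps_icntl_23 instead of -mat_mumps_icntl_14 if you prefer to set the memory per proc explicitly):
-ksp_type preonly
-pc_type lu
-pc_factor_mat_solver_type mumps
-mat_mumps_icntl_14 1000
-mat_mumps_icntl_13 1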
9) Unfortunately (with version 3.1-p7 and openSUSE 11.3) there is a problem with the library libmpi_f77.so.0. To fix it, copy the library of the openmpi installation into $PETSC_DIR/$PETSC_ARCH/lib. Assuming an openSUSE installation, type the command
sudo cp /usr/lib64/mpi/gcc/openmpi/lib64/libmpi_f77.so.0 <absolute path to petsc installation folder/linux-gnu-c-opt/lib>
If you use the alternative configuration (with mpich) this problem seems to be solved. If petsc is installed with mpich, uninstall the other mpich and openmpi versions on your computer via your software management program (yast on openSUSE) and edit your .bashrc file:
cd
vi .bashrc
export PATH=$PATH:<petsc installation path>/<petsc arch>/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<petsc installation path>/<petsc arch>/lib
:wq (to quit vi)
bash (or exit to reload .bashrc in your shell)
Proceeding like this, the mpi version used is the one installed by petsc, which ensures that gmsh uses the same version of mpi
° GotoBLAS (Advanced)
1) Download the source: http://www.tacc.utexas.edu/tacc-projects/gotoblas2/downloads/
2) untar it in an installation folder
3) Edit Makefile.rule to choose the compilation options
be aware that the TARGET option has to be NEHALEM
4) Type the make command
5) Unfortunately there is a problem when gmsh or dgshell links the libraries. To fix it, the 2 following libraries have to be copied from petsc:
libfblas.a
libflapack.a
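A possible sketch, assuming the two libraries are to be copied from the petsc lib folder into your GotoBLAS installation folder (adapt the destination to wherever your build actually looks for them):
cp $PETSC_DIR/$PETSC_ARCH/lib/libfblas.a $PETSC_DIR/$PETSC_ARCH/lib/libflapack.a <GotoBLAS installation folder>/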
B) Get access to server
Go on http://gitlab.onelab.info/gmsh/gmsh and register
Once registered, ask Ludovic by e-mail for access to cm3Libraries
C) Install Open Cascade (optional)
apt-get install libxi-dev libxmu-dev curl (freetype2-demos libfreetype6 libfreetype6-dev if needed)
mkdir local && cd local && mkdir occt
curl -L -o occ73.tgz "http://git.dev.opencascade.org/gitweb/?p=occt.git;a=snapshot;h=refs/tags/V7_3_0;sf=tgz"
tar xf occ73.tgz
cd occt-V7_3_0 && mkdir build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_MODULE_Draw=0 -DBUILD_MODULE_Visualization=0 -DBUILD_MODULE_ApplicationFramework=0 -DINSTALL_DIR=$HOME/local/occt && make -j8 && make install
add in your .bashrc export LD_LIBRARY_PATH=$HOME/local/occt/lib/:$LD_LIBRARY_PATH
add in your .bashrc export CASROOT=$HOME/local/occt
D) Install Torch
cd local
// no gpu
wget https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-1.4.0%2Bcpu.zip
unzip libtorch-shared-with-deps-1.4.0+cpu.zip
// gpu
1) clean up
sudo rm /etc/apt/sources.list.d/cuda*
sudo apt remove --autoremove nvidia-cuda-toolkit
sudo apt remove --autoremove nvidia-*
2) purge
sudo apt-get purge nvidia*
sudo apt-get autoremove
sudo apt-get autoclean
sudo rm -rf /usr/local/cuda*
3) install graphics
sudo apt update
sudo add-apt-repository ppa:graphics-drivers
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
sudo bash -c 'echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 /" > /etc/apt/sources.list.d/cuda.list'
sudo bash -c 'echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64 /" > /etc/apt/sources.list.d/cuda_learn.list'
sudo apt update
sudo apt install cuda-11-0
sudo apt install libcudnn7
add in your .bashrc
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
4) install cudnn
download cudnn from nvidia (you need to register and download it manually):
https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.0.4/11.0_20200923/cudnn-11.0-linux-x64-v8.0.4.30.tgz
copy it into your local directory
tar xvf cudnn-11.0-linux-x64-v8.0.4.30.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
5) download torch with gpu
wget https://download.pytorch.org/libtorch/cu110/libtorch-shared-with-deps-1.7.0%2Bcu110.zip
unzip libtorch-shared-with-deps-1.7.0%2Bcu110.zip
add export TORCHDIR=$HOME/local/libtorch in your .bashrc
use ENABLE_TORCH ON when doing ccmake ..
E) Now you can install your project(s) (dgshell and/or dG3D and/or msch), NonLinearSolver and gmsh at the same time. gmsh is created in the NonLinearSolver folder because NonLinearSolver is in fact also a gmsh project that has to be included in your project(s). (To be proper, NonLinearSolver should sit outside projects in the gmsh folder, but this is apparently not possible.) (These instructions have to be followed on clusters too.)
0) Modify your .bashrc
(if you have installed petsc/slepc, cf. the explanation in the petsc installation)
export PETSC_DIR=/petsc/petsc-xxxx (depending on the version/directory)
export PETSC_ARCH=linux-gnu-c-opt
export SLEPC_DIR=/slepc/slepc-xxxx (depending on the version/directory)
export SLEPC_ARCH=linux-gnu-c-opt
export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH
export PATH=$PETSC_DIR/$PETSC_ARCH/bin:$PATH
(if you have downloaded mpi with petsc, cf explanation with petsc installation)
export MPI_HOME=$PETSC_DIR/$PETSC_ARCH/
export MPI_RUN=$MPI_HOME/bin/mpirun
(In case of a problem where mpi is not found and mpi was not installed with petsc)
export LD_PRELOAD=$LD_PRELOAD:/usr/lib/openmpi/lib/libmpi.so (not always required)
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/openmpi/lib (not always required)
export NLSMPIINC=/usr/lib/openmpi/include (not always required)
(paths for your installation)
export PATH=$PATH:$HOME/cm3Libraries/cm3apps/release/NonLinearSolver/gmsh
export PATH=$PATH:$HOME/cm3Libraries/dgshell/debug/NonLinearSolver/gmsh
export PATH=$PATH:$HOME/cm3Libraries/dgshell/release/NonLinearSolver/gmsh
export PATH=$PATH:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh
export PATH=$PATH:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh
export PYTHONPATH=$PYTHONPATH:$HOME/cm3Libraries/dgshell/release:$HOME/cm3Libraries/dgshell/release/dG3D:$HOME/cm3Libraries/dgshell/debug/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dgshell/debug/NonLinearSolver/gmsh/utils/wrappers/gmshpy:$HOME/cm3Libraries/dgshell/release/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dgshell/release/NonLinearSolver/gmsh/utils/wrappers/gmshpy
export PYTHONPATH=$PYTHONPATH:$HOME/cm3Libraries/dG3D/release:$HOME/cm3Libraries/dG3D/debug:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/debug/NonLinearSolver/gmsh/utils/wrappers/gmshpy:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers:$HOME/cm3Libraries/dG3D/release/NonLinearSolver/gmsh/utils/wrappers/gmshpy
(reload bashrc)
exit the terminal and log in again (to reload your .bashrc file)
1) get gmsh and cm3Libraries in the same directory, with gmsh cloned under the name gmsh (otherwise the CMake files do not work)
git clone http://$USER@gitlab.onelab.info/gmsh/gmsh.git gmsh
git clone http://$USER@gitlab.onelab.info/cm3/cm3Libraries.git cm3Libraries
# if you want the data
git clone http://$USER@gitlab.onelab.info/cm3/cm3Data.git cm3Data
#if you have access to cm3MFH
git clone http://$USER@gitlab.onelab.info/cm3/cm3MFH.git cm3MFH
#if you plan to submit changes, use a branch
git checkout -b BRANCHNAME
2) go to the folder cm3Libraries/dG3D, cm3Libraries/dgshell or cm3Libraries/cm3apps
#if you have access to cm3MFH go to cm3Libraries/dG3D/src and use ./copyFiles.sh
3) create a folder for installation
mkdir release (or debug; on clusters use release)
4) type ccmake .. and set the options (a non-interactive cmake equivalent is sketched after this list)
CMAKE_BUILD_TYPE release (or debug)
ENABLE_BUILD_DYNAMICS = ON (dynamic executable using the lib to not compile everything twice)
ENABLE_BUILD_SHARED = ON (creates the Gmsh library you need)
ENABLE_PRIVATE_API=ON (to get the full gmsh code)
ENABLE_MPI = ON (for the parallel implementation)
ENABLE_TAUCS=OFF (or install it but it's hard)
ENABLE_SLEPC=ON
ENABLE_PETSC=ON
ENABLE_MUMPS=ON
ENABLE_NUMPY=ON
ENABLE_GMM=OFF
ENABLE_WRAP_PYTHON=ON
#temporary with lapack instead of eigen
ENABLE_EIGEN=OFF
ENABLE_BLAS_LAPACK=ON
if you use cm3apps you have to set to ON the projects you want to install (no project is installed by default, except the solver)
ENABLE_DGSHELL = ON/OFF
ENABLE_DG3D = ON/OFF
On some ceci clusters c++11 does not work:
ENABLE_CXX11 = OFF
On dragon1/hydra some FLTK components are present but not all, so you have to disable it:
ENABLE_FLTK = OFF
ENABLE_OCC = OFF
On some stations, if you have an error at run time:
ENABLE_METIS = OFF
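If you prefer a non-interactive configuration, the same options can be passed directly to cmake; a sketch (assuming a release build of dG3D; adjust the ENABLE_* values to the choices above):
cd cm3Libraries/dG3D/release
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_BUILD_DYNAMICS=ON -DENABLE_BUILD_SHARED=ON \
  -DENABLE_PRIVATE_API=ON -DENABLE_MPI=ON -DENABLE_SLEPC=ON -DENABLE_PETSC=ON -DENABLE_MUMPS=ON \
  -DENABLE_NUMPY=ON -DENABLE_GMM=OFF -DENABLE_WRAP_PYTHON=ON -DENABLE_EIGEN=OFF -DENABLE_BLAS_LAPACK=ON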
5) press c to configure
6) press e
7) press c. If the report shows some errors, I made a mistake in the description of the installation: please ask or install the missing packages. If no error appears, press e then g to generate
8) If you want a CodeBlocks project file, type
cmake .. -G "CodeBlocks - Unix Makefiles"
9) type
make -j<number of cpu> to generate gmsh and dgshell or make dgshell -j<number of cpu> to compile just dgshell
If you have an error during the compilation about mpi.h not being found for petsc (can occur with petsc 3.2), you must define in your .bashrc the variable NLSMPIINC, which has to give the path of your mpi includes (e.g. /usr/lib64/mpi/gcc/openmpi/include); then type bash in your compilation shell and compile again
If you have a c++11 error, change the CMakeLists of dG3D and NonLinearSolver: replace std=c++11 by std=c++0x to get set(CMAKE_CXX_FLAGS " ${CMAKE_CXX_FLAGS} -DNONLOCALGMSH -std=c++0x")
10) If errors occur ask or fix them
On lemaitre2 there is a problem with the compilation of the wrap cxx files of gmsh (not the ones of dgshell).
To compile them you have to open the following files (they are created during the compilation):
<folder where you compile>/NonLinearSolver/gmsh/utils/wrappers/gmshpy/gmshCommonPYTHON_wrap.cxx
<folder where you compile>/NonLinearSolver/gmsh/utils/wrappers/gmshpy/gmshGeoPYTHON_wrap.cxx
<folder where you compile>/NonLinearSolver/gmsh/utils/wrappers/gmshpy/gmshMeshPYTHON_wrap.cxx
<folder where you compile>/NonLinearSolver/gmsh/utils/wrappers/gmshpy/gmshNumericPYTHON_wrap.cxx
<folder where you compile>/NonLinearSolver/gmsh/utils/wrappers/gmshpy/gmshPostPYTHON_wrap.cxx
<folder where you compile>/NonLinearSolver/gmsh/utils/wrappers/gmshpy/gmshSolverPYTHON_wrap.cxx
and add the following line at the beginning of each file
#include<stddef.h>
It seems that this is a problem with the version of gcc (according to the internet)
In <folder where you compile>/src/dgshell_wrap.cxx you may have to replace #include<Python.h> by #include</usr/include/python2.6/Python.h>
On Centos, there is a problem with fltk
add manually “-lfltk_gl” to
cm3Libraries/[dgshell/cm3apps/DG3D]/[release/debug]/NonLinearSolver/gmsh/CMakeFiles/gmsh.dir/link.txt
and
cm3Libraries/[dgshell/cm3apps/DG3D]/[release/debug]/NonLinearSolver/gmsh/CMakeFiles/shared.dir/link.txt
after “ccmake ..” prior to “make”
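A possible one-liner for this fix (a sketch assuming GNU sed and a release build of dG3D; link.txt normally holds a single link command, to which the flag is appended):
sed -i 's/$/ -lfltk_gl/' cm3Libraries/dG3D/release/NonLinearSolver/gmsh/CMakeFiles/gmsh.dir/link.txt cm3Libraries/dG3D/release/NonLinearSolver/gmsh/CMakeFiles/shared.dir/link.txt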
11) launch the tests: the test suite uses ctest, so once the compilation is finished you can launch the tests with
ctest -j <number of processor>
the other useful options for ctest are
--output-on-failure  display the test output in case a test fails
-I <first test number (included)>,<last test number (included)>  to launch one test or a contiguous subset of tests, in case only a few are failing
-V  verbose output of all the tests
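For example (the processor count and test numbers are arbitrary examples):
ctest -j 4 --output-on-failure
ctest -V -I 12,15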
12) If one or more tests fail try to fix the issues or ask for help
13) edit your .bashrc file to add the folders which contain the gmsh executable
export PATH=$PATH:<path of gmsh executable>
14) In order to limit the things to put in the .bashrc, a shell script is configured by CMake for you. This shell script exports the PYTHONPATH depending on the configuration of your machine and on whether you use python2 or python3. Therefore you should not launch your python file directly with python but use this script instead. The script is <compilation folder>/bin/cm3py (you may define an alias in your .bashrc) and you launch your python file with
<compilation folder>/bin/cm3py <python file>.py
This script has several options allowing you to start a debugger (to debug using CodeBlocks see the Debugging section F) below). To debug using ddd use
<compilation folder>/bin/cm3py --ddd
then ddd will start and you can do
r <python file>.py
to begin to debug. For parallel debugging use
<compilation folder>/bin/cm3py --mpiddd <number of processor>
To launch another debugger (or even ddd) use
<compilation folder>/bin/cm3py --debug <debugger executable>
15) Enjoy it !
F) Debugging (explained with dgshell, but it works the same for the other projects; just change dgshell into dG3D or msch)
° Using the codeblocks interface:
- select the target dgshell (in place of all)
- select project/properties then under build targets select dgshell
- change the Execution working directory to the directory of the application you want to debug with an ABSOLUTE path
- click OK
- select project/set programs' arguments
- give the name of your python script as argument and give python as host application
- click OK
- select Settings/Debugger
- under GDB/CDB debugger -> Default, change the Executable path from /usr/bin/gdb to
<compilation folder>/bin/cm3py
° Using ddd:
- go to the folder of your application
- type either
<compilation folder>/bin/cm3py --ddd
or
<compilation folder>/bin/cm3py --mpiddd <x>
the second command allows parallel debugging, where <x> should be replaced by the number of processors you want
G) Add a test to the suite
Tests can easily be added. Several examples are available in dgshell/benchmark.
° create a CMakeLists.txt for your benchmarks (and add it to your project using add_subdirectory). Standard tests can use the macro add_cm3pythontest(<script>.py "cmake list with the files/folders to delete once the test is finished"); a sketch is given below.
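A minimal sketch of such a CMakeLists.txt (the folder name myBench, the script myTest.py and the clean-up entries are hypothetical examples):
# in the parent CMakeLists.txt
add_subdirectory(myBench)
# in myBench/CMakeLists.txt
add_cm3pythontest(myTest.py "myTest.msh;resultDir")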
° the thing to do is to check some values in your python file; the TestCheck object can be useful for that (a minimal sketch follows below):
check = TestCheck() # create an instance of the class
check.equal(reference, current, tol) # compares reference to current, considering them equal as long as the relative error is less than tol. You can use other ways to generate an error in order to have the test fail. Note that (at least on my machine) sys.exit(1) cannot be used as it returns 0; you must use os._exit(1) instead.
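A minimal sketch of the end of such a test script (the reference value 2.0 and the computed quantity force are hypothetical; TestCheck is assumed to be provided by the wrappers already imported by the benchmark scripts):
# hypothetical end of a benchmark python script
check = TestCheck()
check.equal(2.0, force, 1.e-6)  # makes the test fail if the relative error exceeds 1e-6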
° redo make to reload the CMakeLists
° launch ctest to verify that your test appears in the suite. The name of the test is auto-generated as <Project_Name>/<test folder name>/<python script name>
° to print Vec and Mat PETSc objects in gdb (ddd or the CodeBlocks debugger) one can use:
call VecView(v,0)
call MatView(M,0)
H) OpenGL problems (e.g. with vnc or X11 forwarding)
With Mesa3D installed, you can use the LD_PRELOAD environment variable to preload the Mesa3D libGL.so (which will be located somewhere like /usr/lib64/opengl/xorg-x11/lib/libGL.so; use your Linux distribution's package manager tools to find where it is located, or do find /usr -iname 'libGL.so*' and choose the one whose directory does not contain nvidia) instead of the system default libGL.so.
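For example (the Mesa path is the example location mentioned above; adapt it to what find returns on your system, and launch gmsh or your usual executable):
find /usr -iname 'libGL.so*'
LD_PRELOAD=/usr/lib64/opengl/xorg-x11/lib/libGL.so gmsh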