
How to compile ABINIT

This tutorial explains how to compile ABINIT including the external dependencies without relying on pre-compiled libraries, package managers and root privileges. You will learn how to use the standard configure and make Linux tools to build and install your own software stack including the MPI library and the associated mpif90 and mpicc wrappers required to compile MPI applications.

It is assumed that you already have a standard Unix-like installation that provides the basic tools needed to build software from source (Fortran/C compilers and make). The changes required for MacOsX are briefly mentioned when needed. Windows users should install Cygwin, which provides a POSIX-compatible environment, or alternatively use the Windows Subsystem for Linux. Note that the procedure described in this tutorial has been tested on Linux/MacOsX only, hence feedback and suggestions from Windows users are welcome.


In the last part of the tutorial, we discuss more advanced topics such as using modules in supercomputing centers, compiling and linking with the intel compilers and the MKL library as well as OpenMP threads. You may want to jump directly to this section if you are already familiar with software compilation.

In the following, we will make extensive use of the bash shell, hence familiarity with the terminal is assumed. For a quick introduction to the command line, please consult this Ubuntu tutorial. If this is the first time you use the configure && make approach to build software, we strongly recommend reading this guide before proceeding with the next steps. If, on the other hand, you are not interested in compiling all the components from source, you may want to consider the following alternatives:

  • Compilation with external libraries provided by apt-based Linux distributions (e.g. Ubuntu). More info available here.

  • Compilation with external libraries on Fedora/RHEL/CentOS Linux distributions. More info available here.

  • Homebrew bottles or macports for MacOsX. More info available here.

  • Automatic compilation and generation of modules on clusters with EasyBuild. More info available here.

  • Compiling Abinit using the internal fallbacks and the script automatically generated by configure if the mandatory dependencies are not found.

  • Using precompiled binaries provided by conda-forge (for Linux and MacOsX users).

Before starting, it is also worth reading this document prepared by Marc Torrent that introduces important concepts and provides a detailed description of the configuration options supported by the ABINIT build system. Note that these slides were written for Abinit v8, hence some examples should be adapted to be compatible with the build system of version 9; still, the document represents a valuable source of information.


The aim of this tutorial is to teach you how to compile code from source but we cannot guarantee that these recipes will work out of the box on every possible architecture. We will do our best to explain how to setup your environment and how to avoid the typical pitfalls but we cannot cover all the possible cases.

Fortunately, the internet provides lots of resources. Search engines and stackoverflow are your best friends and in some cases one can find the solution by just copying the error message in the search bar. For more complicated issues, you can ask for help on the Abinit forum or contact the sysadmin of your cluster but remember to provide enough information about your system and the problem you are encountering.

Getting started

Since ABINIT is written in Fortran, we need a recent Fortran compiler that supports the F2003 specifications as well as a C compiler. At the time of writing (September 30, 2020), the C++ compiler is optional and required only for advanced features that are not treated in this tutorial.

In what follows, we will be focusing on the GNU toolchain i.e. gcc for C and gfortran for Fortran. These “sequential” compilers are adequate if you don’t need to compile parallel MPI applications. The compilation of MPI code, indeed, requires the installation of additional libraries and specialized wrappers (mpicc, mpif90 or mpiifort ) replacing the “sequential” compilers. This very important scenario is covered in more detail in the next sections. For the time being, we mainly focus on the compilation of sequential applications/libraries.

First of all, let’s make sure the gfortran compiler is installed on your machine by issuing in the terminal the following command:

which gfortran


The which command returns the absolute path of the executable. This Unix tool is extremely useful to pinpoint possible problems and we will use it a lot in the rest of this tutorial.
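As a quick illustration (the commands below are a sketch of ours, not part of the original tutorial), which also sets its exit status depending on whether the program was found, so it can be used in shell conditionals:

```shell
# `which` prints the absolute path of the first match found in $PATH.
which ls

# It exits with status 0 if the program is found and non-zero otherwise,
# so it can be used in conditionals to test for the presence of a tool.
if which gfortran > /dev/null 2>&1; then
    echo "gfortran found"
else
    echo "gfortran not found"
fi
```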

In our case, we are lucky that the Fortran compiler is already installed in /usr/bin and we can immediately use it to build our software stack. If gfortran is not installed, you may want to use the package manager provided by your Linux distribution to install it. On Ubuntu, for instance, use:

sudo apt-get install gfortran

To get the version of the compiler, use the --version option:

gfortran --version
GNU Fortran (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)
Copyright (C) 2015 Free Software Foundation, Inc.

Starting with version 9, ABINIT requires gfortran >= v5.4. Consult the release notes to check whether your gfortran version is supported by the latest ABINIT releases.

Now let’s check whether make is already installed using:

which make

Hopefully, the C compiler gcc is already installed on your machine.

which gcc

At this point, we have all the basic building blocks needed to compile ABINIT from source and we can proceed with the next steps.


Life gets hard if you are a MacOsX user as Apple does not officially support Fortran (😞) so you need to install gfortran and gcc either via homebrew or macports. Alternatively, one can install gfortran using one of the standalone DMG installers provided by the gfortran-for-macOS project. Note also that MacOsX users will need to install make via Xcode. More info can be found on this page.

How to compile BLAS and LAPACK

BLAS and LAPACK represent the workhorse of many scientific codes and an optimized implementation is crucial for achieving good performance. In principle this step can be skipped as any decent Linux distribution already provides pre-compiled versions but, as already mentioned in the introduction, we are geeks and we prefer to compile everything from source. Moreover, the compilation of BLAS/LAPACK represents an excellent exercise that gives us the opportunity to discuss some basic concepts that will prove very useful in the other parts of this tutorial.

First of all, let’s create a new directory inside your $HOME (let’s call it local) using the command:

cd $HOME && mkdir local


$HOME is a standard shell variable that stores the absolute path to your home directory. Use:

echo My home directory is $HOME

to print the value of the variable.

The && syntax is used to chain commands together, such that the next command is executed if and only if the preceding command succeeds (or, more precisely, exits with a return code of 0). We will use this trick a lot in the other examples to reduce the number of lines we have to type in the terminal so that one can easily cut and paste the examples into the terminal.

Now create the src subdirectory inside $HOME/local with:

cd $HOME/local && mkdir src && cd src

The src directory will be used to store the source packages and to compile the code, whereas executables and libraries will be installed in $HOME/local/bin and $HOME/local/lib, respectively. We use $HOME/local because we are working as normal users and we cannot install software in /usr/local where root privileges are required and a sudo make install would be needed. Moreover, working inside $HOME/local allows us to keep our software stack well separated from the libraries installed by our Linux distribution so that we can easily test new libraries and/or different versions without affecting the software stack installed by our distribution.

Now download the tarball from the openblas website with:

wget https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz

If wget is not available, use curl with the -o option to specify the name of the output file as in:

curl -L -o v0.3.7.tar.gz https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz


To get the URL associated with an HTML link inside the browser, hover the mouse pointer over the link, press the right mouse button and then select Copy Link Address to copy the link to the system clipboard. Then paste the text in the terminal by selecting the Paste action in the menu activated by clicking on the right button. Alternatively, one can press the central button (mouse wheel) or use CMD + V on MacOsX. This trick is quite handy to fetch tarballs directly from the terminal.

Uncompress the tarball with:

tar -xvf v0.3.7.tar.gz

then cd to the directory with:

cd OpenBLAS-0.3.7

and execute:

make -j2 USE_THREAD=0 USE_LOCKING=1

to build the single thread version.


By default, openblas activates threads (see FAQ page) but in our case we prefer to use the sequential version as Abinit is mainly optimized for MPI. The -j2 option tells make to use 2 processes to build the code in order to speed up the compilation. Adjust this value according to the number of physical cores available on your machine.

At the end of the compilation, you should get the following output (note Single threaded):


  OS               ... Linux
  Architecture     ... x86_64
  BINARY           ... 64bit
  C compiler       ... GCC  (command line : cc)
  Fortran compiler ... GFORTRAN  (command line : gfortran)
  Library Name     ... libopenblas_haswell-r0.3.7.a (Single threaded)

To install the library, you can run "make PREFIX=/path/to/your/installation install".

You may have noticed that, in this particular case, make is not just building the library but is also running unit tests to validate the build. This means that if make completes successfully, we can be confident that the build is OK and we can proceed with the installation. Other packages use a more standard approach and provide a make check option that should be executed after make in order to run the test suite before installing the package.

To install openblas in $HOME/local, issue:

make PREFIX=$HOME/local/ install

At this point, we should have the following include files installed in $HOME/local/include:

ls $HOME/local/include/
cblas.h  f77blas.h  lapacke.h  lapacke_config.h  lapacke_mangling.h  lapacke_utils.h  openblas_config.h

and the following libraries installed in $HOME/local/lib:

ls $HOME/local/lib/libopenblas*

/home/gmatteo/local/lib/libopenblas.a     /home/gmatteo/local/lib/libopenblas_haswell-r0.3.7.a
/home/gmatteo/local/lib/libopenblas.so    /home/gmatteo/local/lib/libopenblas_haswell-r0.3.7.so

Files ending with .so are shared libraries (.so stands for shared object) whereas .a files are static libraries. When compiling source code that relies on external libraries, the name of the library (without the lib prefix and the file extension) as well as the directory where the library is located must be passed to the linker.

The name of the library is usually specified with the -l option while the directory is given by -L. According to these simple rules, in order to compile source code that uses BLAS/LAPACK routines, one should use the following option:

-L$HOME/local/lib -lopenblas

We will use a similar syntax to help the ABINIT configure script locate the external linear algebra library.


You may have noticed that we haven’t specified the file extension in the library name. If both static and shared libraries are found, the linker gives preference to the shared library unless the -static option is used. Dynamic linking is the default behaviour on several Linux distributions, so we assume dynamic linking in what follows.

If you are compiling C or Fortran code that requires include files with the declaration of prototypes and the definition of named constants, you will need to specify the location of the include files via the -I option. In this case, the previous options should be augmented by:

-L$HOME/local/lib -lopenblas -I$HOME/local/include

This approach is quite common for C code where .h files must be included to compile properly. It is less common for modern Fortran code in which include files are usually replaced by .mod files i.e. Fortran modules produced by the compiler whose location is usually specified via the -J option. Still, the -I option for include files is valuable also when compiling Fortran applications as libraries such as FFTW and MKL rely on (Fortran) include files whose location should be passed to the compiler via -I instead of -J, see also the official gfortran documentation.

Do not worry if this rather technical point is not clear to you. Any external library has its own requirements and peculiarities and the ABINIT build system provides several options to automate the detection of external dependencies and the final linkage. The most important thing is that you are now aware that the compilation of ABINIT requires the correct specification of -L, -l for libraries, -I for include files, and -J for Fortran modules. We will elaborate more on this topic when we discuss the configuration options supported by the ABINIT build system.

Since we have installed the package in a non-standard directory ($HOME/local), we need to update two important shell variables: $PATH and $LD_LIBRARY_PATH. If this is the first time you hear about $PATH and $LD_LIBRARY_PATH, please take some time to learn about the meaning of these environment variables. More information about $PATH is available here. See this page for $LD_LIBRARY_PATH.

Add these two lines at the end of your $HOME/.bash_profile file:

export PATH=$HOME/local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH


then execute:

source $HOME/.bash_profile

to activate these changes without having to start a new terminal session. Now use:

echo $PATH

to print the value of these variables. On my Linux box, I get:

echo $PATH


Note how /home/gmatteo/local/bin has been prepended to the previous value of $PATH. From now on, we can invoke any executable located in $HOME/local/bin by just typing its base name in the shell, without having to enter the full path.



Note that setting:

export PATH=$HOME/local/bin

i.e. replacing the previous value instead of prepending to it, is not a very good idea as the shell will stop working. Can you explain why?
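The left-to-right search performed by the shell can be demonstrated with a small experiment of ours (the directory and script names are arbitrary):

```shell
# Create two directories, each providing an executable with the same name.
mkdir -p /tmp/dir1 /tmp/dir2
printf '#!/bin/sh\necho first\n'  > /tmp/dir1/myprog
printf '#!/bin/sh\necho second\n' > /tmp/dir2/myprog
chmod +x /tmp/dir1/myprog /tmp/dir2/myprog

# The shell scans the $PATH directories from left to right and stops at
# the first match, so the directory listed first wins.
(export PATH=/tmp/dir1:/tmp/dir2:$PATH; myprog)   # prints "first"
(export PATH=/tmp/dir2:/tmp/dir1:$PATH; myprog)   # prints "second"
```

This also hints at the answer to the exercise above: if $PATH contains only $HOME/local/bin, standard tools such as ls and grep located in /usr/bin are no longer found by the shell.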


MacOsX users should replace LD_LIBRARY_PATH with DYLD_LIBRARY_PATH.

Remember also that one can use env to print all the environment variables defined in your session and pipe the results to other Unix tools. Try e.g.:

env | grep LD_

to print only the variables whose name starts with LD_.

We conclude this section with another tip. From time to time, some compilers complain or do not display important messages because language support is improperly configured on your computer. Should this happen, we recommend exporting the two variables:

export LANG=C
export LC_ALL=C

This will reset the language support to its most basic defaults and will make sure that you get all messages from the compilers in English.

How to compile libxc

At this point, it should not be so difficult to compile and install libxc, a library that provides many useful XC functionals (PBE, meta-GGA, hybrid functionals, etc). Libxc is written in C and can be built using the standard configure && make approach. No external dependency is needed, except for basic C libraries that are available on any decent Linux distribution.

Let’s start by fetching the tarball from the internet:

# Get the tarball.
# Note the -O option used in wget to specify the name of the output file

cd $HOME/local/src
wget -O libxc.tar.gz
tar -zxvf libxc.tar.gz

Now configure the package with the standard --prefix option to specify the location where all the libraries, executables, include files, Fortran modules, man pages, etc. will be installed when we execute make install (the default is /usr/local):

cd libxc-4.3.4 && ./configure --prefix=$HOME/local

Finally, build the library, run the tests and install it with:

make -j2
make check && make install

At this point, we should have the following include files in $HOME/local/include

[gmatteo@bob libxc-4.3.4]$ ls ~/local/include/*xc*
/home/gmatteo/local/include/libxc_funcs_m.mod  /home/gmatteo/local/include/xc_f90_types_m.mod
/home/gmatteo/local/include/xc.h               /home/gmatteo/local/include/xc_funcs.h
/home/gmatteo/local/include/xc_f03_lib_m.mod   /home/gmatteo/local/include/xc_funcs_removed.h
/home/gmatteo/local/include/xc_f90_lib_m.mod   /home/gmatteo/local/include/xc_version.h

where .mod are Fortran modules generated by the compiler that are needed when compiling Fortran source using the libxc Fortran API.


The .mod files are compiler- and version-dependent. In other words, one cannot use these .mod files to compile code with a different Fortran compiler. Moreover, you should not expect to be able to use modules compiled with a different version of the same compiler, especially if the major version has changed. This is one of the reasons why the version of the Fortran compiler employed to build our software stack is very important.

Finally, we have the following static libraries installed in ~/local/lib

ls ~/local/lib/libxc*
/home/gmatteo/local/lib/libxc.a   /home/gmatteo/local/lib/libxcf03.a   /home/gmatteo/local/lib/libxcf90.a
/home/gmatteo/local/lib/libxc.so  /home/gmatteo/local/lib/libxcf03.so  /home/gmatteo/local/lib/libxcf90.so


  • libxc is the C library
  • libxcf90 is the library with the F90 API
  • libxcf03 is the library with the F2003 API

Both libxcf90 and libxcf03 depend on the C library where most of the work is done. At present, ABINIT requires the F90 API only so we should use

-L$HOME/local/lib -lxcf90 -lxc

for the libraries and

-I$HOME/local/include

for the include files.

Note how libxcf90 comes before the C library libxc. This is done on purpose as libxcf90 depends on libxc (the Fortran API calls the C implementation). Inverting the order of the libraries will likely trigger errors (undefined references) in the last step of the compilation when the linker tries to build the final application.

Things become even more complicated when we have to build applications using many different interdependent libraries as the order of the libraries passed to the linker is of crucial importance. Fortunately, the ABINIT build system is aware of this problem and all the dependencies (BLAS, LAPACK, FFT, LIBXC, MPI, etc) will be automatically put in the right order, so you don’t have to worry about this point although it is worth knowing about it.

Compiling and installing FFTW

FFTW is a C library for computing the Fast Fourier transform in one or more dimensions. ABINIT already provides an internal Fortran implementation of the FFT algorithm, hence FFTW is considered an optional dependency. Nevertheless, we do not recommend the internal implementation if you really care about performance, the reason being that FFTW (or, even better, the DFTI library provided by intel MKL) is usually much faster than the internal version.


FFTW is very easy to install on Linux machines once you have gcc and gfortran. The fftalg variable defines the implementation to be used; 312 corresponds to the FFTW implementation. The default value of fftalg is automatically set by the configure script via preprocessing options. In other words, if you activate support for FFTW (DFTI) at configure time, ABINIT will use fftalg 312 (512) as default.

The FFTW source code can be downloaded from the FFTW website with:

cd $HOME/local/src
wget http://www.fftw.org/fftw-3.3.8.tar.gz
tar -zxvf fftw-3.3.8.tar.gz && cd fftw-3.3.8

The compilation procedure is very similar to the one already used for the libxc package. Note, however, that ABINIT needs both the single-precision and the double-precision version. This means that we need to configure, build and install the package twice.

To build the single precision version, use:

./configure --prefix=$HOME/local --enable-single
make -j2
make check && make install

During the configuration step, make sure that configure finds the Fortran compiler because ABINIT needs the Fortran interface.

checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes

Let’s have a look at the libraries we’ve just installed:

ls $HOME/local/lib/libfftw3*
/home/gmatteo/local/lib/libfftw3f.a  /home/gmatteo/local/lib/libfftw3f.la

The f at the end stands for float (C jargon for single precision). Note that only static libraries have been built. To build shared libraries, one should pass --enable-shared to configure.

Now we configure for the double precision version (this is the default behaviour so no extra option is needed)

./configure --prefix=$HOME/local
make -j2
make check && make install

After this step, you should have two libraries with the single and the double precision API:

ls $HOME/local/lib/libfftw3*
/home/gmatteo/local/lib/libfftw3.a   /home/gmatteo/local/lib/libfftw3f.a
/home/gmatteo/local/lib/libfftw3.la  /home/gmatteo/local/lib/libfftw3f.la

To compile ABINIT with FFTW3 support, one should use:

-L$HOME/local/lib -lfftw3f -lfftw3 -I$HOME/local/include

Note that, unlike in libxc, here we don’t have to specify different libraries for Fortran and C as FFTW3 bundles both the C and the Fortran API in the same library. The Fortran interface is included by default provided the FFTW3 configure script can find a Fortran compiler. In our case, we know that our FFTW3 library supports Fortran as gfortran was found by configure but this may not be true if you are using a precompiled library installed via your package manager.

To make sure we have the Fortran API, use the nm tool to get the list of symbols in the library and then use grep to search for the Fortran API. For instance we can check whether our library contains the Fortran routine for multiple single-precision FFTs (sfftw_plan_many_dft) and the version for multiple double-precision FFTs (dfftw_plan_many_dft)

[gmatteo@bob fftw-3.3.8]$ nm $HOME/local/lib/libfftw3f.a | grep sfftw_plan_many_dft
0000000000000400 T sfftw_plan_many_dft_
0000000000003570 T sfftw_plan_many_dft__
0000000000001a90 T sfftw_plan_many_dft_c2r_
0000000000004c00 T sfftw_plan_many_dft_c2r__
0000000000000f60 T sfftw_plan_many_dft_r2c_
00000000000040d0 T sfftw_plan_many_dft_r2c__

[gmatteo@bob fftw-3.3.8]$ nm $HOME/local/lib/libfftw3.a | grep dfftw_plan_many_dft
0000000000000400 T dfftw_plan_many_dft_
0000000000003570 T dfftw_plan_many_dft__
0000000000001a90 T dfftw_plan_many_dft_c2r_
0000000000004c00 T dfftw_plan_many_dft_c2r__
0000000000000f60 T dfftw_plan_many_dft_r2c_
00000000000040d0 T dfftw_plan_many_dft_r2c__

If you are using a FFTW3 library without Fortran support, the ABINIT configure script will complain that the library cannot be called from Fortran and you will need to dig into config.log to understand what’s going on.


At present, there is no need to compile FFTW with MPI support because ABINIT implements its own version of the MPI-FFT algorithm based on the sequential FFTW version. The MPI algorithm implemented in ABINIT is optimized for plane-wave codes as it supports zero-padding and composite transforms for the application of the local part of the KS potential.

Also, do not mix MKL and FFTW3 for the FFT, as the MKL library exports the same symbols as FFTW. This means that the linker will receive multiple definitions of the same procedure and the behaviour is undefined! Use either MKL or FFTW3 (with e.g. openblas), not both.

Installing MPI

In this section, we discuss how to compile and install the MPI library. This step is required if you want to run ABINIT with multiple processes and/or you need to compile MPI-based libraries such as PBLAS/Scalapack or the HDF5 library with support for parallel IO.

It is worth stressing that the MPI installation provides two scripts (mpif90 and mpicc) that act as a sort of wrapper around the sequential Fortran and C compilers, respectively. These scripts must be used instead of the “sequential” gfortran and gcc to compile parallel software using MPI. The MPI library also provides launcher scripts installed in the bin directory (mpirun or mpiexec) that must be used to execute an MPI application EXEC with NUM_PROCS MPI processes with the syntax:

mpirun -n NUM_PROCS EXEC
Keep in mind that there are several MPI implementations available around (openmpi, mpich, intel mpi, etc) and you must choose one implementation and stick to it when building your software stack. In other words, all the libraries and executables requiring MPI must be compiled, linked and executed with the same MPI library.

Don’t try to link a library compiled with e.g. mpich if you are building the code with the mpif90 wrapper provided by e.g. openmpi. By the same token, don’t try to run executables compiled with e.g. intel mpi with the mpirun launcher provided by openmpi unless you are looking for trouble! Again, the which command is quite handy to pinpoint possible problems, especially if there are multiple installations of MPI in your $PATH (not a very good idea!).

In this tutorial, we employ the mpich implementation that can be downloaded from this webpage. In the terminal, issue:

cd $HOME/local/src
wget http://www.mpich.org/static/downloads/3.3.2/mpich-3.3.2.tar.gz
tar -zxvf mpich-3.3.2.tar.gz
cd mpich-3.3.2/

to download and uncompress the tarball. Then configure/compile/test/install the library with:

./configure --prefix=$HOME/local
make -j2
make check && make install

Once the installation is completed, you should see this message (it may not be the last one printed, so you might have to scroll back to find it):

Libraries have been installed in:
   /home/gmatteo/local/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the '-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the 'LD_RUN_PATH' environment variable
     during linking
   - use the '-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to '/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.

The reason why we should add $HOME/local/lib to $LD_LIBRARY_PATH now should be clear to you.

Let’s have a look at the MPI executables we have just installed in $HOME/local/bin:

ls $HOME/local/bin/mpi*
/home/gmatteo/local/bin/mpic++        /home/gmatteo/local/bin/mpiexec        /home/gmatteo/local/bin/mpifort
/home/gmatteo/local/bin/mpicc         /home/gmatteo/local/bin/mpiexec.hydra  /home/gmatteo/local/bin/mpirun
/home/gmatteo/local/bin/mpichversion  /home/gmatteo/local/bin/mpif77         /home/gmatteo/local/bin/mpivars
/home/gmatteo/local/bin/mpicxx        /home/gmatteo/local/bin/mpif90

Since we added $HOME/local/bin to $PATH, we should see that mpif90 is actually pointing to the version we have just installed:

which mpif90

As already mentioned, mpif90 is a wrapper around the sequential Fortran compiler. To show the Fortran compiler invoked by mpif90, use:

mpif90 -v

mpifort for MPICH version 3.3.2
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl= --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-isl --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC)

The C include files (.h) and the Fortran modules (.mod) have been installed in $HOME/local/include

ls $HOME/local/include/mpi*

/home/gmatteo/local/include/mpi.h              /home/gmatteo/local/include/mpicxx.h
/home/gmatteo/local/include/mpi.mod            /home/gmatteo/local/include/mpif.h
/home/gmatteo/local/include/mpi_base.mod       /home/gmatteo/local/include/mpio.h
/home/gmatteo/local/include/mpi_constants.mod  /home/gmatteo/local/include/mpiof.h

In principle, the location of the directory must be passed to the Fortran compiler either with the -J (mpi.mod module for MPI2+) or the -I option (mpif.h include file for MPI1). Fortunately, the ABINIT build system can automatically detect your MPI installation and set all the compilation options automatically if you provide the installation root ($HOME/local).

Installing HDF5 and netcdf4

Abinit developers are trying to move away from Fortran binary files as this format is not portable and difficult to read from high-level languages such as python. For this reason, in Abinit v9, HDF5 and netcdf4 have become hard requirements. This means that the configure script will abort if these libraries are not found. In this section, we explain how to build HDF5 and netcdf4 from source, including support for parallel IO.

Netcdf4 is built on top of HDF5 and consists of two different layers:

  • The low-level C library

  • The Fortran bindings i.e. Fortran routines calling the low-level C implementation. This is the high-level API used by ABINIT to perform all the IO operations on netcdf files.

To build the libraries required by ABINIT, we will compile the three different layers in a bottom-up fashion starting from the HDF5 package (HDF5 → netcdf-c → netcdf-fortran). Since we want to activate support for parallel IO, we need to compile the libraries using the wrappers provided by our MPI installation instead of using gcc or gfortran directly.

Let’s start by downloading the HDF5 tarball from this download page. Uncompress the archive with tar as usual, then configure the package with:

./configure --prefix=$HOME/local/ \
            CC=$HOME/local/bin/mpicc --enable-parallel --enable-shared

where we’ve used the CC variable to specify the C compiler. This step is crucial in order to activate support for parallel IO.


A table with the most commonly used predefined variables is available here.

At the end of the configuration step, you should get the following output:

                     AM C Flags:
               Shared C Library: yes
               Static C Library: yes

                        Fortran: no
                            C++: no
                           Java: no

                   Parallel HDF5: yes
Parallel Filtered Dataset Writes: yes
              Large Parallel I/O: yes
              High-level library: yes
                    Threadsafety: no
             Default API mapping: v110
  With deprecated public symbols: yes
          I/O filters (external): deflate(zlib)
                      Direct VFD: no
                         dmalloc: no
  Packages w/ extra debug output: none
                     API tracing: no
            Using memory checker: no
 Memory allocation sanity checks: no
          Function stack tracing: no
       Strict file format checks: no
    Optimization instrumentation: no

The line with:

Parallel HDF5: yes

tells us that our HDF5 build supports parallel IO. The Fortran API is not activated but this is not a problem as ABINIT will be interfaced with HDF5 through the Fortran bindings provided by netcdf-fortran. In other words, ABINIT requires netcdf-fortran and not the HDF5 Fortran bindings.

Again, issue make -j NUM followed by make check and finally make install. Note that make check may take some time so you may want to install immediately and run the tests in another terminal so that you can continue with the tutorial.

Now let’s move to netcdf. Download the C version and the Fortran bindings from the netcdf website and unpack the tarball files as usual.

tar -xvf netcdf-c-4.7.3.tar.gz

tar -xvf netcdf-fortran-4.5.2.tar.gz

To compile the C library, use:

cd netcdf-c-4.7.3
./configure --prefix=$HOME/local/ \
            CC=$HOME/local/bin/mpicc \
            LDFLAGS=-L$HOME/local/lib CPPFLAGS=-I$HOME/local/include

where mpicc is used as C compiler (CC environment variable) and we have to specify LDFLAGS and CPPFLAGS as we want to link against our installation of hdf5. At the end of the configuration step, we should obtain

# NetCDF C Configuration Summary

# General
NetCDF Version:     4.7.3
Dispatch Version:       1
Configured On:      Wed Apr  8 00:53:19 CEST 2020
Host System:        x86_64-pc-linux-gnu
Build Directory:    /home/gmatteo/local/src/netcdf-c-4.7.3
Install Prefix:         /home/gmatteo/local

# Compiling Options
C Compiler:     /home/gmatteo/local/bin/mpicc
CPPFLAGS:       -I/home/gmatteo/local/include
LDFLAGS:        -L/home/gmatteo/local/lib
Shared Library:     yes
Static Library:     yes
Extra libraries:    -lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl

# Features
NetCDF-2 API:       yes
HDF4 Support:       no
HDF5 Support:       yes
NetCDF-4 API:       yes
NC-4 Parallel Support:  yes
PnetCDF Support:    no
DAP2 Support:       yes
DAP4 Support:       yes
Byte-Range Support: no
Diskless Support:   yes
MMap Support:       no
JNA Support:        no
CDF5 Support:       yes
ERANGE Fill Support:    no
Relaxed Boundary Check: yes

The section:

HDF5 Support:       yes
NetCDF-4 API:       yes
NC-4 Parallel Support:  yes

tells us that configure detected our installation of hdf5 and that support for parallel-IO is activated.

Now use the standard sequence of commands to compile and install the package:

make -j2
make check && make install

Once the installation is completed, use the nc-config executable to inspect the features provided by the library we’ve just installed.

which nc-config

# installation directory
nc-config --prefix

To get a summary of the options used to build the C layer and the available features, use

nc-config --all

This netCDF 4.7.3 has been built with the following features:

  --cc            -> /home/gmatteo/local/bin/mpicc
  --cflags        -> -I/home/gmatteo/local/include
  --libs          -> -L/home/gmatteo/local/lib -lnetcdf
  --static        -> -lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl

nc-config is quite useful as it prints the compiler options required to build C applications requiring netcdf-c (--cflags and --libs). Unfortunately, this tool is not enough for ABINIT as we need the Fortran bindings as well.
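For instance, a small standalone C program (here the hypothetical my_prog.c) could be compiled against netcdf-c by expanding the nc-config output on the command line:

```shell
# my_prog.c is a hypothetical example source file
mpicc my_prog.c $(nc-config --cflags) $(nc-config --libs) -o my_prog
```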

To compile the Fortran bindings, execute:

cd netcdf-fortran-4.5.2
./configure --prefix=$HOME/local/ \
            FC=$HOME/local/bin/mpif90 \
            LDFLAGS=-L$HOME/local/lib CPPFLAGS=-I$HOME/local/include

where FC points to our mpif90 wrapper (CC is not needed here). For further info on how to build netcdf-fortran, see the official documentation.

Now issue:

make -j2
make check && make install

To inspect the features activated in our Fortran library, use nf-config instead of nc-config (note the nf- prefix):

which nf-config

# installation directory
nf-config --prefix

To get a summary of the options used to build the Fortran bindings and the list of available features, use

nf-config --all

This netCDF-Fortran 4.5.2 has been built with the following features:

  --cc        -> gcc
  --cflags    ->  -I/home/gmatteo/local/include -I/home/gmatteo/local/include

  --fc        -> /home/gmatteo/local/bin/mpif90
  --fflags    -> -I/home/gmatteo/local/include
  --flibs     -> -L/home/gmatteo/local/lib -lnetcdff -L/home/gmatteo/local/lib -lnetcdf -lnetcdf -ldl -lm
  --has-f90   ->
  --has-f03   -> yes

  --has-nc2   -> yes
  --has-nc4   -> yes

  --prefix    -> /home/gmatteo/local
  --includedir-> /home/gmatteo/local/include
  --version   -> netCDF-Fortran 4.5.2


nf-config is quite handy to pass options to the ABINIT configure script. Instead of typing the full list of libraries (--flibs) and the location of the include files (--fflags) we can delegate this boring task to nf-config using backtick syntax:

NETCDF_FORTRAN_LIBS=`nf-config --flibs`
NETCDF_FORTRAN_FCFLAGS=`nf-config --fflags`

Alternatively, one can simply pass the installation directory (here we use the $(...) syntax):

--with-netcdf-fortran=$(nf-config --prefix)

and then let configure detect NETCDF_FORTRAN_LIBS and NETCDF_FORTRAN_FCFLAGS for us.
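For example, assuming nc-config and nf-config are in $PATH, a sketch of the relevant part of the ABINIT configure invocation could read:

```shell
../configure --with-netcdf=$(nc-config --prefix) \
             --with-netcdf-fortran=$(nf-config --prefix)
```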

How to compile ABINIT

In this section, we finally discuss how to compile ABINIT using the MPI compilers and the libraries installed previously. First of all, download the ABINIT tarball from this page using e.g.


Here we are using version 9.0.2 but you may want to download the latest production version to take advantage of new features and benefit from bug fixes.

Once you have the tarball, uncompress it by typing:

tar -xvzf abinit-9.0.2.tar.gz

Then cd into the newly created abinit-9.0.2 directory. Before actually starting the compilation, type:

./configure --help

and take some time to read the documentation of the different options.

The documentation mentions the most important environment variables that can be used to specify compilers and compilation flags. We already encountered some of these variables in the previous examples:

Some influential environment variables:
  CC          C compiler command
  CFLAGS      C compiler flags
  LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a
              nonstandard directory <lib dir>
  LIBS        libraries to pass to the linker, e.g. -l<library>
  CPPFLAGS    (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
              you have headers in a nonstandard directory <include dir>
  CPP         C preprocessor
  CXX         C++ compiler command
  CXXFLAGS    C++ compiler flags
  FC          Fortran compiler command
  FCFLAGS     Fortran compiler flags

Besides the standard environment variables: CC, CFLAGS, FC, FCFLAGS etc. the build system also provides specialized options to activate support for external libraries. For libxc, for instance, we have:

  LIBXC_CPPFLAGS
              C preprocessing flags for LibXC.
  LIBXC_CFLAGS
              C flags for LibXC.
  LIBXC_FCFLAGS
              Fortran flags for LibXC.
  LIBXC_LDFLAGS
              Linker flags for LibXC.
  LIBXC_LIBS  Library flags for LibXC.

According to what we have seen during the compilation of libxc, one should pass to configure the following options:

LIBXC_LIBS="-L$HOME/local/lib -lxcf90 -lxc"

Alternatively, one can use the high-level interface provided by the --with-LIBNAME options to specify the installation directory, as in:

--with-libxc=$HOME/local
In this case, configure will try to automatically detect the other options. This is the easiest approach but if configure cannot detect the dependency properly, you may need to inspect config.log for error messages and/or set the options manually.

In the previous examples, we executed configure in the top-level directory of the package, but for ABINIT we prefer to do things in a much cleaner way using a build directory. The advantage of this approach is that object files and executables are kept separate from the source code, which allows us to build different executables from the same source tree. For example, one can have a build directory with a version compiled with gfortran and another build directory for the intel ifort compiler, or other builds done with the same compiler but different compilation options.

Let’s call the build directory build_gfortran:

mkdir build_gfortran && cd build_gfortran

Now we should define the options that will be passed to the configure script. Instead of using the command line as done in the previous examples, we will be using an external file (myconf.ac9) to collect all our options. The syntax to read options from file is:

../configure --with-config-file="myconf.ac9"

where double quotation marks may be needed for portability reasons. Note the use of ../configure as we are working inside the build directory build_gfortran while the configure script is located in the top level directory of the package.


The names of the options in myconf.ac9 are given in normalized form, that is, the initial -- is removed from the option name and all the other - characters in the string are replaced by an underscore _. Following these simple rules, the configure option --with-mpi becomes with_mpi in the ac9 file.
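The normalization rule can even be expressed as a tiny shell helper (purely illustrative, not part of the build system):

```shell
# Drop the leading "--" and turn every remaining "-" into "_"
normalize() { printf '%s\n' "$1" | sed -e 's/^--//' -e 's/-/_/g'; }

normalize --with-mpi        # prints: with_mpi
normalize --enable-mpi-io   # prints: enable_mpi_io
```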

Also note that in the configuration file it is possible to use shell variables and to reuse the output of external tools using backtick syntax, as in `nf-config --flibs`, or, if you prefer, $(nf-config --flibs). These tricks allow us to reduce the amount of typing and write configuration files that can be easily reused on other machines.

This is an example of configuration file in which we use the high-level interface (with_LIBNAME=dirpath) as much as possible, except for linalg and FFTW3. The explicit value of LIBNAME_LIBS and LIBNAME_FCFLAGS is also reported in the commented sections.

# -------------------------------------------------------------------------- #
# MPI support                                                                #
# -------------------------------------------------------------------------- #

#   * the build system expects to find subdirectories named bin/, lib/,
#     include/ inside the with_mpi directory
with_mpi=$HOME/local

# Flavor of linear algebra libraries to use (default is netlib)
with_linalg_flavor="openblas"

# Library flags for linear algebra (default is unset)
LINALG_LIBS="-L$HOME/local/lib -lopenblas"

# -------------------------------------------------------------------------- #
# Optimized FFT support                                                      #
# -------------------------------------------------------------------------- #

# Flavor of FFT framework to support (default is auto)
# The high-level interface does not work yet so we pass options explicitly
with_fft_flavor="fftw3"

# Explicit options for fftw3
FFTW3_LIBS="-L$HOME/local/lib -lfftw3f -lfftw3"

# -------------------------------------------------------------------------- #
# LibXC
# -------------------------------------------------------------------------- #
# Install prefix for LibXC (default is unset)
with_libxc=$HOME/local
# Explicit options for libxc
#LIBXC_LIBS="-L$HOME/local/lib -lxcf90 -lxc"

# -------------------------------------------------------------------------- #
# NetCDF
# -------------------------------------------------------------------------- #

# install prefix for NetCDF (default is unset)
with_netcdf=$(nc-config --prefix)
with_netcdf_fortran=$(nf-config --prefix)

# Explicit options for netcdf
#NETCDF_FORTRAN_LIBS=`nf-config --flibs`
#NETCDF_FORTRAN_FCFLAGS=`nf-config --fflags`

# install prefix for HDF5 (default is unset)
with_hdf5=$HOME/local
# Explicit options for hdf5
#HDF5_LIBS=`nf-config --flibs`
#HDF5_FCFLAGS=`nf-config --fflags`

# Enable OpenMP (default is no)
enable_openmp="no"

A documented template with all the supported options can be found here

Copy the content of the example into myconf.ac9, then run:

../configure --with-config-file="myconf.ac9"

If everything goes smoothly, you should obtain the following summary:

=== Final remarks                                                          ===

Core build parameters

  * C compiler       : gnu version 5.3
  * Fortran compiler : gnu version 5.3
  * architecture     : intel xeon (64 bits)
  * debugging        : basic
  * optimizations    : standard

  * OpenMP enabled   : no (collapse: ignored)
  * MPI    enabled   : yes (flavor: auto)
  * MPI    in-place  : no
  * MPI-IO enabled   : yes
  * GPU    enabled   : no (flavor: none)

  * LibXML2 enabled  : no
  * LibPSML enabled  : no
  * XMLF90  enabled  : no
  * HDF5 enabled     : yes (MPI support: yes)
  * NetCDF enabled   : yes (MPI support: yes)
  * NetCDF-F enabled : yes (MPI support: yes)

  * FFT flavor       : fftw3 (libs: user-defined)
  * LINALG flavor    : openblas (libs: user-defined)

  * Build workflow   : monolith

0 deprecated options have been used.

Configuration complete.
You may now type "make" to build Abinit.
(or "make -j<n>", where <n> is the number of available processors)


Please take your time to read carefully the final summary and make sure you are getting what you expect. A lot of typos or configuration errors can be easily spotted at this level.

You might then find it useful to have a look at the other examples available on this page. Additional configuration files for clusters can be found in the abiconfig package.

The configure script has generated several Makefiles required by make as well as the config.h include file with all the pre-processing options that will be used to build ABINIT. This file is included in every ABINIT source file and defines the features that will be activated or deactivated at compilation time depending on the libraries available on your machine. Let’s have a look at a selected portion of config.h:

/* Define to 1 if you have a working MPI installation. */
#define HAVE_MPI 1

/* Define to 1 if you have a MPI-1 implementation (obsolete, broken). */
/* #undef HAVE_MPI1 */

/* Define to 1 if you have a MPI-2 implementation. */
#define HAVE_MPI2 1

/* Define to 1 if you want MPI I/O support. */
#define HAVE_MPI_IO 1

/* Define to 1 if you have a parallel NetCDF library. */
/* #undef HAVE_NETCDF_MPI */

This file tells us that

  • we are building ABINIT with MPI support
  • we have a library implementing the MPI2 specifications
  • our MPI implementation supports parallel MPI-IO. Note that this does not mean that netcdf supports MPI-IO. In this example, indeed, HAVE_NETCDF_MPI is undefined and this means the library does not have parallel-IO capabilities.

Of course, end users are mainly concerned with the final summary reported by the configure script to understand whether a particular feature has been activated or not but more advanced users may find the content of config.h valuable to understand what’s going on.
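A quick way to inspect the options selected by configure without opening the file in an editor is a simple grep:

```shell
# List the MPI/NetCDF/HDF5 macros defined in config.h
grep -E "HAVE_(MPI|NETCDF|HDF5)" config.h
```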

Now we can finally compile the package with e.g. make -j2. If the compilation completes successfully (🙌), you should end up with a bunch of executables inside src/98_main. Note, however, that the fact that the compilation completed successfully does not necessarily imply that the executables will work as expected as there are many different things that can go wrong at runtime.

First of all, let’s try to execute:

abinit --version


If this is a parallel build, you may need to use

mpirun -n 1 abinit --version

even for a sequential run as certain MPI libraries are not able to bootstrap the MPI environment without mpirun (mpiexec). On some clusters with Slurm, the sysadmin may ask you to use srun instead of mpirun.

To get the summary of options activated during the build, run abinit with the -b option (or --build if you prefer the verbose version)

./src/98_main/abinit -b

If the executable does not crash (🙌), you may want to execute

make test_fast

to run some basic tests. If something goes wrong when executing the binary or when running the tests, checkout the Troubleshooting section for possible solutions.

Finally, you may want to execute the python script in the tests directory in order to validate the build before running production calculations:

cd tests
../../tests/ v1 -j4

As usual, use:

../../tests/ --help

to list the available options. A more detailed discussion is given in this page.


Dynamic libraries and ldd

Since we decided to compile with dynamic linking, the external libraries are not included in the final executables. Actually, the libraries will be loaded by the Operating System (OS) at runtime when we execute the binary. The OS will search for dynamic libraries using the list of directories specified in $LD_LIBRARY_PATH ($DYLD_LIBRARY_PATH for MacOs).

A typical mistake is to execute abinit with a wrong $LD_LIBRARY_PATH that is either empty or different from the one used when compiling the code (if it’s different and it works, I assume you know what you are doing so you should not be reading this section!)
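A minimal sketch of the fix, assuming the $HOME/local prefix used throughout this tutorial (you may want to add this line to your ~/.bash_profile):

```shell
# Prepend our installation directory so the loader finds our libraries first
export LD_LIBRARY_PATH="$HOME/local/lib:${LD_LIBRARY_PATH:-}"
```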

On Linux, one can use the ldd tool to print the shared objects (shared libraries) required by each program or shared object specified on the command line:

ldd src/98_main/abinit

        (0x00007fffbe7a4000)
        => /home/gmatteo/local/lib/ (0x00007fc892155000)
        => /home/gmatteo/local/lib/ (0x00007fc891ede000)
        => /home/gmatteo/local/lib/ (0x00007fc891b62000)
        => /home/gmatteo/local/lib/ (0x00007fc89193c000)
        => /home/gmatteo/local/lib/ (0x00007fc891199000)
        => /lib64/ (0x00007fc890f74000)
        => /lib64/ (0x00007fc890d70000)
        => /lib64/ (0x00007fc890a43000)
        => /lib64/ (0x00007fc890741000)
        => /home/gmatteo/local/lib/ (0x00007fc89050a000)
        => /home/gmatteo/local/lib/ (0x00007fc88ffb9000)
        => /lib64/ (0x00007fc88fd7a000)
        => /lib64/ (0x00007fc88fb63000)
        => /lib64/ (0x00007fc88f7a1000)

As expected, our executable uses the openblas, netcdf, hdf5, mpi libraries installed in $HOME/local/lib plus other basic libs coming from lib64 (e.g. libgfortran) added by the compiler.


On MacOsX, replace ldd with otool, using the syntax:

otool -L abinit

If you see entries like:

/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib (compatibility version 1.0.0, current version 1.0.0)

it means that you are linking against the MacOsX vecLib framework. In this case, make sure to use --enable-zdot-bugfix="yes" when configuring the package otherwise the code will crash at runtime due to ABI incompatibility (calling conventions for functions returning complex values). Did I tell you that MacOsX does not care about Fortran? If you wonder about the difference between API and ABI, please read this stackoverflow post.

To understand why LD_LIBRARY_PATH is so important, let’s try to reset the value of this variable with

unset LD_LIBRARY_PATH

then rerun ldd (or otool) again. Do you understand what’s happening here? Why is it not possible to execute abinit with an empty $LD_LIBRARY_PATH? How would you fix the problem?
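Hint: you can run the experiment in a subshell so that your current environment is left untouched once you are done:

```shell
# The parentheses spawn a subshell: the unset does not propagate
# back to the parent shell
( unset LD_LIBRARY_PATH; ldd src/98_main/abinit )
```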


Problems can appear at different levels:

  • configuration time
  • compilation time
  • runtime i.e. when executing the code

Configuration-time errors are usually due to misconfiguration of the environment, missing (hard) dependencies or critical problems in the software stack that will make configure abort. Unfortunately, the error message reported by configure is not always self-explanatory. To pinpoint the source of the problem you will need to search for clues in config.log, especially the error messages associated with the feature/library that is triggering the error.

This is not as easy as it looks since configure sometimes performs multiple tests to detect your architecture and some of these tests are supposed to fail. As a consequence, not all the error messages reported in config.log are necessarily relevant. Even if you find the test that makes configure abort, the error message may be obscure and difficult to decipher. In this case, you can ask for help on the forum but remember to provide enough info on your architecture, the compilation options and, most importantly, a copy of config.log. Without this file, indeed, it is almost impossible to understand what’s going on.

An example will help. Let’s assume we are compiling on a cluster using modules provided by our sysadmin. More specifically, there is an openmpi_intel2013_sp1.1.106 module that is supposed to provide the openmpi implementation of the MPI library compiled with a particular version of the intel compiler (remember what we said about using the same version of the compiler). Obviously we need to load the modules before running configure in order to setup our environment so we issue:

module load openmpi_intel2013_sp1.1.106

The module seems to work as no error message is printed to the terminal and which mpicc shows that the compiler has been added to $PATH. At this point we try to configure ABINIT with:

../configure --with-mpi="${MPI_HOME}"

where $MPI_HOME is an environment variable set by module load (use e.g. env | grep MPI). Unfortunately, the configure script aborts at the very beginning complaining that the C compiler does not work!

checking for gcc... /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc
checking for C compiler default output file name...
configure: error: in `/home/gmatteo/abinit/build':
configure: error: C compiler cannot create executables
See `config.log' for more details.

Let’s analyze the output of configure. The line:

checking for gcc... /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc

indicates that configure was able to find mpicc in ${MPI_HOME}/bin. Then an internal test is executed to make sure the wrapper can compile a rather simple C program, but the test fails and configure aborts immediately with the pretty explanatory message:

configure: error: C compiler cannot create executables
See `config.log' for more details.

If we want to understand why configure failed, we have to open config.log in the editor and search for error messages towards the end of the log file. For example one can search for the string “C compiler cannot create executables”. Immediately above this line, we find the following section:

configure:12104: checking whether the C compiler works
configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c  >&5
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_reg_xrc_rcv_qp@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_modify_xrc_rcv_qp@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_open_xrc_domain@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_unreg_xrc_rcv_qp@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_query_xrc_rcv_qp@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_create_xrc_rcv_qp@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_create_xrc_srq@IBVERBS_1.1'
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_close_xrc_domain@IBVERBS_1.1'
configure:12130: $? = 1
configure:12168: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_TARNAME "abinit"
| #define PACKAGE_VERSION "9.1.2"
| #define PACKAGE_STRING "ABINIT 9.1.2"
| #define PACKAGE_URL ""
| #define PACKAGE "abinit"
| #define VERSION "9.1.2"
| #define ABINIT_VERSION "9.1.2"
| #define ABINIT_VERSION_BUILD "20200824"
| #define ABINIT_VERSION_BASE "9.1"
| #define HAVE_OS_LINUX 1
| /* end confdefs.h.  */
| int
| main ()
| {
|   ;
|   return 0;
| }

The line

configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c  >&5

tells us that configure tried to compile a C file named conftest.c and that the return value stored in the $? shell variable is non-zero thus indicating failure:

configure:12130: $? = 1
configure:12168: result: no
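As an aside, you can check this exit-status convention directly in any shell:

```shell
# $? holds the exit status of the last command: 0 means success,
# any non-zero value signals failure
true
echo $?          # prints 0
false || rc=$?   # capture the non-zero status without aborting the shell
echo $rc         # prints 1
```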

The failing program (the C main after the line “configure: failed program was:”) is a rather simple piece of code and our mpicc compiler is not able to compile it! If we look more carefully at the lines after the invocation of mpicc, we see lots of undefined references to functions of the libibverbs library:

configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c  >&5
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/ undefined reference to `ibv_reg_xrc_rcv_qp@IBVERBS_1.1

This looks like some mess in the system configuration and not necessarily a problem in the ABINIT build system. Perhaps there have been changes to the environment, maybe a system upgrade or the module is simply broken. In this case you should send the config.log to the sysadmin so that he/she can fix the problem or just use another more recent module.

Obviously, one can encounter cases in which modules are properly configured yet the configure script aborts because it does not know how to deal with your software stack. In both cases, config.log is key to pinpoint the problem and sometimes you will find that the problem is rather simple to solve. For instance, you are using Fortran module files produced by gfortran while trying to compile with the intel compiler, or perhaps you are trying to use modules produced by a different version of the same compiler. Perhaps you forgot to add the include directory required by an external library so the compiler cannot find the include file, or maybe there is a typo in the configuration options. The take-home message is that several mistakes can be detected by just inspecting the log messages reported in config.log if you know how to search for them.

Compilation-time errors are usually due to syntax errors, portability issues or Fortran constructs that are not supported by that particular version of the compiler. In the first two cases, please report the problem on the forum. In the latter case, you will need a more recent version of the compiler. Sometimes the compilation aborts with an internal compiler error that should be considered a bug in the compiler rather than an error in the ABINIT source code. Decreasing the optimization level when compiling the particular routine that triggers the error (use -O1 or even -O0 for the most problematic cases) may solve the problem, else try a more recent version of the compiler. If you have made non-trivial changes in the code (modifications in the datatypes/interfaces), run make clean and recompile.

Runtime errors are more difficult to fix as they may require the use of a debugger and some basic understanding of Linux signals. Here we focus on two common scenarios: SIGILL and SIGSEGV.

If the code raises the SIGILL signal, it means that the CPU attempted to execute an instruction it didn’t understand. Very likely, your executables/libraries have been compiled for the wrong architecture. This may happen on clusters when the CPU family available on the frontend differs from the one available on the compute node and aggressive optimization options (-O3, -march, -xHost, etc) are used. Removing the optimization options and using the much safer -O2 level may help. Alternatively, one can configure and compile the source directly on the compute node or use compilation options compatible both with the frontend and the compute node (ask your sysadmin for details).


Never ever run calculations on CPUs belonging to different families unless you know what you are doing. Many MPI codes assume reproducibility at the binary level: on different MPI processes the same set of bits in input should produce the same set of bits in output. If you are running on a heterogeneous cluster, select a queue with nodes of the same CPU family and make sure the code has been compiled with options that are compatible with the compute node.

Segmentation faults (SIGSEGV) are usually due to bugs in the code but they may also be triggered by non-portable code or misconfiguration of the software stack. When reporting this kind of problem on the forum, please add an input file so that developers can try to reproduce the problem. Keep in mind, however, that the problem may not be reproducible on other architectures. The ideal solution would be to run the code under the control of the debugger, use the backtrace to locate the line of code where the segmentation fault occurs and then attach the backtrace to your issue on the forum.

How to run gdb

Using the debugger in sequential mode is really simple. First of all, make sure the code has been compiled with the -g option to generate source-level debug information. To use the gdb GNU debugger, perform the following operations:

  1. Load the executable in the GNU debugger using the syntax:

    gdb path_to_abinit_executable
  2. Run the code with the run command and pass the input file as argument:

    (gdb) run
  3. Wait for the error e.g. SIGSEGV, then print the backtrace with:

    (gdb) bt

PS: avoid debugging code compiled with -O3 or -Ofast as the backtrace may not be reliable. Sometimes even -O2 (the default) is not reliable and you have to resort to print statements and bisection to bracket the problematic piece of code.

How to compile ABINIT on a cluster with the intel toolchain and modules

On intel-based clusters, we suggest compiling ABINIT with the intel compilers (icc and ifort) and MKL in order to achieve better performance. The MKL library, indeed, provides highly-optimized implementations of BLAS, LAPACK, FFT, and SCALAPACK that can lead to a significant speedup while considerably simplifying the compilation process. As for MPI, intel provides its own implementation (Intel MPI) but it is also possible to employ openmpi or mpich provided these libraries have been compiled with the same intel compilers.

In what follows, we assume a cluster in which scientific software is managed with modules and the EasyBuild framework. Before proceeding with the next steps, it is worth summarizing the most important module commands.

module commands

To list the modules installed on the cluster, use:

module avail

The syntax to load the module MODULE_NAME is:

module load MODULE_NAME


module list

prints the list of modules currently loaded.

To list all modules containing “string”, use:

module spider string  # requires LMOD with LUA


module show MODULE_NAME

shows the commands in the module file (useful for debugging). For a more complete introduction to environment modules, please consult this page.

On my cluster, I can activate intel MPI by executing:

module load releases/2018b
module load intel/2018b
module load iimpi/2018b

to load the 2018b intel MPI EasyBuild toolchain. On your cluster, you may need to load different modules but the effect at the level of the shell environment should be the same. More specifically, mpiifort is now in PATH (note how mpiifort wraps intel ifort):

mpiifort -v
mpiifort for the Intel(R) MPI Library 2018 Update 3 for Linux*
Copyright(C) 2003-2018, Intel Corporation.  All rights reserved.
ifort version 18.0.3  

The directories with the libraries required by the compiler/MPI have been added to LD_LIBRARY_PATH, while CPATH stores the locations to search for include files. Last but not least, the environment should now define intel-specific variables whose names start with I_:

$ env | grep I_

Since I_MPI_ROOT points to the installation directory of intel MPI, we can use this environment variable to tell configure how to locate our MPI installation:

with_mpi="${I_MPI_ROOT}"
FC="mpiifort"  # Use intel wrappers. Important!
CC="mpiicc"    # See warning below

# with_optim_flavor="aggressive"
# FCFLAGS="-g -O2"

Optionally, you can use with_optim_flavor="aggressive" to let configure select compilation options tuned for performance, or set the options explicitly via FCFLAGS.


Intel MPI installs two sets of MPI wrappers: (mpiicc, mpiicpc, mpiifort) and (mpicc, mpicxx, mpif90), which use the Intel compilers and the GNU compilers, respectively. Use the -show option (e.g. mpif90 -show) to display the underlying compiler. As expected,

$ mpif90 -v

mpif90 for the Intel(R) MPI Library 2018 Update 3 for Linux*
Thread model: posix
gcc version 7.3.0 (GCC)

shows that mpif90 wraps GNU gfortran. Unless you really need to use the GNU compilers, we strongly suggest using the wrappers based on the Intel compilers (mpiicc, mpiicpc, mpiifort).

If we run configure with these options, we should see a section at the beginning in which the build system tests the basic capabilities of the Fortran compiler. If configure stops at this level, it means there’s a severe problem with your toolchain.

 === Fortran support                                                        ===

checking for mpiifort... /opt/cecisw/arch/easybuild/2018b/software/impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30/bin64/mpiifort
checking whether we are using the GNU Fortran compiler... no
checking whether mpiifort accepts -g... yes
checking which type of Fortran compiler we have... intel 18.0

Then we have a section in which configure tests the MPI implementation:

=== Multicore architecture support                                         ===

checking whether to enable OpenMP support... no
checking whether to enable MPI... yes
checking how MPI parameters have been set... yon
checking whether the MPI C compiler is set... yes
checking whether the MPI C++ compiler is set... yes
checking whether the MPI Fortran compiler is set... yes
checking for MPI C preprocessing flags...
checking for MPI C flags...
checking for MPI C++ flags...
checking for MPI Fortran flags...
checking for MPI linker flags...
checking for MPI library flags...
checking whether the MPI C API works... yes
checking whether the MPI C environment works... yes
checking whether the MPI C++ API works... yes
checking whether the MPI C++ environment works... yes
checking whether the MPI Fortran API works... yes
checking whether the MPI Fortran environment works... yes
checking whether to build MPI I/O code... auto
checking which level of MPI is supported by the Fortran compiler... 2
configure: forcing MPI-2 standard level support
checking whether the MPI library supports MPI_INTEGER16... yes
checking whether the MPI library supports MPI_CREATE_TYPE_STRUCT... yes
checking whether the MPI library supports MPI_IBCAST (MPI3)... yes
checking whether the MPI library supports MPI_IALLGATHER (MPI3)... yes
checking whether the MPI library supports MPI_IALLTOALL (MPI3)... yes
checking whether the MPI library supports MPI_IALLTOALLV (MPI3)... yes
checking whether the MPI library supports MPI_IGATHERV (MPI3)... yes
checking whether the MPI library supports MPI_IALLREDUCE (MPI3)... yes
configure: dumping all MPI parameters for diagnostics
configure: ------------------------------------------
configure: Configure options:
configure:   * enable_mpi_inplace = ''
configure:   * enable_mpi_io      = ''
configure:   * with_mpi           = 'yes'
configure:   * with_mpi_level     = ''
configure: Internal parameters
configure:   * MPI enabled (required)                       : yes
configure:   * MPI C compiler is set (required)             : yes
configure:   * MPI C compiler works (required)              : yes
configure:   * MPI Fortran compiler is set (required)       : yes
configure:   * MPI Fortran compiler works (required)        : yes
configure:   * MPI environment usable (required)            : yes
configure:   * MPI C++ compiler is set (optional)           : yes
configure:   * MPI C++ compiler works (optional)            : yes
configure:   * MPI-in-place enabled (optional)              : no
configure:   * MPI-IO enabled (optional)                    : yes
configure:   * MPI configuration type (computed)            : yon
configure:   * MPI Fortran level supported (detected)       : 2
configure:   * MPI_Get_library_version available (detected) : unknown
configure: All required parameters must be set to 'yes'.
configure: If not, the configuration and/or the build with
configure: MPI support will very likely fail.
checking whether to activate GPU support... no

So far so good. Our compilers and MPI seem to work so we can proceed with the setup of the external libraries.

On my cluster, module load intel/2018b has also defined the MKLROOT environment variable:

env | grep MKL


that can be used in conjunction with the highly recommended mkl-link-line-advisor to link with MKL. On other clusters, you may need to load an mkl module explicitly (or composerxe or parallel-studio-xe).
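Before writing the configuration file, it is worth verifying that MKLROOT is usable. A minimal sketch (the directory layout below is the usual one for recent MKL versions, but check yours):

```shell
# Fail early if MKLROOT is not set or does not contain the expected lib directory.
if [ -d "${MKLROOT}/lib/intel64" ]; then
    echo "MKL libraries found in ${MKLROOT}/lib/intel64"
else
    echo "MKLROOT is not set correctly: load the mkl module first" >&2
fi
```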

Let’s now discuss how to configure ABINIT with MKL starting from the simplest cases:

  • BLAS and Lapack from MKL
  • FFT from MKL DFTI
  • no Scalapack
  • no OpenMP threads.

These are the options I have to select in the mkl-link-line-advisor to enable this configuration with my software stack:

The options should be self-explanatory. Perhaps the trickiest part is Select interface layer, where one should select 32-bit integer. This simply means that we are compiling and linking code in which the default integer is 32 bits wide (the default behaviour). Note how the threading layer is set to Sequential (no OpenMP threads) and how we chose to link with the MKL libraries explicitly to get the full link line and compiler options.

Now we can use these options in our configuration file:


# BLAS and Lapack from MKL

LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"

# FFT from MKL

FFT_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"


Do not combine MKL with FFTW3 for the FFT, as MKL exports the same symbols as FFTW. The linker would therefore receive multiple definitions of the same procedures and the behaviour is undefined! Use either MKL alone or FFTW3 with e.g. openblas.

If we run configure with these options, we should obtain the following output in the Linear algebra support section:

=== Linear algebra support                                                 ===

checking for the requested linear algebra flavor... mkl
checking for the serial linear algebra detection sequence... mkl
checking for the MPI linear algebra detection sequence... mkl
checking for the MPI acceleration linear algebra detection sequence... none
checking how to detect linear algebra libraries... verify
checking for BLAS support in the specified libraries... yes
checking for AXPBY support in the BLAS libraries... yes
checking for GEMM3M in the BLAS libraries... yes
checking for mkl_imatcopy in the specified libraries... yes
checking for mkl_omatcopy in the specified libraries... yes
checking for mkl_omatadd in the specified libraries... yes
checking for mkl_set/get_threads in the specified libraries... yes
checking for LAPACK support in the specified libraries... yes
checking for LAPACKE C API support in the specified libraries... no
checking for PLASMA support in the specified libraries... no
checking for BLACS support in the specified libraries... no
checking for ELPA support in the specified libraries... no
checking how linear algebra parameters have been set... env (flavor: kwd)
checking for the actual linear algebra flavor... mkl
checking for linear algebra C preprocessing flags... none
checking for linear algebra C flags... none
checking for linear algebra C++ flags... none
checking for linear algebra Fortran flags... -I/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/include
checking for linear algebra linker flags... none
checking for linear algebra library flags... -L/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl
configure: WARNING: parallel linear algebra is not available

Excellent: configure detected a working BLAS/Lapack installation, plus some MKL extensions (mkl_imatcopy etc.). BLACS and Scalapack (parallel linear algebra) have not been detected, but this is expected as we did not ask for these libraries in the mkl-link-line-advisor GUI.

This is the section in which configure checks for the presence of the FFT library (dfti is the DFTI interface of MKL, while goedecker denotes the internal Fortran implementation).

=== Optimized FFT support                                                  ===

checking which FFT flavors to enable... dfti goedecker
checking for FFT flavor... dfti
checking for FFT C preprocessing flags...
checking for FFT C flags...
checking for FFT Fortran flags...
checking for FFT linker flags...
checking for FFT library flags...
checking for the FFT flavor to try... dfti
checking whether to enable DFTI... yes
checking how DFTI parameters have been set... mkl
checking for DFTI C preprocessing flags... none
checking for DFTI C flags... none
checking for DFTI Fortran flags... -I/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/include
checking for DFTI linker flags... none
checking for DFTI library flags... -L/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl
checking whether the DFTI library works... yes
checking for the actual FFT flavor to use... dfti

The line

checking whether the DFTI library works... yes

tells us that DFTI has been found and that we can link against it, although this does not necessarily mean that the final executable will work out of the box.


You may have noticed that it is also possible to use MKL with GNU gfortran, but in this case you need a different set of libraries, including the so-called compatibility layer that allows GCC code to call the MKL routines. Also note that MKL Scalapack requires either Intel MPI or MPICH2.

Optional Exercise

Compile ABINIT with BLAS/Scalapack from MKL. Scalapack (or ELPA) may lead to a significant speedup when running GS calculations with large nband. See also the np_slk input variable.
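As a starting point for the exercise, the link line produced by the mkl-link-line-advisor for Intel MPI with sequential threading and Scalapack typically looks like the fragment below (the library names depend on your MKL version, so double-check them in the advisor):

```shell
# Hypothetical ac9 fragment: BLAS/Lapack + Scalapack from MKL (LP64, Intel MPI).
LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 \
  -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl"
```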

How to compile libxc, netcdf4/hdf5 with intel

At this point, you should check whether your cluster provides modules for libxc, netcdf-fortran, netcdf-c and hdf5 compiled with the same toolchain. Use module spider netcdf or module keyword netcdf to find the modules (if any).

Hopefully, you will find a pre-existing installation of netcdf and hdf5 (possibly with MPI-IO support), as these libraries are quite common on HPC centers. Load these modules to have nc-config and nf-config in your $PATH, and then use the --prefix option to specify the installation directory as done in the previous examples. Unfortunately, libxc and hdf5 do not provide similar tools, so you will have to find the installation directories of these libraries and pass them to configure.


You may encounter problems with libxc as this library is rather domain-specific and not all the HPC centers install it. If your cluster does not provide libxc, it should not be that difficult to reuse the expertise acquired in this tutorial to build your version and then install the missing dependencies inside $HOME/local. Just remember to:

  1. load the correct modules for MPI with the associated compilers before configuring
  2. configure with CC=mpiicc and FC=mpiifort so that the intel compilers are used
  3. install the libraries and prepend $HOME/local/lib to LD_LIBRARY_PATH
  4. use the with_LIBNAME option in conjunction with $HOME/local/lib in the ac9 file.
  5. run configure with the ac9 file.
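The steps above can be sketched as follows (the module name, libxc version and paths are assumptions to be adapted to your cluster):

```shell
module load intel/2018b                                  # step 1: compilers + MPI
tar -xzf libxc-4.3.4.tar.gz && cd libxc-4.3.4
./configure --prefix=$HOME/local CC=mpiicc FC=mpiifort   # step 2: use Intel compilers
make -j4 && make check && make install                   # step 3: build and install
export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH
# steps 4-5: in the ac9 file, point configure to the installation with e.g.
#   with_libxc="$HOME/local"
```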

In the worst-case scenario, in which neither netcdf4/hdf5 nor libxc is installed, you may want to use the internal fallbacks. The procedure goes as follows:

  • Start by configuring with a minimalistic set of options, just for MPI and MKL (linalg and FFT).
  • The build system will detect that some hard dependencies are missing and will generate a script in the fallbacks directory.
  • Execute the script to build the missing dependencies using the toolchain specified in the initial configuration file.
  • Finally, reconfigure ABINIT with the fallbacks.
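Schematically, the workflow looks like this (the script name is the one generated by recent versions of the build system, and the ac9 file names are placeholders):

```shell
# 1) configure with a minimal ac9 file (MPI + MKL only); hard deps are missing
./configure --with-config-file=minimal.ac9
# 2) build the missing dependencies with the same toolchain
cd fallbacks && ./build-abinit-fallbacks.sh
# 3) reconfigure ABINIT with the options suggested at the end of the fallback build
cd .. && ./configure --with-config-file=final.ac9 && make -j4
```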

How to compile ABINIT with support for OpenMP threads


For a quick introduction to MPI and OpenMP and a comparison between the two parallel programming paradigms, see this presentation.

Compiling ABINIT with OpenMP is not that difficult as everything boils down to:

  • Using a threaded version for BLAS, LAPACK and FFTs
  • Passing enable_openmp="yes" to the ABINIT configure script so that OpenMP is also activated at the level of the ABINIT Fortran code.

On the contrary, answering the questions:

  • When and why should I use OpenMP threads for my calculations?
  • How many threads should I use and what is the parallel speedup I should expect?

is much more difficult as there are several factors that should be taken into account.


To keep a long story short, one should use OpenMP threads when one starts to hit limitations or bottlenecks of the MPI implementation, especially at the level of memory requirements or parallel scalability. These problems are usually observed in calculations with large natom, mpw, nband.

As a matter of fact, it does not make sense to compile ABINIT with OpenMP if your calculations are relatively small. Indeed, ABINIT is mainly designed with MPI parallelism in mind. For instance, calculations done with a relatively large number of k-points will benefit more from MPI than from OpenMP, especially if the number of MPI processes divides the number of k-points exactly. Even worse, do not compile the code with OpenMP support if you do not plan to use threads, because the OpenMP version incurs an additional overhead due to the creation of the threaded sections.

Remember also that increasing the number of threads does not necessarily lead to faster calculations (the same is true for MPI processes). There is always an optimal number of threads (or MPI processes) beyond which the parallel efficiency starts to deteriorate. Unfortunately, this value is strongly hardware- and software-dependent, so you will need to benchmark the code before running production calculations.

Last but not least, OpenMP threads are not necessarily POSIX threads. Hence, if a library provides both OpenMP- and POSIX-threaded versions, link with the OpenMP version.

After this necessary preamble, let's discuss how to compile a threaded version. To activate OpenMP support in the Fortran routines of ABINIT, pass

enable_openmp="yes"

to the configure script via the configuration file. This will automatically activate the compilation options needed to enable OpenMP in the ABINIT source code (e.g. the -fopenmp option for gfortran) and define the CPP variable HAVE_OPENMP in config.h. Note that this option is just part of the story, as a significant fraction of the wall-time is spent in the external BLAS/FFT routines, so do not expect big speedups if you do not link against threaded libraries.

If you are building your own software stack for BLAS/LAPACK and FFT, you will have to reconfigure with the correct options for the OpenMP version and then issue make and make install again to build the threaded version. Also note that some libraries may change. FFTW3, for example, ships the OpenMP version in libfftw3_omp (see the official documentation) hence the list of libraries in FFTW3_LIBS should be changed accordingly.

Life is much easier if you are using intel MKL, because in this case it is just a matter of selecting OpenMP threading as the threading layer in the mkl-link-line-advisor interface and then passing these options to the ABINIT build system together with enable_openmp="yes".
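For instance, with the Intel compilers the resulting configuration fragment might look like this (the library names follow the usual advisor output for OpenMP threading; verify them for your MKL version):

```shell
enable_openmp="yes"

# Threaded MKL: note -lmkl_intel_thread (instead of -lmkl_sequential) and -liomp5.
LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"
FFT_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"
```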


When using threaded libraries, remember to set the number of threads explicitly with e.g.

export OMP_NUM_THREADS=2

either in your bash_profile or in the submission script (or in both). By default, OpenMP uses all the available CPUs, so it is very easy to overload the machine, especially if one uses threads in conjunction with MPI processes.

When running threaded applications with MPI, we suggest allocating a number of physical CPUs equal to the number of MPI processes times the number of OpenMP threads. Computationally intensive applications such as DFT codes are unlikely to benefit from Hyper-Threading technology (usually reported as the number of logical CPUs).
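As an illustration, a Slurm submission script for 4 MPI processes with 2 OpenMP threads each would request 8 physical cores (the scheduler directives and the input-file name are assumptions; adapt them to your queueing system):

```shell
#!/bin/bash
#SBATCH --ntasks=4              # MPI processes
#SBATCH --cpus-per-task=2       # OpenMP threads per MPI process
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
mpirun -np 4 abinit run.abi > run.log 2> run.err
```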

We also recommend increasing the stack size limit with e.g.

ulimit -s unlimited

if the sysadmin allows you to do so.

To run the ABINIT test suite with e.g. two OpenMP threads, use the -o2 option of