First tutorial on the Projector Augmented-Wave (PAW) technique
The Projector Augmented-Wave technique: how to use it?
This tutorial aims at showing how to perform a calculation within the Projector Augmented-Wave (PAW) method.
You will learn how to launch a PAW calculation and which input variables govern convergence and numerical efficiency. You are supposed to know how to use ABINIT with Norm-Conserving PseudoPotentials (NCPP).
Important
All the necessary input files to run the examples can be found in the ~abinit/tests/ directory, where ~abinit is the absolute path of the abinit top-level directory.
To execute the tutorials, you are supposed to create a working directory (Work*) and copy there the input files and the files file of the lesson.
The files file ending with _x (e.g. tbase1_x.files) must be edited every time you start to use a new input file. You will discover more about the files file in section 1.1 of the help file.
To make things easier, we suggest to define some handy environment variables by executing the following lines in the terminal:

```shell
export ABI_HOME=Replace_with_the_absolute_path_to_the_abinit_top_level_dir
export ABI_TESTS=$ABI_HOME/tests/
export ABI_TUTORIAL=$ABI_TESTS/tutorial/       # Files for base1-4, GW ...
export ABI_TUTORESPFN=$ABI_TESTS/tutorespfn/   # Files specific to DFPT tutorials.
export ABI_TUTOPARAL=$ABI_TESTS/tutoparal/     # Tutorials about the parallel version.
export ABI_TUTOPLUGS=$ABI_TESTS/tutoplugs/     # Examples using external libraries.
export ABI_PSPDIR=$ABI_TESTS/Psps_for_tests/   # Pseudopotentials used in the examples.
export PATH=$ABI_HOME/src/98_main/:$PATH
```
The examples in this tutorial use these shell variables, so that one can easily copy and paste the code snippets into the terminal (remember to set ABI_HOME first!).
The last line adds the directory containing the executables to your PATH, so that one can invoke the code by simply typing abinit in the terminal instead of providing the absolute path.
Finally, to run the examples in parallel with e.g. 2 MPI processes, use mpirun (or mpiexec) with the syntax:

```shell
mpirun -n 2 abinit < files_file > log 2> err
```

The standard output of the application is redirected to log, while err collects the standard error (runtime error messages, if any, are written there).
This tutorial should take about 1.5 hours.
1. Summary of the PAW method
The Projector Augmented-Wave approach was introduced by Peter Blöchl in 1994:
“The Projector Augmented-Wave method is an extension of augmented wave methods and the pseudopotential approach, which combines their traditions into a unified electronic structure method.”
It is based on a linear and invertible transformation (the PAW transformation) that connects the “true” wavefunctions \Psi with “auxiliary” (or “pseudo”) soft wavefunctions \tPsi:

|\Psi\rangle = |\tPsi\rangle + \sum_i \bigl(|\phi_i\rangle - |\tphi_i\rangle\bigr) \langle\tprj_i|\tPsi\rangle
This relation is based on the definition of augmentation regions (atomic spheres of radius r_c) around the atoms, in which the partial waves |\phi_i\rangle form a basis of atomic wavefunctions; the |\tphi_i\rangle are pseudized partial waves (obtained from the |\phi_i\rangle), and the |\tprj_i\rangle are dual functions of the |\tphi_i\rangle, called projectors. It is therefore possible to write every quantity depending on \Psi_n (density, energy, Hamiltonian) as a function of \tPsi_n, and to find \tPsi_n by solving self-consistent equations.
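For instance, the expectation value of any local operator A then splits into a smooth plane-wave part plus on-site corrections evaluated inside the augmentation spheres (a standard result of the PAW formalism [Bloechl1994], written here with the same notations as above):

\langle A\rangle = \langle\tPsi|A|\tPsi\rangle + \sum_{ij} \langle\tPsi|\tprj_i\rangle \bigl( \langle\phi_i|A|\phi_j\rangle - \langle\tphi_i|A|\tphi_j\rangle \bigr) \langle\tprj_j|\tPsi\rangle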
The PAW method has two main advantages:
- From \tPsi, it is always possible to obtain the true all-electron wavefunction \Psi;
- Its convergence is comparable to that of Ultra-Soft PseudoPotentials (USPP).
From a practical point of view, a PAW calculation is rather similar to a Norm-Conserving PseudoPotential one. Most noticeably, one has to use a special atomic data file (the PAW dataset) that contains the \phi_i, \tphi_i and \tprj_i, and that plays the same role as a pseudopotential file.
Tip
It is highly recommended to read the following papers to correctly understand the basic concepts of the PAW method: [Bloechl1994] and [Kresse1999]. The implementation of the PAW method in ABINIT is detailed in [Torrent2008], which describes the specific notations and formulations.
2. Using PAW with ABINIT
Before continuing, you might consider working in a different subdirectory, as for the other tutorials. Why not Work_paw1?
In what follows, the names of files are mentioned as if you were in this subdirectory.
All the input files can be found in the $ABI_TUTORIAL/Input directory.
Important
You can compare your results with reference output files located in the $ABI_TUTORIAL/Refs and $ABI_TUTORIAL/Refs/tpaw1_addons directories (for the present tutorial they are named tpaw1_*.out).
The input file tpaw1_1.in is an example of a file to be used to compute the total energy of diamond at the experimental volume (within the LDA exchange-correlation functional). You might use the file tpaw1_1.files (with a standard Norm-Conserving PseudoPotential) as a files file, and get the corresponding output file (it is available in $ABI_TUTORIAL/Refs/tpaw1_1.out).
Copy the files tpaw1_1.in and tpaw1_1.files into your work directory,

```shell
cd $ABI_TUTORIAL/Input
mkdir Work_paw1
cd Work_paw1
cp ../tpaw1_1.files .
cp ../tpaw1_1.in .
```

and execute:

```shell
abinit < tpaw1_1.files > log 2> err &
```
The code should run very quickly. In the meantime, you can read the input file and see that it contains no PAW-specific input variable.
```
tpaw1_1.in
tpaw1_1.out
tpaw1_1i
tpaw1_1o
tpaw1_1tmp
../../../Psps_for_tests/6c.pspnc
#PAW: Replace the last line by the following
../../../Psps_for_tests/6c.lda.atompaw
```
```
# =======================================
# Input for PAW1 tutorial
#   Diamond at experimental volume
# =======================================

# Cell and atoms definition
acell 3*3.567 angstrom
rprim 0.0 1/2 1/2
      1/2 0.0 1/2
      1/2 1/2 0.0
ntypat 1
natom 2
typat 2*1
xred 0.0 0.0 0.0
     1/4 1/4 1/4
znucl 6
nband 6

# Grid definitions
ecut 15.
ecutsm 0.5

# SCF cycle parameters
tolvrs 1.0d-10
nstep 20

# K-points and sym
nsym 0
occopt 1
ngkpt 6 6 6
nshiftk 4
shiftk 0.5 0.5 0.5
       0.5 0.0 0.0
       0.0 0.5 0.0
       0.0 0.0 0.5

# I/O parameters
prtwf 0 prtden 0 prteig 0

## After modifying the following section, one might need to
## regenerate the pickle database with runtests.py -r
#%%<BEGIN TEST_INFO>
#%% [setup]
#%% executable = abinit
#%% [files]
#%% files_to_test =
#%%   tpaw1_1.out, tolnlines= 0, tolabs= 0.000e+00, tolrel= 0.000e+00
#%% psp_files = 6c.pspnc
#%% [paral_info]
#%% max_nprocs = 4
#%% [extra_info]
#%% authors = M. Torrent
#%% keywords = PAW
#%% description =
#%%   Input for PAW1 tutorial
#%%   Diamond at experimental volume
#%%<END TEST_INFO>
```
Now, open the tpaw1_1.files file and change the last line: replace the 6c.pspnc file with 6c.lda.atompaw. Run the code again:

```shell
abinit < tpaw1_1.files > log 2> err &
```
Your run should stop almost immediately: the input file is missing pawecutdg, which is mandatory for PAW calculations!
Add the line:

```
pawecutdg 50
```

to tpaw1_1.in and run it again. Now the code completes successfully.
Note
The time needed for the PAW run is greater than the time needed for the Norm-Conserving PseudoPotential run; indeed, at a given value of the plane-wave cutoff energy ecut, PAW requires more computational resources:
- the on-site contributions have to be computed;
- the non-local contribution of the PAW dataset uses 2 projectors per angular momentum, while the non-local contribution of the present Norm-Conserving PseudoPotential uses only one.
However, as the plane-wave cutoff energy required by PAW is much smaller than the cutoff needed for the Norm-Conserving PseudoPotential (see next section), a PAW calculation will actually require less CPU time.
Let’s open the output file (tpaw1_1.out) and have a look inside. Compared to an output file for a Norm-Conserving PseudoPotential run, an output file for PAW contains the following specific items:
- At the beginning of the file, some specific default PAW input variables (ngfftdg, pawecutdg, and useylm), mentioned in the section:

```
 outvars: echo values of preprocessed input variables
```
- The use of two FFT grids, mentioned in:

```
 Coarse grid specifications (used for wavefunctions):
  getcut: wavevector= 0.0000 0.0000 0.0000  ngfft= 18 18 18
          ecut(hartree)= 15.000 => boxcut(ratio)= 2.17276
 Fine grid specifications (used for densities):
  getcut: wavevector= 0.0000 0.0000 0.0000  ngfft= 32 32 32
          ecut(hartree)= 50.000 => boxcut(ratio)= 2.10918
```
- A specific description of the PAW dataset (you might follow the PAW2 tutorial, devoted to the building of PAW atomic data, for a complete understanding of the file):

```
 Pseudopotential format is: paw4
 basis_size (lnmax)= 4 (lmn_size= 8), orbitals= 0 0 1 1
 Spheres core radius: rc_sph= 1.50000000
 4 radial meshes are used:
  - mesh 1: r(i)=AA*[exp(BB*(i-1))-1], size= 505 , AA= 0.21824E-02 BB= 0.13095E-01
  - mesh 2: r(i)=AA*[exp(BB*(i-1))-1], size= 500 , AA= 0.21824E-02 BB= 0.13095E-01
  - mesh 3: r(i)=AA*[exp(BB*(i-1))-1], size= 530 , AA= 0.21824E-02 BB= 0.13095E-01
  - mesh 4: r(i)=AA*[exp(BB*(i-1))-1], size= 644 , AA= 0.21824E-02 BB= 0.13095E-01
 Shapefunction is SIN type: shapef(r)=[sin(pi*r/rshp)/(pi*r/rshp)]**2
 Radius for shape functions = sphere core radius
 Radial grid used for partial waves is grid 1
 Radial grid used for projectors is grid 2
 Radial grid used for (t)core density is grid 3
 Radial grid used for Vloc is grid 4
 Radial grid used for pseudo valence density is grid 4
```
- After the SCF cycle section: the value of the integrated compensation charge evaluated by two different numerical methodologies, (1) computed in the augmentation regions on the “spherical” grid, (2) computed in the whole simulation cell on the “FFT” grid. These two values are discussed in a forthcoming section.

```
 PAW TEST:
 ==== Compensation charge inside spheres ============
 The following values must be close to each other ...
 Compensation charge over spherical meshes = 0.413178580356274
 Compensation charge over fine fft grid = 0.413177280314290
```
- Information concerning the non-local term (pseudopotential strength D_{ij}) and the spherical density matrix (augmentation wave occupancies \rho_{ij}):

```
 ==== Results concerning PAW augmentation regions ====
 Total pseudopotential strength Dij (hartree):
 Atom # 1
 ...
 Atom # 2
 ...
 Augmentation waves occupancies Rhoij:
 Atom # 1
 ...
 Atom # 2
 ...
```
- At the end of the file, the decomposition of the total energy, obtained both by direct calculation and by the double-counting calculation:

```
 Components of total free energy (in Hartree) :
    Kinetic energy  =  6.40164318808980E+00
    Hartree energy  =  9.63456708252837E-01
    XC energy       = -3.53223656186138E+00
    Ewald energy    = -1.27864121210521E+01
    PspCore energy  =  5.41017918797015E-01
    Loc. psp. energy= -5.27003595856857E+00
    Spherical terms =  2.15689044331394E+00
    >>>>> Internal E= -1.15256763830284E+01
 "Double-counting" decomposition of free energy:
    Band energy     =  6.87331579398577E-01
    Ewald energy    = -1.27864121210521E+01
    PspCore energy  =  5.41017918797015E-01
    Dble-C XC-energy=  1.22161340385476E-01
    Spherical terms = -8.97688814082645E-02
    >>>>> Internal E= -1.15256701638793E+01
 >Total energy in eV    = -3.13629604304723E+02
 >Total DC energy in eV = -3.13629435073068E+02
```
Note
The PAW total energy is not equal to the one obtained in the Norm-Conserving PseudoPotential case: in the latter, the energy reference has been arbitrarily shifted by the pseudopotential construction procedure. Comparing total energies computed with different PAW potentials is more meaningful: most parts of the energy are calculated exactly, and in general you should be able to compare (valence) energies between different PAW potentials or different codes.
3. Convergence with respect to the plane-wave basis cutoff
As in the usual Norm-Conserving PseudoPotential case, the critical convergence parameter is the cutoff energy defining the size of the plane-wave basis.
3.a. Convergence with respect to ecut in the Norm-Conserving PseudoPotential case
The input file tpaw1_2.in contains data to be used to compute the convergence in ecut for diamond (at the experimental volume). There are 9 datasets, with ecut increasing from 8 Ha to 24 Ha.
You might use the tpaw1_2.files file (with a standard Norm-Conserving PseudoPotential), and run:

```shell
abinit < tpaw1_2.files > log 2> err &
```
You should obtain the following total energy values (see tpaw1_2.out):

```
    etotal1  -1.1628880677E+01
    etotal2  -1.1828052470E+01
    etotal3  -1.1921833945E+01
    etotal4  -1.1976374633E+01
    etotal5  -1.2017601960E+01
    etotal6  -1.2046855404E+01
    etotal7  -1.2062173253E+01
    etotal8  -1.2069642342E+01
    etotal9  -1.2073328672E+01
```
You can check that the etotal convergence (at the 1 mHartree level) is not achieved even for ecut = 24 Hartree.
3.b. Convergence with respect to ecut in the PAW case
Use the same input files as in section 3.a.
Again, modify the last line of tpaw1_2.files, replacing the 6c.pspnc file by 6c.lda.atompaw.
Run the code again and open the output file. You should obtain the values:

```
    etotal1  -1.1474828697E+01
    etotal2  -1.1518675625E+01
    etotal3  -1.1524581240E+01
    etotal4  -1.1525548758E+01
    etotal5  -1.1525741818E+01
    etotal6  -1.1525865084E+01
    etotal7  -1.1525926864E+01
    etotal8  -1.1525947400E+01
    etotal9  -1.1525954817E+01
```
You can check that:
- The etotal convergence (at the 1 mHartree level) is achieved for 12 <= ecut <= 14 Hartree (etotal4 is within 1 mHartree of the final value);
- The etotal convergence (at the 0.1 mHartree level) is achieved for 16 <= ecut <= 18 Hartree (etotal6 is within 0.1 mHartree of the final value).
With the same input parameters, a PAW calculation of diamond thus needs a significantly lower cutoff than a calculation with NCPPs.
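The convergence checks above are easy to script. Below is a minimal sketch (the helper name first_converged is ours, not an ABINIT tool; the energies are the PAW values listed above):

```python
# Find the lowest ecut whose total energy is within `tol` (Ha) of the
# value obtained at the highest cutoff in the series.
def first_converged(ecut, etotal, tol):
    ref = etotal[-1]  # best available estimate of the converged energy
    for e_cut, e_tot in zip(ecut, etotal):
        if abs(e_tot - ref) <= tol:
            return e_cut
    return None

# PAW total energies of diamond versus ecut (Ha), from the run above.
ecut_list = [8, 10, 12, 14, 16, 18, 20, 22, 24]
etotal_paw = [-1.1474828697e+01, -1.1518675625e+01, -1.1524581240e+01,
              -1.1525548758e+01, -1.1525741818e+01, -1.1525865084e+01,
              -1.1525926864e+01, -1.1525947400e+01, -1.1525954817e+01]

print(first_converged(ecut_list, etotal_paw, 1e-3))  # 1 mHa level   -> 14
print(first_converged(ecut_list, etotal_paw, 1e-4))  # 0.1 mHa level -> 18
```

Running it on the Norm-Conserving energies of section 3.a returns None: no cutoff in the series reaches the 1 mHa level.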
4. Convergence with respect to the double grid FFT cutoff
In an NCPP calculation, the plane-wave grid for the density should be (at least) twice as dense as the wavefunction grid in each direction.
In a PAW calculation, the plane-wave grid for the density is tunable through the input variable pawecutdg (PAW: ECUT for Double Grid). This grid is mainly needed to map densities and potentials located in the augmentation regions (spheres) onto the global FFT grid. The number of points of the Fourier grid located in the spheres must be large enough to preserve a minimal accuracy; it is determined by the cutoff energy pawecutdg. An alternative is to use the input variable ngfftdg directly. One of the objects most sensitive to this “grid transfer” is the compensation charge density: its integral over the augmentation regions (on the spherical grids) must match its integral over the whole simulation cell (on the FFT grid).
Use now the input file tpaw1_3.in and the associated tpaw1_3.files file.
The only difference with the tpaw1_2.in file is that ecut is fixed to 12
Ha, while pawecutdg runs from 12 to 39 Ha.
```
tpaw1_3.in
tpaw1_3.out
tpaw1_3i
tpaw1_3o
tpaw1_3tmp
../../../Psps_for_tests/6c.lda.atompaw
```
```
# =======================================
# Input for PAW1 tutorial
#   Diamond at experimental volume
# =======================================

# Datasets: convergence on pawecutdg
ndtset 10
pawecutdg: 12.
pawecutdg+ 3.

# Cell and atoms definition
acell 3*3.567 angstrom
rprim 0.0 1/2 1/2
      1/2 0.0 1/2
      1/2 1/2 0.0
ntypat 1
natom 2
typat 2*1
xred 0.0 0.0 0.0
     1/4 1/4 1/4
znucl 6
nband 6

# Basis definitions
ecut 12.
ecutsm 0.5

# SCF cycle parameters
tolvrs 1.0d-10
nstep 20

# K-points and sym
nsym 0
occopt 1
ngkpt 6 6 6
nshiftk 4
shiftk 0.5 0.5 0.5
       0.5 0.0 0.0
       0.0 0.5 0.0
       0.0 0.0 0.5

# I/O parameters
prtwf 1 prtden 0 prteig 0
getwfk -1

## After modifying the following section, one might need to
## regenerate the pickle database with runtests.py -r
#%%<BEGIN TEST_INFO>
#%% [setup]
#%% executable = abinit
#%% [files]
#%% files_to_test =
#%%   tpaw1_3.out, tolnlines= 0, tolabs= 0.000e+00, tolrel= 0.000e+00, fld_options = -easy
#%% psp_files = 6c.lda.atompaw
#%% [paral_info]
#%% max_nprocs = 4
#%% [extra_info]
#%% authors = M. Torrent
#%% keywords = PAW
#%% description =
#%%   Input for PAW1 tutorial
#%%   Diamond at experimental volume
#%%<END TEST_INFO>
```
Launch ABINIT with these files; you should obtain the following values (file tpaw1_3.out):

```
    etotal1   -1.1524629595E+01
    etotal2   -1.1524595840E+01
    etotal3   -1.1524585370E+01
    etotal4   -1.1524580630E+01
    etotal5   -1.1524584720E+01
    etotal6   -1.1524583573E+01
    etotal7   -1.1524582786E+01
    etotal8   -1.1524582633E+01
    etotal9   -1.1524582213E+01
    etotal10  -1.1524582316E+01
```
We see that the variation of the energy with respect to the pawecutdg parameter is well below the 1 mHa level. In principle, choosing pawecutdg = 12 Ha would be sufficient to obtain an energy change lower than 1 mHa. In practice, it is better to keep a safety margin. Here, for pawecutdg = 24 Ha (5^{th} dataset), the energy change is lower than 0.001 mHa: this choice will be more than enough.
Note
Note the steps in the convergence. They are due to sudden changes of the grid size (see the output values of ngfftdg), which do not occur at each increase of pawecutdg. To avoid trouble due to these steps, it is better to choose a value of pawecutdg slightly higher than strictly needed.
The convergence of the compensation charge has a similar behaviour; you can check it in the output file, just after the SCF cycle, by looking at:

```
 PAW TEST:
 ==== Compensation charge inside spheres ============
 The following values must be close...
 Compensation charge over spherical meshes = 0.409392121335747
 Compensation charge over fine fft grid = 0.409392418241149
```
The two values of the integrated compensation charge density must be close to each other. Note that, for numerical reasons, they cannot be exactly the same (integration over a radial grid does not use the same scheme as integration over an FFT grid).
Additional test: we now want to check the convergence with respect to ecut for a fixed value pawecutdg = 24 Ha.
Let's modify the tpaw1_2.in file, setting pawecutdg to 24 Ha, and launch ABINIT again.
You should obtain the values:

```
    etotal1  -1.1474831477E+01
    etotal2  -1.1518678975E+01
    etotal3  -1.1524584720E+01
    etotal4  -1.1525552267E+01
    etotal5  -1.1525745330E+01
    etotal6  -1.1525868591E+01
    etotal7  -1.1525930368E+01
    etotal8  -1.1525950904E+01
    etotal9  -1.1525958319E+01
```
You can check again that:
- The etotal convergence (at the 1 mHartree level) is achieved for 12 <= ecut <= 14 Hartree;
- The etotal convergence (at the 0.1 mHartree level) is achieved for 16 <= ecut <= 18 Hartree.
Note
Associated with the input variable pawecutdg is the input variable ngfftdg: it defines the size of the FFT grid associated with pawecutdg. Since pawecutdg is only a convenient way to define the FFT grid for the density, you can instead tune ngfftdg directly to set the size of that grid.
Note
Although pawecutdg should always be checked, common practice is to set it larger than ecut and keep it constant during all calculations. Increasing pawecutdg slightly changes the CPU time, but above all it is memory-consuming. Note that, if ecut is already high, there is no need for a high pawecutdg.
Important
When testing the ecut convergence, pawecutdg has to remain constant to obtain consistent results.
5. Plotting PAW contributions to the Density of States (DOS)
We now use the input file tpaw1_4.in and the associated tpaw1_4.files file. ABINIT is used to compute the Density Of States (DOS) (see the prtdos keyword in the input file). Also note that more k-points are used, in order to increase the accuracy of the DOS. ecut is set to 12 Ha, while pawecutdg is 24 Ha.
```
# =======================================
# Input for PAW1 tutorial
#   Diamond at experimental volume
# =======================================

# Output of the total DOS
prtdos 1

# Output of the projected DOS (with PAW contribs)
# prtdos 3  pawprtdos 1
# natsph 1  iatsph 1  ratsph 1.5

# Cell and atoms definition
acell 3*3.567 angstrom
rprim 0.0 1/2 1/2
      1/2 0.0 1/2
      1/2 1/2 0.0
ntypat 1
natom 2
typat 2*1
xred 0.0 0.0 0.0
     1/4 1/4 1/4
znucl 6
nband 6

# Basis definitions
ecut 12.
ecutsm 0.5
pawecutdg 24.

# SCF cycle parameters
tolwfr 1.0d-12
nbdbuf 2   # nbdbuf: do not take care of empty bands
nstep 10

# K-points and sym
nsym 0
occopt 7
tsmear 0.005
ngkpt 10 10 10
nshiftk 4
shiftk 0.5 0.5 0.5
       0.5 0.0 0.0
       0.0 0.5 0.0
       0.0 0.0 0.5

# I/O parameters
prtwf 0 prtden 0 prteig 0

## After modifying the following section, one might need to
## regenerate the pickle database with runtests.py -r
#%%<BEGIN TEST_INFO>
#%% [setup]
#%% executable = abinit
#%% [files]
#%% files_to_test =
#%%   tpaw1_4.out, tolnlines= 0, tolabs= 0.000e+00, tolrel= 0.000e+00, fld_options = -easy
#%% psp_files = 6c.lda.atompaw
#%% [paral_info]
#%% max_nprocs = 4
#%% [extra_info]
#%% authors = M. Torrent
#%% keywords = PAW
#%% description =
#%%   Input for PAW1 tutorial
#%%   Diamond at experimental volume
#%%<END TEST_INFO>
```
Launch the code with these files:

```shell
abinit < tpaw1_4.files > log 2> err &
```

You should obtain the tpaw1_4.out file and the DOS file tpaw1_4o_DOS.
You can plot the DOS if you want; for that purpose, use a graphical tool and plot column 2 (the DOS) as a function of column 1 (the energy). If you use the xmgrace tool, launch:

```shell
xmgrace -block tpaw1_4o_DOS -bxy 1:2
```
At this stage, you have a usual Density of State plot; nothing specific to PAW.
Now, edit the tpaw1_4.in file, comment out the “prtdos 1” line, and uncomment (or add):

```
prtdos 3  pawprtdos 1
natsph 1  iatsph 1  ratsph 1.5
```
prtdos 3 now requests the output of the projected DOS; natsph 1 iatsph 1 ratsph 1.5 selects the first carbon atom as the center of projection and sets the radius of the projection area to 1.5 atomic units (this is exactly the radius of the PAW augmentation regions: generally the best choice). pawprtdos 1 is specific to PAW: with this option, ABINIT computes all the contributions to the projected DOS.
Let us recall that, within PAW, the total projected DOS has 3 contributions:
- The smooth plane-wave (PW) contribution (from |\tPsi\rangle);
- The all-electron on-site (AE) contribution (from \langle\tprj^a_i|\tPsi\rangle |\phi_i^a\rangle);
- The pseudo on-site (PS) contribution (from \langle\tprj^a_i|\tPsi\rangle |\tphi_i^a\rangle).
Launch ABINIT again (with the modified input file). You get a new DOS file, named tpaw1_4o_DOS_AT0001. You can edit it and look inside: it contains the 3 PAW contributions (mentioned above) for each angular momentum. In the diamond case, only the l = 0 and l = 1 momenta are to be considered.
Now, plot the 7^{th}, 12^{th} and 17^{th} columns as functions of the 1^{st} one; this plots the 3 PAW contributions for l = 0 (the total DOS is the sum of the three contributions). If you use the xmgrace tool, launch:

```shell
xmgrace -block tpaw1_4o_DOS_AT0001 -bxy 1:7 -bxy 1:12 -bxy 1:17
```
You should get this:
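If you prefer a scriptable alternative to xmgrace, the same columns can be read with numpy and plotted with matplotlib. A minimal sketch (the helper dos_columns is ours; the column layout is the one described above for tpaw1_4o_DOS_AT0001):

```python
import numpy as np

def dos_columns(path, cols):
    """Return the requested 1-based columns of a whitespace-separated
    DOS file, skipping '#' comment lines."""
    data = np.loadtxt(path, comments="#")
    return [data[:, c - 1] for c in cols]

# Usage sketch (uncomment to plot the three l=0 PAW contributions):
# energy, pw, ae, ps = dos_columns("tpaw1_4o_DOS_AT0001", [1, 7, 12, 17])
# import matplotlib.pyplot as plt
# for y, label in [(pw, "smooth PW"), (ae, "AE on-site"), (ps, "PS on-site")]:
#     plt.plot(energy, y, label=label)
# plt.legend(); plt.xlabel("Energy"); plt.ylabel("DOS (l=0)"); plt.show()
```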
As you can see, the smooth PW contribution and the PS on-site contribution are close. If the partial wave basis were complete, they would exactly cancel each other, and the DOS could be approximated by the AE on-site part taken alone.
That is exactly the purpose of the pawprtdos = 2 option: in that case, only the AE on-site contribution is computed and given as a good approximation of the total projected DOS. The main advantage of this option is that the computing time is greatly reduced (the DOS is computed almost instantaneously).
However, as you will see in the next section, this approximation is only valid when:
- The \tphi_i basis is complete enough;
- The electronic density is mainly contained in the sphere defined by ratsph.
6. Testing the completeness of the PAW partial wave basis
In the previous section we used a “standard” PAW dataset, with 2 partial waves per angular momentum. This is generally the best compromise between the completeness of the partial wave basis and the efficiency of the PAW dataset (the more partial waves, the more CPU time ABINIT needs).
Let’s have a look at the $ABI_PSPDIR/6c.lda.atompaw file.
The 6^{th} line indicates the number of partial waves and their angular momentum l. In the present file, 0 0 1 1 means two l = 0 partial waves and two l = 1 partial waves.
Now, let’s open the $ABI_PSPDIR/6c.lda.test2proj.atompaw and $ABI_PSPDIR/6c.lda.test6proj.atompaw files. In the first file, only one partial wave per l is present; in the second one, 3 partial waves per l are present. The completeness of the partial wave basis thus increases as you go from 6c.lda.test2proj.atompaw to 6c.lda.atompaw to 6c.lda.test6proj.atompaw.
Now, let us plot the DOS for the two new PAW datasets.
- Save the existing tpaw1_4o_DOS_AT0001 file, naming it e.g. tpaw1_4o_4proj_DOS_AT0001.
- Open the tpaw1_4.files file and modify it in order to use the 6c.lda.test2proj.atompaw PAW dataset.
- Launch ABINIT.
- Save the new tpaw1_4o_DOS_AT0001 file, naming it e.g. tpaw1_4o_2proj_DOS_AT0001.
- Open the tpaw1_4.files file and modify it in order to use the 6c.lda.test6proj.atompaw PAW dataset.
- Launch ABINIT again.
- Save the new tpaw1_4o_DOS_AT0001 file, naming it e.g. tpaw1_4o_6proj_DOS_AT0001.
Then, plot the contributions to the projected DOS for the two new DOS files. You should get:
Adding the DOS obtained in the previous section to the comparison, you immediately see that how closely the plane-wave (PW) contribution and the PS on-site contribution superpose depends on the completeness of the partial wave basis!
Now, you can have a look at the 3 output files (one for each PAW dataset), for instance in a comparison tool. A way to estimate the completeness of the partial wave basis is to compare derivatives of the total energy; if you look at the stress tensor:

```
For the 2 partial-wave basis:
  1.0866668849E-03  1.0866668849E-03  1.0866668849E-03  0.  0.  0.
For the 4 partial-wave basis:
  4.1504385879E-04  4.1504385879E-04  4.1504385879E-04  0.  0.  0.
For the 6 partial-wave basis:
  4.1469803037E-04  4.1469803037E-04  4.1469803037E-04  0.  0.  0.
```
The 2 partial-wave basis is clearly not complete, while the 4 partial-wave basis already gives accurate results. Such a test is useful to estimate the precision we can expect on the stress tensor (at least the part due to the completeness of the partial wave basis).
You can compare other results in the 3 output files: total energy, eigenvalues, occupations…
Note: if you want to learn how to generate PAW datasets with different partial wave bases, you might follow the tutorial on generating PAW datasets (PAW2).
7. Checking the validity of PAW results
The validity of our computation has to be checked by comparison with known results on known structures. In the case of diamond, lots of computed and experimental results exist.
Very important remark: the validity of PAW calculations (completeness of the plane wave basis and of the partial wave basis) should always be checked by comparison with all-electron calculations or with other existing PAW results; it should not be done by comparison with experimental results. As the PAW method has the same level of accuracy as all-electron methods, the results should be very close.
In the case of diamond, all-electron results can be found e.g. in [Holzwarth1997]. All-electron equilibrium parameters for diamond (within the Local Density Approximation), obtained with the FP-LAPW WIEN2k code, are:
a0 = 3.54 angstrom B = 470 GPa
Experiments give:
a0 = 3.56 angstrom B = 443 GPa
Let’s test this with ABINIT. We now use the input file tpaw1_5.in and the associated tpaw1_5.files file, and run ABINIT to compute the values of etotal for several cell parameters around 3.54 angstrom, using the standard PAW dataset.
```
# =======================================
# Input for PAW1 tutorial
#   Diamond: etotal vs acell curve around equilibrium
# =======================================

# Datasets: etotal vs acell
ndtset 7
acell: 3*3.52 angstrom
acell+ 3*0.005 angstrom

# Cell and atoms definition
acell 3*3.567 angstrom
rprim 0.0 1/2 1/2
      1/2 0.0 1/2
      1/2 1/2 0.0
ntypat 1
natom 2
typat 2*1
xred 0.0 0.0 0.0
     1/4 1/4 1/4
znucl 6
nband 6

# Basis definitions
ecut 12.
ecutsm 0.5
pawecutdg 24.

# SCF cycle parameters
tolvrs 1.0d-10
nstep 10

# K-points and sym
nsym 0
occopt 1
ngkpt 10 10 10
nshiftk 4
shiftk 0.5 0.5 0.5
       0.5 0.0 0.0
       0.0 0.5 0.0
       0.0 0.0 0.5

# I/O parameters
prtwf 1 prtden 0 prteig 0
getwfk -1

## After modifying the following section, one might need to
## regenerate the pickle database with runtests.py -r
#%%<BEGIN TEST_INFO>
#%% [setup]
#%% executable = abinit
#%% [files]
#%% files_to_test =
#%%   tpaw1_5.out, tolnlines= 0, tolabs= 0.000e+00, tolrel= 0.000e+00, fld_options = -easy
#%% psp_files = 6c.lda.atompaw
#%% [paral_info]
#%% max_nprocs = 4
#%% [extra_info]
#%% authors = M. Torrent
#%% keywords = PAW
#%% description =
#%%   Input for PAW1 tutorial
#%%   Diamond: etotal vs acell curve around equilibrium
#%%<END TEST_INFO>
```
```shell
abinit < tpaw1_5.files > log 2> err &
```
From the tpaw1_5.out file, you can extract the 7 values of acell and the 7 values of etotal, then put them into a file and plot it with a graphical tool. You should get:
From this curve, you can extract the values of a_0 and B (with the method of your choice, for example a Birch-Murnaghan equation-of-state fit). You get:
a0 = 3.535 angstrom B = 465 GPa
These results are in excellent agreement with the FP-LAPW ones!
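The extraction of a_0 and B can also be scripted. Here is a minimal sketch using a simple quadratic fit of E(V) (adequate only close to equilibrium; a full Birch-Murnaghan fit is more robust over wide volume ranges). The function name fit_a0_B is ours; it assumes the two-atom fcc primitive cell of this tutorial, for which V = a^3/4:

```python
import numpy as np

HA_PER_ANG3_TO_GPA = 4359.74  # 1 hartree/angstrom^3 expressed in GPa

def fit_a0_B(acell, etotal):
    """Fit E(V) to a parabola; return (a0 in angstrom, B in GPa).

    V = a^3/4 for the 2-atom fcc primitive cell; the bulk modulus is
    B = V0 * d2E/dV2 evaluated at the fitted minimum V0.
    """
    volume = np.asarray(acell, dtype=float) ** 3 / 4.0
    c2, c1, _c0 = np.polyfit(volume, np.asarray(etotal, dtype=float), 2)
    v0 = -c1 / (2.0 * c2)                       # volume at the energy minimum
    bulk = 2.0 * c2 * v0 * HA_PER_ANG3_TO_GPA   # V0 * E''(V0), converted to GPa
    return (4.0 * v0) ** (1.0 / 3.0), bulk
```

Feeding the seven (acell, etotal) pairs extracted from tpaw1_5.out to this function should give values close to the ones quoted above.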
8. Additional comments about PAW in ABINIT
8.a. Overlap of PAW augmentation regions
In principle, the PAW formalism is only valid for non-overlapping augmentation spherical regions. But, in usual cases, a small overlap between spheres is acceptable. By default, ABINIT checks that the distances between atoms are large enough to avoid overlap; a “small” volume overlap of 5% is accepted by default. This value can be tuned with the pawovlp input keyword. The overlap check can even be bypassed with a negative value, pawovlp -1 (not recommended!).
Warning
While a small overlap can be acceptable for the augmentation regions, an overlap of the compensation charge densities has to be avoided. The compensation charge density is defined by a radius (named r_{shape} in the PAW dataset) and an analytical shape function. The overlap related to the compensation charge radius is checked by ABINIT, and a WARNING is printed if needed.
Also note that you can control the compensation charge radius and shape function when generating the PAW dataset (see the tutorial on generating PAW datasets (PAW2)).
8.b. Mixing scheme for the Self-Consistent cycle; decomposition of the total energy
The use of an efficient mixing scheme in the self-consistent loop is a crucial point to minimize the number of steps needed to achieve convergence. This mixing can be done on the potential or on the density. By default, in a Norm-Conserving PseudoPotential calculation, the mixing is done on the potential; but, for technical reasons, this choice is not optimal for PAW calculations. Thus, by default, the mixing is done on the density when PAW is activated. The mixing scheme can be controlled with the iscf variable (see the different options of this input variable). To compare both schemes, you can edit the tpaw1_1.in file, try iscf = 7 or 17, and compare the behaviour of the SCF cycle in both cases; as you can see, the final total energy is the same, but the way to reach it is completely different.
Now, have a look at the end of the file and focus on the Components of total free energy: the total energy is decomposed according to two different schemes (direct and double-counting). At very high convergence of the SCF cycle, the potential/density residual is very small and these two values should be equal. But it has been observed that the converged value is reached more rapidly by the direct energy when the mixing is done on the potential, and by the double-counting energy when the mixing is done on the density. Thus, the default in the output file is to print the direct energy when the mixing is on the potential, and the double-counting energy when the mixing is on the density.
Also note that the PAW partial wave occupancies \rho_{ij} are mixed during the SC cycle as well; by default, they are mixed in the same way as the density.
8.c. PAW+U for correlated materials
If the system under study contains strongly correlated electrons, the DFT+U
method can be useful. It is controlled by the usepawu, lpawu,
upawu and jpawu input keywords.
Note that the formalism implemented in ABINIT is approximate, i.e. it is only valid if:
- The \tphi^a_i basis is complete enough;
- The electronic density is mainly contained in the PAW sphere.
The approximation made here is the same as the one explained in section 5 of this tutorial: considering that the smooth PW contribution and the PS on-site contribution nearly cancel each other, only the AE on-site contribution is computed; this is indeed a very good approximation.
Converging the self-consistent cycle, or ensuring that the global minimum is reached, is sometimes difficult with PAW+U. Using usedmatpu and dmatpawu can help.
8.d. Printing volume for PAW
If you want a more detailed output concerning the PAW computation, you can use the pawprtvol input keyword. It is particularly useful to print details about the pseudopotential strength D_{ij} or the partial wave occupancies \rho_{ij}.
8.e. Additional PAW input variables
Looking at the PAW variable set, you can find the description of additional input keywords related to PAW. They are to be used when tuning a computation, in order to gain accuracy or to save CPU time.
Warning
In a standard computation, these variables should not be modified!
Variables that can be used to gain accuracy (in ascending order of importance):
- pawxcdev: controls the accuracy of the exchange-correlation on-site potentials (try pawxcdev = 2 to increase accuracy).
- mqgriddg: controls the accuracy of the spline fits used to transfer densities/potentials from the FFT grid to the spherical grid.
- pawnzlm: controls how the symmetries are applied to compute the moments of the spherical densities.

Variables that can be used to save memory (in ascending order of importance):
- pawstgylm: controls the storage of the spherical harmonics around the atoms.
- pawmixdg: controls the grid used to mix the potential/density during the SCF cycles.
- pawlcutd: controls the number of angular momenta taken into account in the on-site densities.
- pawlmix: controls the number of \rho_{ij} components to be mixed during the SCF cycle.

Variables that can be used to save CPU time (in ascending order of importance):
- pawnhatxc: controls the numerical treatment of the gradients in case of GGA.
- pawstgylm: controls the storage of the spherical harmonics around the atoms.
- pawlcutd: controls the number of angular momenta taken into account in the on-site densities.
- pawlmix: controls the number of \rho_{ij} components to be mixed during the SCF cycle.
- bxctmindg: can be used to decrease the size of the fine FFT grid for a given value of pawecutdg.
Note
The above list is not exhaustive. Several other keywords can be used to tune ABINIT PAW calculations.