Author Topic: Problem running a parallel (MPI) ridft calculation with a hybrid meta-GGA functional

skundu07

  • Newbie
  • *
  • Posts: 7
  • Karma: +0/-0
Hello - I would like to perform a ridft run with the M06-2X functional.

I am using the following keywords in my control file (attached):
$dft
   functional xcfun set-mgga
   functional xcfun m06x2x 1.0
   functional xcfun set-hybrid 0.2
   grid m3

I am running the job both in serial (one core) and in parallel (28 cores).

The serial (one-core) calculation runs successfully.

The MPI parallel run (28 cores) fails with the following error:

========================
 internal module stack:
------------------------
    ridft
    riscf
========================

 fatal error for MPI rank   25

  abnormal termination
 ridft ended abnormally
 ridft ended abnormally

Abort(-16) on node 25 (rank 25 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -16) - process 25
 fatal error for MPI rank   12
[cli_25]: readline failed
program stopped.
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source             
ridft_mpi          0000000003839094  Unknown               Unknown  Unknown
libpthread-2.17.s  00002AAAB21DE370  Unknown               Unknown  Unknown
libmkl_avx2.so     00002AAAC08605AD  mkl_blas_avx2_dge     Unknown  Unknown
forrtl: error (78): process killed (SIGTERM)
(the same traceback is repeated for the remaining ranks)

-------------------------------------------------------

CONTROL FILE:

$title
$symmetry c1
$user-defined bonds    file=coord
$coord    file=coord
$optimize
 internal   off
 redundant  off
 cartesian  on
 global     off
 basis      off
$atoms
si 1-5                                                                         \
   basis =si def2-TZVPD                                                        \
   jbas  =si universal
o  6-9                                                                         \
   basis =o def2-TZVPD                                                         \
   jbas  =o universal
h  10-21                                                                       \
   basis =h def2-TZVPD                                                         \
   jbas  =h universal
$basis    file=basis
$scfmo   file=mos
$closed shells
 a       1-57                                   ( 2 )
$scfiterlimit       6000
$thize     0.10000000E-04
$thime        5
$scfdamp   start=0.300  step=0.050  min=0.100
$scfintunit
 unit=30       size=0        file=twoint
$scfdiis
$maxcor    500 MiB  per_core
$scforbitalshift  automatic=.30
$drvopt
   cartesian  on
   basis      off
   global     off
   hessian    on
   dipole     on
   nuclear polarizability
$interconversion  off
   qconv=1.d-7
   maxiter=25
$coordinateupdate
   dqmax=0.3
   interpolate  on
   statistics    5
$forceupdate
   ahlrichs numgeo=0  mingeo=3 maxgeo=4 modus=<g|dq> dynamic fail=0.3
   threig=0.005  reseig=0.005  thrbig=3.0  scale=1.00  damping=0.0
$forceinit on
   diag=default
$energy    file=energy
$grad    file=gradient
$forceapprox    file=forceapprox
$dft
   functional xcfun set-mgga
   functional xcfun m06x2x 1.0
   functional xcfun set-hybrid 0.2
   grid m3
$scfconv   7
$jbas    file=auxbasis
$ricore      0
$rij
$rundimensions
   natoms=21
   nbf(CAO)=537
   nbf(AO)=483
$last step     define
$end


How do I resolve the parallelization issue for the ridft calculation with a hybrid meta-GGA functional?

Thanks,
Subrata

uwe

  • Global Moderator
  • Sr. Member
  • *****
  • Posts: 471
  • Karma: +0/-0
Hello,

Self-defined combinations of functionals are currently not supported in the MPI version. Do you run the job on several nodes, or on one machine only?

Regards,

uwe

skundu07

Thank you, Dr. Uwe, for your kind reply. I am running my job on a single node with 16 cores. I would like to use the M06-2X hybrid functional, but it is not available in libxc.

Do you have any suggestions?

Thanks
Subrata

uwe

Hello,

Yes, if you use Turbomole 7.4.1, just set:

export PARA_ARCH=SMP
export TM_PAR_OMP=yes
source $TURBODIR/Config_turbo_env

This invokes the OpenMP version of ridft, which can use arbitrary functionals and combinations of functionals from XCFun or libxc in the parallel SMP version.

For older versions of Turbomole, set

export TM_PAR_FORK=yes

instead.
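Putting the steps above together, a complete single-node run might look like the sketch below. This assumes a typical Turbomole installation; the `TURBODIR` path and the thread count are placeholders you must adjust, and `PARNODES` is the usual Turbomole variable for the number of parallel workers.

```shell
# Sketch of a single-node SMP/OpenMP ridft run (Turbomole 7.4.1).
# TURBODIR and the thread count below are assumptions - adjust for your system.
export TURBODIR=/opt/turbomole         # path to your Turbomole installation
export PARA_ARCH=SMP                   # select the shared-memory parallel binaries
export TM_PAR_OMP=yes                  # use the OpenMP SMP version of ridft
export PARNODES=16                     # number of threads on the node
source $TURBODIR/Config_turbo_env      # puts the SMP binaries first in PATH
cd /path/to/your/calculation           # directory containing the control file
ridft > ridft.out 2>&1                 # run ridft, capturing all output
```

With these settings the same control file that failed under MPI should run in parallel, since the OpenMP binaries support self-defined XCFun/libxc functional combinations.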

Note for other readers: to use the "usual" M06-2X functional, it is sufficient to set:

$dft
   functional m06-2x

This will also work with the default parallel ridft version.

The input

$dft
   functional xcfun set-mgga
   functional xcfun m06x2x 1.0
   functional xcfun set-hybrid 0.2

includes only the exchange part of M06-2X and adds 20% exact (Hartree-Fock) exchange to it. This is not M06-2X: the correlation part is missing, and M06-2X uses 54% HF exchange, not 20%.

I mention this just to avoid any misunderstanding for those who find this post while searching for a way to use M06-2X in Turbomole.

Regards,

Uwe