Author Topic: Parallel rdgrad with point charges  (Read 2628 times)

hmsenn

  • Newbie
  • *
  • Posts: 8
  • Karma: +0/-0
Parallel rdgrad with point charges
« on: March 06, 2014, 06:50:19 pm »
Dear all

We have run into the problem that SMP/GA-parallel rdgrad (in TM 6.4) is unable to deal with point charges ("Sorry, point charges in new parallel version not yet implemented!"). Using the MPI-parallel version is no help either because it does not work with hybrid functionals ("HF-exchange within rdgrad is not yet implemented in non-GA parallel mode").

Has either (or both) of these limitations in rdgrad been removed in TM 6.5?

Has anyone else met this problem and come up with a solution other than reverting to using non-RI dscf and grad?

Thanks for any answers or suggestions!

Hans Martin

uwe

  • Global Moderator
  • Sr. Member
  • *****
  • Posts: 410
  • Karma: +0/-0
Re: Parallel rdgrad with point charges
« Reply #1 on: March 07, 2014, 09:36:04 am »
Hello Hans Martin,

in version 6.5 you can set

export PARA_ARCH=SMP
export PAR_TM_FORK=yes # this is wrong, see post by mpjohans below, should be TM_PAR_FORK=yes

and the usual PARNODES to set the number of cores to run a parallel ridft/rdgrad job with hybrid functionals, RI and point charges.

Regards,

Uwe
« Last Edit: March 02, 2015, 08:59:26 pm by uwe »

mpjohans

  • Full Member
  • ***
  • Posts: 26
  • Karma: +0/-0
Re: Parallel rdgrad with point charges
« Reply #2 on: March 02, 2015, 05:52:53 pm »
Hello All,

Just got into doing parallel gradients with point charges myself, and came across this thread.

First, the environment variable to set is TM_PAR_FORK, not PAR_TM_FORK. This doesn't really help, however, because the point charge gradients are not computed in parallel anyway, just as with the normal SMP/grad. The MPI version does seem to compute point charge gradients in parallel, though.
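Spelled out as one block, the corrected setup would be something like the following (the PARNODES value is only an example, matching the 24-core node used in the benchmark below; adjust it to your machine):

```shell
# Corrected SMP setup for TM 6.5+ -- note TM_PAR_FORK, not PAR_TM_FORK
export PARA_ARCH=SMP      # use the shared-memory parallel binaries
export TM_PAR_FORK=yes    # fork-based SMP mode (hybrid functionals + RI + point charges)
export PARNODES=24        # number of cores; 24 is just an example value
```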

Since I just benchmarked this, here's a small table on the performance of the different parallel binaries with point charges. The example system has 126 atoms, 10353 point charges, and 1111 basis functions; the control file includes:

$point_charges file=point_charges
$point_charge_gradients file=pc_gradient
$drvopt
   point charges

Run on a single 24 core node using the TM 6.6 binaries:

Code:
               SCF   gradient  total
SMP            3:25   7:43     11:08
MPI            2:11   1:14      3:25
SMP, RI        1:28   fail
SMP, RI, FORK  2:55  10:00     12:55

So the ideal solution would of course be to get parallel point-charge gradients into the normal SMP code. While waiting for that, MPI seems to be the way to go when there are plenty of point charges.
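To put a number on that, here is a small bash sketch (not part of the original benchmark, and the helper name is made up) that converts the mm:ss totals from the table into seconds and computes the overall MPI-vs-SMP speedup:

```shell
#!/usr/bin/env bash
# Hypothetical helper: convert an mm:ss timing from the table into seconds.
to_sec() {
  local m=${1%:*} s=${1#*:}
  # 10# forces base 10 so a value like "08" is not read as octal
  echo $(( 10#$m * 60 + 10#$s ))
}

smp_total=$(to_sec 11:08)   # SMP total from the table
mpi_total=$(to_sec 3:25)    # MPI total from the table

echo "SMP total: ${smp_total} s, MPI total: ${mpi_total} s"
sp=$(( smp_total * 100 / mpi_total ))          # speedup in hundredths
printf 'MPI speedup over SMP: %d.%02dx\n' $(( sp / 100 )) $(( sp % 100 ))
```

For this system, MPI finishes the whole job roughly 3.25 times faster than plain SMP, essentially because the point-charge gradient part stays serial in the SMP binaries.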