TURBOMOLE Modules => Jobex: Structure Optimization and Molecular Dynamics => Topic started by: marand on July 07, 2014, 02:08:10 pm

Title: Error in gradient step
Post by: marand on July 07, 2014, 02:08:10 pm
Dear Users and Developers,

I have encountered a strange problem while trying to optimize the geometry of a 27-atom organic molecule at the CC2/aug-cc-pVTZ level of theory. The symmetry group is Cs.

The optimization crashes during the first step, when the ricc2 module is invoked after the initial dscf calculation. The calculation initializes and then fails at this point (an excerpt from job.last follows):

 total memory allocated for calculation of (Q|P)**(-1/2) : 33 MiB

     calculation of (P|Q) ...
     time in lpzwei        cpu:  0.17 sec    wall:  0.17 sec    ratio:  1.0

     calculation of the Cholesky decomposition of (P|Q)**(-1) ...
     time in invmetri      cpu: 23.74 sec    wall:  1.12 sec    ratio: 21.3

   threshold for RMS(d[D]) in SCF was     :  0.10E-06
   integral neglect threshold             :  0.28E-11
   derivative integral neglect threshold  :  0.10E-07

 setting up bound for integral derivative estimation

 increment for numerical differentiation : 0.00050000


     Energy of reference wave function is -1009.4912800250000
     Maximum orbital residual is           0.4573730786200E-04

     Number of symmetry-nonredundant auxiliary basis functions:     2303

     Block lengths for integral files:
        frozen occupied (BOI):        1 MiB
        active occupied (BJI):        1 MiB
        active virtual  (BAI):       17 MiB
        frozen virtual  (BGI):        0 MiB
               general  (BTI):       18 MiB


error in gradient step
I receive no further comment or error message.

I am using redundant internal coordinates and a maximum of 3500 MB of core memory. The calculations run without correlating the core electrons.

I am guessing that some default limit must be exceeded, because the same computations with smaller basis sets (aug-cc-pVDZ, cc-pVTZ) run smoothly. Could you please point me to a possible reason for the crash of my calculations?

My best regards
Marcin Andrzejak
Title: Re: Error in gradient step
Post by: uwe on July 07, 2014, 07:21:25 pm

The 'error in gradient step' message is a bit confusing, since it does not stem from ricc2 itself but from jobex, which noticed an incomplete ricc2 run.

So the question is why ricc2 failed to run. In almost all cases the problem is caused by memory problems or limits:

Some hints are given here:

http://www.turbo-forum.com/index.php/topic,23.0.html

If all that does not help, contact the support!
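As an illustration of the memory setting involved here: the amount of memory ricc2 may use is set with the $maxcor keyword in the control file. A minimal sketch, assuming the classic syntax where the value is given in megabytes (the 3500 below matches the value from the original post; adjust it to your machine, and leave headroom below the physical memory per process):

```
$maxcor 3500
```

Note that $maxcor only bounds ricc2's own allocations; operating-system limits such as the stack size are separate and must be raised in the shell or batch script.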


Title: Re: Error in gradient step
Post by: christof.haettig on July 30, 2014, 11:56:05 am
If ricc2 crashed without printing an error message at the end, it could be that a memory (stack size) limit was exceeded, or it was an MPI parallel calculation and the error occurred in one of the slave processes, which then aborted all the other processes.
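If a stack size limit is the cause, a common workaround is to raise the limit in the shell or batch script before jobex is started. This is only a sketch assuming a bash-like shell; the exact limit policy depends on your system and scheduler:

```shell
# Raise the per-process stack size limit before launching jobex.
# Some systems cap the hard limit, so fall back to a large explicit
# value (in kB) if 'unlimited' is refused.
ulimit -s unlimited 2>/dev/null || ulimit -s 1048576 2>/dev/null || true

# Print the limit now in effect to confirm the change took hold.
ulimit -s
```

In a batch job, these lines belong in the job script itself, since interactive shell settings are usually not inherited by the compute nodes.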

Check if there are any error messages from the operating system or the Fortran runtime environment in the error output of your calculation (or batch job).

For MPI parallel calculations, check whether any of the slave*.output files ends with an error message.
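To locate such a message quickly, the slave files and job.last can be scanned in one pass. A generic sketch: the file names are TURBOMOLE's defaults, and the keyword list is an assumption, not an exhaustive set of possible error markers:

```shell
# Scan all MPI slave outputs and job.last for typical error markers
# from ricc2, MPI, or the Fortran runtime. '|| true' keeps the exit
# status clean when no match (or no such file) is found.
grep -i -n -E "error|abort|killed|signal|severe" slave*.output job.last 2>/dev/null || true
```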

Title: Re: Error in gradient step
Post by: dbk80 on October 24, 2016, 02:57:12 pm

I'm using MPI and see that slave4.output ends in "****  dscf : all done  ****", but the others (1 to 3) seem to be interrupted, i.e. they end like this:

calculating CC ground state density

   a semicanonical algorithm will be used

    density nr.      cpu/min        wall/min    L     R
         1             2.97            2.97    L0     R0 
     time in cc_1den       cpu:  2 min 58 s  wall:  2 min 58 s  ratio:  1.0

What can I learn from this? Where is the problem? Why does my excited-state optimization fail?
Title: Re: Error in gradient step
Post by: christof.haettig on November 17, 2016, 05:31:26 pm
dscf and ricc2 use a slightly different numbering for the output files: ricc2 writes the output for the first slave to the terminal.
So this is normal.

Also, the output you get in slave*.output files 1-3 from ricc2 is normal; it doesn't contain any error message.

Is there an error message from the first slave (terminal output)?  Any error messages from the system or MPI?