Recent Posts

Pages: [1] 2 3 ... 10
Ridft, Rdgrad, Dscf, Grad / Re: SCF Iterations in TURBOMOLE 7.2
« Last post by saleheen_noman on May 04, 2018, 06:56:01 pm »
Dear Dr. Uwe,

I looked into /proc/sys/fs/file-max, which says the limit is 562454, and file-nr says 6400 files are currently open. Also, /dev/shm is not full of old files, and I am using the whole node myself. I don't know why it still reports the same error. Just to clarify: the problem seems to vanish with one combination of scforbitalshift and scfdamp, but with a different combination it does not perform more than 1861 iterations, and I don't understand why. Does the number of open files exceed 562454 when a calculation reaches 1861 iteration steps?
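One detail worth checking here: /proc/sys/fs/file-max is the system-wide ceiling, but the limit an individual process runs into is usually the much smaller per-process descriptor limit from `ulimit -n` (often only 1024 or 4096 by default). A minimal sketch of how to inspect both on a Linux node:

```shell
# Sketch: compare the system-wide and per-process open-file limits.
# The per-process soft limit is what a single TURBOMOLE process can
# actually hit, regardless of how large file-max is.

echo "system-wide maximum open files:"
cat /proc/sys/fs/file-max

echo "currently allocated descriptors (allocated / unused / max):"
cat /proc/sys/fs/file-nr

echo "per-process soft limit for this shell:"
ulimit -Sn

echo "per-process hard limit for this shell:"
ulimit -Hn

# Leftover POSIX semaphores from crashed parallel runs also live here:
ls -l /dev/shm | head
```

If the soft limit is small, raising it in the job script (`ulimit -n 65536`, up to the hard limit) before starting the calculation may be worth a try.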


Ridft, Rdgrad, Dscf, Grad / Re: SCF Iterations in TURBOMOLE 7.2
« Last post by uwe on May 04, 2018, 11:02:52 am »

the error message

pri_inittask  sem_open  n failed 23

indicates that you might have a problem with I/O: not a hardware problem, but probably a too small limit on the number of open files, or a lot of (old) files left over in /dev/shm?

Are there other jobs (also from other users) running on the same node?


Ridft, Rdgrad, Dscf, Grad / SCF Iterations in TURBOMOLE 7.2
« Last post by saleheen_noman on May 04, 2018, 04:33:14 am »

I was trying to perform some ridft calculations using TURBOMOLE 7.2. I used to run these calculations with the 6.x versions and had nothing to complain about. However, with version 7.2, in cases where convergence is slow, my calculations get aborted after 1861 SCF iterations even when the limit is set to a higher number (e.g. 3000), and the following error message is shown:

Code: [Select]
pri_inittask  sem_open  n failed 23
pri_inittask  sem_open  n failed 23
pri_inittask  sem_open  n failed 23
pri_inittask  sem_open  n failed 23
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source
Unknown            0000000002951AEA  Unknown               Unknown  Unknown
Unknown            00002B9B7CBE47E0  Unknown               Unknown  Unknown
libpthread-2.12.s  00002B9B7CBE46AB  raise                 Unknown  Unknown
                   00002B9B7C4DB577  HPMPI_Abort           Unknown  Unknown
Unknown            000000000088E910  Unknown               Unknown  Unknown
Unknown            000000000088AD38  Unknown               Unknown  Unknown
Unknown            000000000087A75F  Unknown               Unknown  Unknown
Unknown            0000000000E5BA72  Unknown               Unknown  Unknown
Unknown            00000000004A2865  Unknown               Unknown  Unknown
Unknown            0000000000440D75  Unknown               Unknown  Unknown
Unknown            0000000000430F1E  Unknown               Unknown  Unknown
                   00002B9B7CE11D1D  __libc_start_main     Unknown  Unknown
Unknown            0000000000430E29  Unknown               Unknown  Unknown

The SIGTERM part of the error message repeats a few more times, and the job always crashes after 1861 steps, no matter what number I set for the SCF iteration limit. I tried to play with the scforbitalshift and scfdamp values, but that didn't help. Is this a bug in the code? Has anyone faced similar problems?

I'm adding my control file here in case anyone wants to have a look at it.

Code: [Select]
$operating system unix
$symmetry c1
$user-defined bonds    file=coord
$coord    file=coord
 internal   off
 redundant  off
 cartesian  on
 global     off
 basis      off
pt 1-51                                                                        \
   basis =pt ecp-60-mwb-SVP                                                    \
   ecp   =pt ecp-60-mwb                                                        \
   jbas  =pt ecp-60-mwb-SVP
c  52-53                                                                       \
   basis =c def2-SVP                                                           \
   jbas  =c def2-SVP
o  54-55                                                                       \
   basis =o def2-SVP                                                           \
   jbas  =o def2-SVP
h  56-61                                                                       \
   basis =h def2-SVP                                                           \
   jbas  =h def2-SVP
$basis    file=basis
$ecp    file=basis
$uhfmo_alpha   file=alpha
$uhfmo_beta   file=beta
$alpha shells
 a       1-484                                  ( 1 )
$beta shells
 a       1-468                                  ( 1 )
$scfiterlimit       3000
$thize     0.10000000E-04
$thime        5
 unit=30       size=0        file=twoint
$maxcor    500 MiB  per_core
   cartesian  on
   basis      off
   global     off
   hessian    on
   dipole     on
   nuclear polarizability
$interconversion  off
   interpolate  on
   statistics    5
   ahlrichs numgeo=0  mingeo=3 maxgeo=4 modus=<g|dq> dynamic fail=0.3
   threig=0.005  reseig=0.005  thrbig=3.0  scale=1.00  damping=0.0
$forceinit on
$energy    file=energy
$grad    file=gradient
$forceapprox    file=forceapprox
   functional pbe
   gridsize   m4
$scfconv   7
$scfdamp   start=1.500  step=0.050  min=0.100
$scforbitalshift  closedshell=0.5  automatic! 1.0
$ricore      500
$jbas    file=auxbasis
$last step     define

Thank you so much!

TMOLE Script / Tmole mixed basis set
« Last post by kiranb on April 20, 2018, 09:09:19 pm »
How can I use a mixed basis set with tmole? 
Ricc2 / Geometry optimization of the lowest triplet state
« Last post by lasermichel on April 20, 2018, 01:19:24 pm »

I'm trying to optimize the lowest triplet state of 3-cyanoindole using CC2 (with spin-component scaling) and the cc-pVTZ basis set. The control file reads:

  geoopt model=cc2       state=(a{3} 1)
  scs   cos= 1.20000   css= 0.33333
irrep=a multiplicity=3 nexc=2
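For reference, a sketch of how these lines would typically be arranged into complete data groups in control (the grouping under $ricc2 and $excitations is my reading of the fragments above, not the poster's actual file, so please check it against your own input):

```
$ricc2
  geoopt model=cc2       state=(a{3} 1)
  scs   cos= 1.20000   css= 0.33333
$excitations
  irrep=a multiplicity=3 nexc=2
```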

There is an error in the gradient step: cc_parse_states> inconsistency in state input!

Also, the energies of the lowest two triplets given in the first ricc2 run are both negative! What is going wrong here?

Connected to that problem: what I'm really interested in is the optimization of the second excited singlet state. While the first excited singlet was no problem, I always run into a conical intersection with the first state when trying to optimize the second one. For other indoles I have a good guess about the structure of the second state, so that I can avoid the crossing by choosing a good starting geometry. That does not work here. A colleague advised me that the structure of this second singlet state should be similar to that of the lowest triplet, so I am trying to optimize the triplet to get a better idea of the second singlet structure. Any other ideas on how to avoid running into the CI?

Thanks, Michael

Thank you for your answer.

Just to make sure I understand your answer correctly:

Including COSMO in an escf calculation only makes sense for an excitation energy calculation (e.g. $scfinstab rpas), whereas for a stability analysis I should take the MOs obtained from a COSMO SCF calculation (e.g. with dscf) and perform a "vacuum" escf calculation on these orbitals by removing the $cosmo keyword.
The non-real instabilities I obtained are therefore due to using the stability analysis in the wrong way.

Thank you in advance and kind regards,
Treatment of Solvation Effects with COSMO / Re: Non-real instabilities when using COSMO
« Last post by uwe on March 29, 2018, 11:50:27 am »

it is a bit unfortunate that escf does not print a warning in such cases, but an instability calculation is a ground-state property, and COSMO is implemented for "vertical excitations and polarizabilities for TDDFT, TDA and RPA" (to cite the documentation).

If you want to calculate ground-state properties with escf, please remove the $cosmo keyword before calling escf (see the COSMO section in the manual; this is also what is usually done if the 'fast term' should be neglected).
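For readers following this recipe, a minimal sketch of removing the COSMO group before the escf step. I believe TURBOMOLE's own `kdg cosmo` script does this in one command if it is in your PATH; the awk fallback below simply deletes every line from `$cosmo` up to (but not including) the next `$` keyword, and assumes all COSMO-related groups start with `$cosmo`:

```shell
# Keep a copy so the COSMO SCF step can be rerun later.
cp control control.cosmo.bak

# Delete the $cosmo data group(s): skip from a line starting with
# "$cosmo" until the next line starting with "$".
awk 'BEGIN{skip=0}
     /^\$cosmo/{skip=1; next}
     /^\$/{skip=0}
     skip==0{print}' control.cosmo.bak > control
```

After escf finishes, restoring the backup (`mv control.cosmo.bak control`) brings COSMO back for subsequent steps.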

I guess a closing remark on this issue should be given by an expert for TDDFT (not me)...



Treatment of Solvation Effects with COSMO / Non-real instabilities when using COSMO
« Last post by simon on March 28, 2018, 04:11:33 pm »
Dear all,
I recently discovered that my calculations using the b3-lyp or bh-lyp functional together with COSMO have non-real instabilities ($scfinstab non-real), despite successful singlet excitation calculations.
Using tighter SCF and density convergence criteria and a better grid had no effect. The problem occurs in both Turbomole 6.6 and 7.2.

Further testing revealed that these instabilities do not occur when a vacuum single-point calculation on the same geometry is performed, indicating a problem with COSMO.

Furthermore, I tested this behaviour using CH4 and THF as test molecules and obtained the following results: both molecules also show this instability. In the case of THF I used 1.0, 1.01, 1.1, 2.379, 7.52 and 36.64 as epsilon values. Only the epsilon=1.0 calculation had no instability.

In the case of methane, I tested the effect of the basis set: I used the def2-TZVP, cc-pVTZ, aug-pVTZ and cc-pV6Z basis sets. All tests revealed non-real instabilities. The imaginary/negative eigenvalue seems to increase with increasing basis set size.

These non-real instabilities consist of a large to very large number of relatively small contributions. The coefficients of the contributions decrease, and their number increases, with increasing system size. All contributions involve low-lying occupied orbitals and very high-lying unoccupied orbitals (see the example below for CH4 with the def2-TZVP basis).

I therefore wonder:
Is the non-real instability test compatible with COSMO?
Can the excitation energies, which correspond quite well to experiment, be trusted despite these findings?
What could be the cause of, and a potential solution for, this problem?

Kind regards,

escf.out for CH4(def2-TZVP/COSMO(epsilon=7.52)):
Code: [Select]
1st a eigenpair

 SCF energy hessian eigenvalue:          -164.2170926565011   


 Dominant contributions:

    occ. orbital  energy / eV   virt. orbital  energy / eV   |coeff.|^2*100
       2 a         -18.77          13 a          10.78           11.4
       2 a         -18.77          44 a          81.78            8.4
       5 a         -10.57          52 a         114.32            5.7
       4 a         -10.57          51 a         114.27            5.3
       3 a         -10.57          53 a         114.33            5.1
       5 a         -10.57          13 a          10.78            5.1
       4 a         -10.57          41 a          70.20            4.7
       5 a         -10.57          40 a          70.19            4.2
       3 a         -10.57          39 a          70.18            4.0
       2 a         -18.77           6 a           1.50            3.7
       4 a         -10.57          22 a          25.44            3.7
       5 a         -10.57          21 a          25.43            3.2
       2 a         -18.77          30 a          53.28            2.7
       3 a         -10.57          25 a          40.39            2.4
       5 a         -10.57          23 a          40.36            2.3
       3 a         -10.57          20 a          25.41            2.1
       4 a         -10.57          24 a          40.37            1.9
       3 a         -10.57          21 a          25.43            1.5
       5 a         -10.57          20 a          25.41            1.5
       2 a         -18.77          26 a          40.48            1.3
       2 a         -18.77          17 a          14.47            1.2
       5 a         -10.57          24 a          40.37            1.1
       2 a         -18.77          54 a         121.70            1.1
       5 a         -10.57           6 a           1.50            1.0
       5 a         -10.57          26 a          40.48            0.9
       4 a         -10.57          20 a          25.41            0.9
       3 a         -10.57          22 a          25.44            0.9
       5 a         -10.57          10 a           5.47            0.8
       3 a         -10.57          23 a          40.36            0.8
       4 a         -10.57          25 a          40.39            0.8
       4 a         -10.57          23 a          40.36            0.8

Parallel Runs / Re: problem with parallel mpi excution - jobex
« Last post by bishwanath on March 17, 2018, 04:26:46 pm »
Dear Users

I am having problems with the MPI-based parallel execution of jobex. After completion of the first SCF procedure (dscf module), the ricc2 module reports the following:

|      simplified algorithm for abelian symmetry will be applied      |

  internal module stack:

 too long file name in localfile.
 ricc2 ended abnormally
pbsdsh: task 6 exit status 13
pbsdsh: task 4 exit status 13
pbsdsh: task 14 exit status 13
pbsdsh: task 18 exit status 13
pbsdsh: task 2 exit status 13
pbsdsh: task 1 exit status 13
pbsdsh: task 7 exit status 13
pbsdsh: task 13 exit status 13
pbsdsh: task 5 exit status 255
pbsdsh: task 3 exit status 13
pbsdsh: task 10 exit status 13
pbsdsh: task 19 exit status 13
pbsdsh: task 12 exit status 13
pbsdsh: task 9 exit status 13
pbsdsh: task 17 exit status 13
pbsdsh: task 11 exit status 13
pbsdsh: task 8 exit status 13
pbsdsh: task 15 exit status 13
pbsdsh: task 16 exit status 13
error in gradient step (1)

Has anyone come across such a message? What does it mean, and, more importantly, how can I make the program work?
I'd be grateful for any help.
Marcin Andrzejak
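For anyone else hitting this: the message "too long file name in localfile" suggests that the scratch-file paths the parallel binaries construct exceed an internal character limit. A sketch of how one might check whether the working or scratch directory path is the culprit; the 120-character threshold below is purely illustrative, since the real internal limit depends on the TURBOMOLE version, and TURBOTMPDIR is the environment variable TURBOMOLE uses for local scratch:

```shell
# Sketch: measure the directory path lengths that TURBOMOLE builds its
# local file names from. The 120-character threshold is illustrative
# only; the actual internal limit is version dependent.
for d in "$PWD" "${TURBOTMPDIR:-/tmp}"; do
  len=$(printf '%s' "$d" | wc -c)
  echo "$d : $len characters"
  if [ "$len" -gt 120 ]; then
    echo "  -> consider a shorter path, e.g. by setting TURBOTMPDIR"
  fi
done
```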
Escf and Egrad / Subsequent Escf runs - How to?
« Last post by marcen on March 14, 2018, 10:56:15 am »
Hello Everyone,

I'm pretty new to the escf module of Turbomole, so I am trying to get familiar with it.
But I'm not really sure whether my calculation setup is correct, and I hope someone can help me out.

Right now, I am calculating excited states to obtain CD and UV/VIS spectra. Afterwards, the calculated spectra are compared to experimentally derived spectra in order to identify the absolute configuration of a molecule. Since my molecules have about 140k states to calculate, I split up the calculation into portions of 4k states (calculating 140k states at once does not seem feasible).
In the manual it is said: "In subsequent runs add more excitations until a converged result is reached. escf will keep the converged roots, so not much time is lost using this restart approach."

So now my question is:
Does this mean that I need to change the number of states in the control file in each subsequent run, adding another 4k on top of the previous number, or is Turbomole able to recognize the already calculated states?
As an example, I have calculated the first 4k states in a first escf run with the following $soes flag:
 a         4000

Can I leave the control file with the given $soes flag untouched for a second escf run, or do I need to change it to
 a         8000
in order to tell Turbomole that it has to calculate another 4k states?
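My reading of the manual's restart advice (an interpretation, not something I have verified against escf itself) is that $soes gives the cumulative total, so each restart should raise it, e.g. 4000 to 8000 to 12000, and escf reuses the already converged roots. A sketch of a restart loop that rewrites the count inside the $soes group, assuming it holds a single line of the form " a <number>":

```shell
# Sketch (not verified against escf): raise the total state count in the
# $soes data group step by step and restart escf each time.
for n in 4000 8000 12000; do
  # Within the $soes group (up to the next $ keyword), replace the
  # number on the " a <number>" line with the new total.
  sed -i "/^\$soes/,/^\$/{s/^\( *a *\)[0-9][0-9]*/\1$n/;}" control
  # escf > escf_$n.out   # run on a machine with TURBOMOLE installed
done
```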

Thanks in advance,

