Author Topic: Known Bug in Turbomole version 6.4  (Read 3385 times)


Known Bug in Turbomole version 6.4
« on: June 12, 2013, 09:14:48 am »

There is a bug in the MPI-parallel version of dscf (non-RI DFT energy calculations) when all of the following conditions apply:

  • Version 6.4 of Turbomole is used; older and newer versions are not affected
  • the calculation runs MPI-parallel (the SMP version is not affected); MPI is used when a job runs on more than one node over a TCP/IP or InfiniBand network
  • COSMO is switched on
  • DFT is used
  • the DFT quadrature grid size is set to an m-value such as m3 or m4

NOTE: Users who generate COSMO files as input for COSMO-RS run RI-DFT by default and therefore do not have to worry about this bug!

The total energy after convergence is wrong. To check, open the slave1.output file after the parallel calculation and search for:


There is an additional iteration after this line. The exchange-correlation energy and the number of electrons are printed there, e.g.:

                            Exc =    -7.538658807111     N = 9.9993142890

If those values differ significantly from the values in the preceding SCF iterations, the numbers are wrong.
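Instead of inspecting the file by hand, the printed Exc values can be extracted for a quick side-by-side comparison. This is only a sketch: it assumes the `Exc = ... N = ...` line format shown in the example above, which may vary between Turbomole versions.

```shell
# Extract all exchange-correlation energies printed in slave1.output
# (assumes lines of the form "Exc = <value> N = <value>", as in the
# example above; the third whitespace-separated field is the Exc value).
grep 'Exc =' slave1.output | awk '{print $3}'
```

If the last value in this list jumps away from the otherwise converged values, the run is affected.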

The density and the orbitals stored on disk, and all properties calculated with this buggy version, are correct. Geometry optimizations will not converge, because the energy output of each geometry cycle will give different numbers. Your old calculations only need checking if single-point non-RI DFT energy calculations with COSMO ran in parallel across different nodes and you need the total energy itself.

Turbomole version 6.5 does not have this bug, so upgrading to the new version is recommended. Alternatively, a patch for 6.4 is available on the Turbomole download server. Ask your local Turbomole administrator or contact the Turbomole support team.

A simple shell script is attached; download it, make it executable (chmod a+rx check_for_64_bug), and run it in a directory containing the output of parallel Turbomole runs. It checks the output of MPI-parallel dscf (slave1.output) and prints whether the job was affected by the bug.
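The attached script itself is not reproduced here, but its logic might be sketched roughly as follows. This is an assumption about how such a check could work, not the actual script: it compares the last two Exc values printed in slave1.output and flags the run if they differ by more than a tolerance, on the reasoning that the buggy extra iteration prints an Exc value inconsistent with the converged SCF iterations.

```shell
#!/bin/sh
# Hypothetical sketch of a check in the spirit of check_for_64_bug:
# compare the last two Exc values in each slave1.output; a large jump
# in the final, extra iteration indicates the buggy total energy.
for f in slave1.output */slave1.output; do
    [ -f "$f" ] || continue
    # Third field of an "Exc = <value> N = <value>" line is the Exc value
    last_two=$(grep 'Exc =' "$f" | tail -2 | awk '{print $3}')
    set -- $last_two
    if [ $# -lt 2 ]; then
        echo "$f: not enough Exc entries to compare"
        continue
    fi
    # Flag the job if the two values differ by more than 1e-4 Hartree
    # (an assumed tolerance, chosen for illustration only)
    status=$(awk -v a="$1" -v b="$2" \
        'BEGIN { d = a - b; if (d < 0) d = -d;
                 if (d > 1e-4) print "BUGGY"; else print "OK" }')
    echo "$f: $status"
done
```

Run in a directory of parallel Turbomole outputs, this would print one OK/BUGGY line per slave1.output found; the real attached script should be preferred, since it knows the exact output format.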

Sorry for the inconvenience,