Recent Posts

Pages: 1 2 [3] 4 5 ... 10
21
Riper / Plotting orbitals not working
« Last post by luyj on September 27, 2017, 01:10:52 pm »
Hello

I tried to plot some orbitals of a 1D-periodic molecule. Following the advice in the manual, I modified the control file. The calculation seemed to finish without errors, but sadly the .cub files were not created in the calculation folder.

Quote
             +--------------------------------------------------+
             |         Calculation of quantities on grid        |
             +--------------------------------------------------+
             
 Creating .cub files
   *grid construction*
   *plotting orbital     1 of     5*
   calculating values on grid points
   writing orbital values to   orb_k_1_1_1_a_1_real.cub
   *plotting orbital     2 of     5*
   calculating values on grid points
   writing orbital values to   orb_k_1_1_1_a_2_real.cub
   *plotting orbital     3 of     5*
   calculating values on grid points
   writing orbital values to   orb_k_1_1_1_a_3_real.cub
   *plotting orbital     4 of     5*
   calculating values on grid points
   writing orbital values to   orb_k_1_1_1_a_4_real.cub
   *plotting orbital     5 of     5*
   calculating values on grid points
   writing orbital values to   orb_k_1_1_1_a_5_real.cub
Quote
   ****  riper : all done  ****

Maybe I did something wrong, since the example in the manual is only for 3D systems.

My input:

Code: [Select]
$periodic 1
$lattice angs
     8.6727102444608981   -0.0006344509069820    0.0076449746608829
    13.0074129090911157   22.3641390786160628    0.0134372830829742
     0.0238619607550042    0.0023876986523183   27.0526534211542611
$kpoints
 nkpoints 1
$riper
 sigma 0.01
$scfconv   6
$scfdamp   start=0.700  step=0.050  min=0.050
$disp3 bj
$restart   off
$closed shells
 a       1-154                                  ( 2 )
$energy    file=energy
$grad    file=gradient
$last step     riper
$maxcor 1387
$pointvalper fmt=cub
 orbs 5
 k 1 1 1 a 1 r
 k 1 1 1 a 2 r
 k 1 1 1 a 3 r
 k 1 1 1 a 4 r
 k 1 1 1 a 5 r

Hopefully you can help me.
22
Installing the Program / Re: Newly installed Turbomole running slow
« Last post by mariavd on September 15, 2017, 11:10:21 pm »
Hello,

Thank you for the quick reply. Turbomole runs fine on the CSC cluster; I had to set it up on another cluster at the university. That cluster has about a dozen free nodes at the moment, so I would guess that, if the SLURM system is set up correctly, it places each job on CPUs from the same node.

Yes, I allocate all CPUs on the nodes. The Xeon E5-2620 v3 is an octa-core CPU with hyper-threading, so the script requests 16 tasks per node:

Code: [Select]
#SBATCH --nodes 2             # for SMP only 1 is possible
#SBATCH --ntasks-per-node=16  # tasks per node
#SBATCH --ntasks 32           # total number of cores (processes)
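
As a quick sanity check that the tasks really end up where expected, a short command in the job script can print the task distribution (a sketch, assuming srun is available on the cluster):

Code: [Select]
srun hostname | sort | uniq -c
The output counts tasks per node, so 16 tasks on each of the 2 nodes would be expected here.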

The $profile option for a simple ridft run on the cyclopentadienyl cation returned the following:

Code: [Select]
    dscf profiling
  --------------------------------------------------------------------
             module   cpu total (s)       %  wall total (s)       %

         dscf.total             1.2  100.00            18.8  100.00
       dscf.prepare             0.1    5.60             0.8    4.27
     prepare.oneint             0.0    0.72             0.1    0.68
    prepare.moinput             0.0    1.43             0.2    1.30
    prepare.orthmos             0.0    0.52             0.1    0.30
           dscf.scf             1.1   93.88            17.8   94.68
            scf.pre             0.0    0.08             0.1    0.42
       scf.makedmat             0.0    0.22             0.1    0.38
         scf.shlupf             0.8   65.94            12.5   66.40
        dscf.shloop             0.8   63.94            10.7   56.77
         scf.symcar             0.0    0.11             0.0    0.00
       scf.makefock             0.0    0.17             0.3    1.45
         scf.energy             0.0    0.01             0.0    0.00
         scf.pardft             0.3   21.02             3.8   20.21
       dft_grid_con             0.0    0.82             0.0    0.04
         scf.newerg             0.0    0.00             0.0    0.00
         scf.newcnv             0.0    0.35             0.1    0.72
          scf.fdiag             0.0    1.52             0.0    0.25
        diag_tritrn             0.0    0.29             0.0    0.02
         diag_rdiag             0.0    1.14             0.0    0.20
         scf.modump             0.0    2.70             0.5    2.77
           scf.post             0.0    1.58             0.2    1.14
       dscf.postscf             0.0    0.48             0.2    1.04

  ------------------------------------------------------------------------
       total  cpu-time :   1.25 seconds
       total wall-time :  20.15 seconds
  ------------------------------------------------------------------------

The difference between the wall clock time and CPU time is huge, which makes me think that there is some communication delay between separate nodes. I will first contact the IT support at the department, and if needed, the Turbomole support, too.
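
One quick way to test this hypothesis would be to rerun the same job restricted to a single node (a sketch; only the relevant SBATCH lines are shown):

Code: [Select]
#SBATCH --nodes 1            # keep everything on one node
#SBATCH --ntasks-per-node=16
#SBATCH --ntasks 16
If the wall time then drops close to the CPU time, inter-node communication is the likely culprit.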

Cheers,
Maria
23
Installing the Program / Re: Newly installed Turbomole running slow
« Last post by uwe on September 15, 2017, 04:10:02 pm »
Hi,

hm, was that on the SLURM cluster at the CSC in Espoo? Did you allocate all cores on the nodes?

To figure out what is happening, it would be useful to see the timings of the individual steps. If you add $profile to the control file and run the job, you will get detailed timings at the end of the output.
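
For example, the keyword just goes on its own line in the control file, anywhere before $end (a minimal sketch):

Code: [Select]
$profile
$end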

If things do not change even if all CPUs on the nodes are used, please contact the Turbomole support to get help.

Regards,

Uwe

24
Installing the Program / Newly installed Turbomole running slow
« Last post by mariavd on September 15, 2017, 03:08:27 pm »
Hello,

I installed Turbomole 7.1 on a compute cluster and set up the environment variables according to the README. The TTEST suite completed successfully. A sample single-point calculation on benzene works fine; it even completed faster than on the other cluster with the same control file and the same SLURM job script. However, when I ran dscf on a large molecule, the SCF iterations were very slow: on the other cluster dscf finished in 8 minutes, but here the job timed out 2 hours after submission with only 4 iterations completed. In both cases I requested 2 nodes with 32 cores in total, and the CPUs in both clusters are the same (Xeon E5-2620 v3). In the output, dscf reports that it is running on 32 processors in both cases.

What is more likely: that the SLURM system is not set up properly, or that I missed something when setting up Turbomole? I was given the binaries, so I do not know which compiler flags were used.
25
Riper / Re: Visualizing the riper output
« Last post by turbomaster on September 12, 2017, 08:54:26 pm »
TmoleX 4.3 can visualise the riper output.
26
Riper / Visualizing the riper output
« Last post by luyj on September 08, 2017, 06:01:06 pm »
Hello,

I am looking for a convenient way to visualize the output of riper, e.g. from an optimization of cell parameters. The scripts t2x and tm2molden do not include the cell.

Something that can be read with VESTA or ASE would be ideal.
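
A workaround I have been considering is to combine the t2x output with the lattice vectors by hand (a Python sketch; the file names are hypothetical, and the vectors are rounded example values copied manually from the $lattice block of the control file):

Code: [Select]
from ase.io import read, write

atoms = read("coord.xyz")              # structure exported with t2x
atoms.cell = [[ 8.67,  0.00,  0.00],   # lattice vectors in Angstrom,
              [13.01, 22.36,  0.01],   # copied by hand from $lattice
              [ 0.02,  0.00, 27.05]]
atoms.pbc = True
write("structure.cif", atoms)          # CIF files open directly in VESTA
A built-in way that writes the cell automatically would of course be preferable.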
27
Statpt / Re: Statpt segmentation fault for systems > 500 atoms
« Last post by mgt16 on September 07, 2017, 07:56:14 pm »
This solved my issue right away!
Thanks!
Michael.
28
Statpt / Re: Statpt segmentation fault for systems > 500 atoms
« Last post by uwe on September 07, 2017, 11:08:18 am »
Hello,

one usually gets 'Segmentation fault' messages if the (user or system) memory limits are too low, or if you ask for more memory than your machine really has.

Since the statpt module does not consume much memory, I assume that you have hit a user limit (in particular the stack size limit).

To check and resolve this, please see:

http://www.turbo-forum.com/index.php/topic,23.0.html  -- I get "Segmentation Fault", SIGSEGV or "Memory fault"
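
A typical first check (assuming a bash shell on the machine where the job runs):

Code: [Select]
ulimit -s            # show the current stack size limit
ulimit -s unlimited  # lift it before starting the job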

If that does not help, just contact the Turbomole support team (send an email to turbomole at cosmologic.de).

Regards,

Uwe
29
Statpt / Statpt segmentation fault for systems > 500 atoms
« Last post by mgt16 on September 06, 2017, 10:50:34 pm »
All,
I have been performing geometry optimizations of nanoclusters of more than 500 atoms; statpt is run by the jobex script. jobex fails after the ridft energy has converged during the geometry optimization step, and from what I can see, statpt might be running out of memory.

The GEO_OPT_FAILED file contains only a warning to look at the statpt.tmpout file, but statpt.tmpout ends abruptly without any unusual error message:
Code: [Select]
  norm of actual CARTESIAN gradient:  2.99848E+00

  ENERGY =  -23769.9570481000 a.u.; # of cycle =    1


 keyword $hessapprox missing in file <control>


  USING DIAGONAL INITIAL HESSIAN
  Diagonal elements set to:    0.50000

This was not very helpful, so I looked at the jobex output file.

The jobex output file only contains this:
Code: [Select]
/ihome/crc/install/turbomole/7.02/TURBOMOLE/scripts/jobex: line 528: 149335 Segmentation fault      /ihome/crc/install/turbomole/7.02/TURBOMOLE/bin/em64t-unknown-linux-gnu_smp/statpt

I have run several similar systems on our cluster before, but recently I have received this error for two important systems.
I would appreciate any help!
Thanks,
Michael.

30
TURBOMOLE Forum General / Re: wfn files
« Last post by Arnim on August 22, 2017, 09:59:24 am »
Hello,

you can run 'ridft -proper' to bypass the SCF cycles, or use the new proper tool.
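
For example, in the directory of a converged calculation (a sketch):

Code: [Select]
ridft -proper    # skips the SCF iterations and runs only the property section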

Cheers,

Arnim