TACC: Starting up job 5962364

TACC: Setting up parallel environment for MVAPICH2+mpispawn.
TACC: Starting parallel tasks...
running on   96 total cores
distrk:  each k-point on   96 cores,    1 groups
distr:   one band on    1 cores,   96 groups
using from now: INCAR
vasp.5.3.3 18Dez12 (build Jan 17 2013 09:57:53) complex
POSCAR found type information on POSCAR  Zr O
POSCAR found :  2 types and  273 ions
-----------------------------------------------------------------------------
|                                                                           |
|  W A R N I N G ! ! !   (ASCII-art banner)                                 |
|                                                                           |
|  For optimal performance we recommend that you set                        |
|    NPAR = 4 - approx SQRT( number of cores)                               |
|  (number of cores/NPAR must be integer).                                  |
|  This setting will greatly improve the performance of VASP for DFT.       |
|  The default NPAR = number of cores might be grossly inefficient          |
|  on modern multi-core architectures or massively parallel machines.       |
|  Do your own testing.                                                     |
|  Unfortunately you need to use the default for hybrid, GW and RPA         |
|  calculations.                                                            |
-----------------------------------------------------------------------------
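For the 96-core run in this log, the recommendation above amounts to a one-line INCAR edit. A minimal sketch (illustrative values, not taken from this job's actual INCAR): any NPAR between 4 and roughly SQRT(96) = 9.8 that divides 96 evenly satisfies the integer constraint, for example:

    ! hypothetical INCAR fragment -- NPAR must divide the total core count
    NPAR = 8     ! 96 cores / 8 = 12, an integer; 4 or 6 would also work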

-----------------------------------------------------------------------------
|  ADVICE TO THIS USER RUNNING 'VASP/VAMP' (HEAR YOUR MASTER'S VOICE ...):  |
|                                                                           |
|  You have a (more or less) 'large supercell' and for larger cells it      |
|  might be more efficient to use real-space projection operators.          |
|  So try LREAL = Auto in the INCAR file.                                   |
|  Mind: If you want to do very accurate calculations, keep the             |
|  reciprocal projection scheme (i.e. LREAL = .FALSE.).                     |
-----------------------------------------------------------------------------
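Applied to the 273-ion supercell reported earlier in this log, the advice above is again a single INCAR line. A minimal sketch (an assumed edit for illustration, not part of this job's actual input):

    ! hypothetical INCAR fragment -- projection scheme for a large supercell
    LREAL = Auto        ! real-space projection; faster for large cells
    ! LREAL = .FALSE.   ! reciprocal-space projection; keep for high accuracy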
LDA part: xc-table for Pade appr. of Perdew
POSCAR, INCAR and KPOINTS ok, starting setup
WARNING: small aliasing (wrap around) errors must be expected
FFT: planning ...
WAVECAR not read
initial charge from wavefunction
entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
[c558-703.stampede.tacc.utexas.edu:mpispawn_2][readline] Unexpected End-Of-File on file descriptor 8. MPI process died?
[c558-703.stampede.tacc.utexas.edu:mpispawn_2][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c558-801.stampede.tacc.utexas.edu:mpispawn_4][readline] Unexpected End-Of-File on file descriptor 8. MPI process died?
[c558-801.stampede.tacc.utexas.edu:mpispawn_4][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c558-802.stampede.tacc.utexas.edu:mpispawn_5][readline] Unexpected End-Of-File on file descriptor 18. MPI process died?
[c558-802.stampede.tacc.utexas.edu:mpispawn_5][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c557-504.stampede.tacc.utexas.edu:mpispawn_0][readline] Unexpected End-Of-File on file descriptor 14. MPI process died?
[c557-504.stampede.tacc.utexas.edu:mpispawn_0][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c557-601.stampede.tacc.utexas.edu:mpispawn_1][readline] Unexpected End-Of-File on file descriptor 20. MPI process died?
[c557-601.stampede.tacc.utexas.edu:mpispawn_1][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c558-704.stampede.tacc.utexas.edu:mpispawn_3][readline] Unexpected End-Of-File on file descriptor 9. MPI process died?
[c558-704.stampede.tacc.utexas.edu:mpispawn_3][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c558-703.stampede.tacc.utexas.edu:mpispawn_2][child_handler] MPI process (rank: 43, pid: 2027) terminated with signal 11 -> abort job
[c558-801.stampede.tacc.utexas.edu:mpispawn_4][child_handler] MPI process (rank: 67, pid: 49956) terminated with signal 11 -> abort job
[c558-802.stampede.tacc.utexas.edu:mpispawn_5][child_handler] MPI process (rank: 85, pid: 78558) terminated with signal 11 -> abort job
[c557-601.stampede.tacc.utexas.edu:mpispawn_1][child_handler] MPI process (rank: 19, pid: 52347) terminated with signal 11 -> abort job
[c558-704.stampede.tacc.utexas.edu:mpispawn_3][child_handler] MPI process (rank: 55, pid: 122871) terminated with signal 11 -> abort job
[c557-504.stampede.tacc.utexas.edu:mpispawn_0][child_handler] MPI process (rank: 12, pid: 110497) terminated with signal 11 -> abort job
[c557-504.stampede.tacc.utexas.edu:mpirun_rsh][process_mpispawn_connection] mpispawn_2 from node c558-703 aborted: Error while reading a PMI socket (4)
TACC: MPI job exited with code: 1
TACC: Shutdown complete. Exiting.