LAMMPS is an open-source molecular dynamics application. On SQUID, LAMMPS is available through batch requests.
Basic use (general-purpose CPU nodes, not GPU)
LAMMPS may only be run through batch requests. An example job script and the job submission procedure for LAMMPS (29-Oct-20) are described below.
* Please refer to the official website for the LAMMPS manual.
Writing a job script file
The following example is a job script that runs LAMMPS with 152 processes (2 nodes, 76 processes per node). The file name is arbitrary; in this section it is named "lammps.sh".
#!/bin/bash
#PBS -q SQUID
#PBS -l cpunum_job=76
#PBS --group=[group name]
#PBS -l elapstim_req=01:00:00
#PBS -b 2
#PBS -T intmpi
#PBS -v OMP_NUM_THREADS=1
module load BaseApp
module load lammps/29-Oct-20
cd $PBS_O_WORKDIR
mpirun ${NQSV_MPIOPTS} -np 152 lmp < input_file
For the various LAMMPS input files, please see the official website; a minimal test input is sketched below. For an explanation of the other lines in the job script, see here.
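As a quick way to test the setup, the sketch below writes a small Lennard-Jones melt input, adapted from the in.lj benchmark distributed with LAMMPS. The file name input_file simply matches the placeholder in the job script above; substitute your own input in practice.

# A minimal sketch: generate a small Lennard-Jones melt input for testing,
# adapted from the in.lj benchmark shipped with LAMMPS. "input_file" matches
# the placeholder used in the job script above.
cat > input_file << 'EOF'
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
fix             1 all nve
run             250
EOF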
Packages
The packages already installed are as follows; a sketch for checking them yourself follows the list.
ASPHERE,BODY,CLASS2,COLLOID,COMPRESS,CORESHELL,DIPOLE,GRANULAR,KSPACE,
MANYBODY,MC,MISC,MOLECULE,MPIIO,OPT,PERI,POEMS,PYTHON,QEQ,REPLICA,RIGID,
SHOCK,SNAP,SPIN,SRD,VORONOI,USER-ATC,USER-AWPMD,USER-BOCS,USER-CGDNA,
USER-CGSDK,USER-COLVARS,USER-DIFFRACTION,USER-DPD,USER-DRUDE,USER-EFF,
USER-FEP,USER-INTEL,USER-LB,USER-MANIFOLD,USER-MEAMC,USER-MESODPD,
USER-MGPT,USER-MISC,USER-MOFFF,USER-NETCDF,USER-OMP,USER-PHONON,USER-QTB,
USER-REAXC,USER-SMTBQ,USER-SPH,USER-TALLY,USER-UEF
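To confirm which packages a loaded binary actually contains, the help text printed by LAMMPS's -h switch includes a list of installed packages. A minimal sketch, run on a node where the modules are available (the exact layout of the help output may vary between versions):

# Load the same modules as in the job script, then inspect the help text,
# which lists the packages compiled into this binary.
module load BaseApp
module load lammps/29-Oct-20
lmp -h | less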
How to execute
Submit the job script you created.
% qsub lammps.sh
See here for how to check the status of a submitted job; a brief sketch is also given below. When execution finishes, the calculation results are written to the output file.
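As a minimal sketch using NQSV-style batch commands (exact options may differ slightly on your site; [request ID] is the ID reported by qsub):

# Show the status of your submitted requests.
qstat
# Delete a request if necessary, using the request ID reported by qsub.
qdel [request ID]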
Executing with GPUs
The GPU version is LAMMPS (29-Sep-21). The basic usage is the same as for the CPU-only version, but the job script is different.
Writing a job script file
The following is an example job script that runs 16 processes across 16 GPUs (2 nodes, with 8 processes and 8 GPUs per node).
#!/bin/bash
#PBS -q SQUID
#PBS -l elapstim_req=01:00:00,cpunum_job=8,gpunum_job=8
#PBS --group=[group name]
#PBS -b 2
#PBS -T openmpi
#PBS -v OMP_NUM_THREADS=1
#PBS -v NQSV_MPI_MODULE=BaseApp/2022:lammps/29-Sep-21.GPU
module load BaseApp/2022
module load lammps/29-Sep-21.GPU
cd $PBS_O_WORKDIR
mpirun ${NQSV_MPIOPTS} -np 16 lmp -sf gpu -pk gpu 8 < input_file
For the various LAMMPS input files, please see the official website. For an explanation of the other lines in the job script, see here. A note on the GPU run line follows.
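Regarding the run line: the total -np count is the per-node process count multiplied by the number of nodes requested with #PBS -b, and -pk gpu sets the number of GPUs used per node. A hypothetical single-node variant (with #PBS -b 1) would look like:

# Hypothetical single-node variant: 8 MPI ranks sharing the node's 8 GPUs.
# "-sf gpu" switches supported styles to their GPU-accelerated variants;
# "-pk gpu 8" tells the GPU package to use 8 devices on the node.
mpirun ${NQSV_MPIOPTS} -np 8 lmp -sf gpu -pk gpu 8 < input_file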
Packages
The packages already installed in the GPU version are as follows.
ASPHERE,BODY,CLASS2,COLLOID,COMPRESS,CORESHELL,DIPOLE,GRANULAR,KSPACE,
MANYBODY,MC,MISC,MOLECULE,MPIIO,OPT,PERI,POEMS,PYTHON,QEQ,REPLICA,RIGID,
SHOCK,SNAP,SPIN,SRD,VORONOI,USER-AWPMD,USER-BOCS,USER-CGDNA,
USER-CGSDK,USER-COLVARS,USER-DIFFRACTION,USER-DPD,USER-DRUDE,USER-EFF,
USER-FEP,USER-INTEL,USER-LB,USER-MANIFOLD,USER-MEAMC,USER-MESODPD,
USER-MGPT,USER-MISC,USER-MOFFF,USER-NETCDF,USER-OMP,USER-PHONON,USER-QTB,
USER-REAXC,USER-SMTBQ,USER-SPH,USER-TALLY,USER-UEF