LAMMPS
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code for large-scale parallel simulations of solid-state materials, soft matter and mesoscopic systems.
LAMMPS is available as a module on Apocrita.
Use Spack for additional variants
Common variants of LAMMPS have been installed by the Apocrita ITSR Apps Team. Advanced users may want to install additional variants themselves via Spack.
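If you need a variant that is not provided by the central modules, a self-built installation along the following lines may work. This is only a sketch: the LAMMPS versions and build variants available depend on your Spack setup, so check them first.
$ spack info lammps      # list available versions and build variants
$ spack install lammps   # build and install; append +<variant> specs as needed
$ spack load lammps      # make the self-built lmp binary available in your session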
Versions
Regular and GPU-accelerated versions of LAMMPS are installed on Apocrita.
Usage
To run the required version, load one of the following modules:
- For LAMMPS (non-GPU), load lammps/<version>
- For GPU-accelerated LAMMPS versions, load lammps-gpu/<version>
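To see which versions are currently installed, query the module system (the exact list will vary over time):
$ module avail lammps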
To run the default installed version of LAMMPS, simply load the lammps module:
$ module load lammps
$ mpirun -np ${NSLOTS} lmp -help
Usage example: lmp -var t 300 -echo screen -in in.alloy
...
For full usage documentation, pass the -help option.
Example jobs
Make sure you have an accompanying data file
The examples below specify an input file only, but will often need an accompanying data file (e.g. data.file) to be present in the same directory as well. For more examples, see the example benchmarks in the official LAMMPS GitHub repository.
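If you just want to test a job script before using your own input deck, the following self-contained input, adapted from the standard LAMMPS Lennard-Jones melt example, can be written to in.file; it requires no separate data file:
cat > in.file << 'EOF'
# 3d Lennard-Jones melt (self-contained; no data file required)
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no
fix             1 all nve
thermo          100
run             1000
EOF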
Serial job
Here is an example job running on 4 cores with 16GB of total memory (4GB per core):
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 4
#$ -l h_rt=1:0:0
#$ -l h_vmem=4G
module load lammps
mpirun -np ${NSLOTS} lmp \
-in in.file \
-log output.log
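Assuming the script above has been saved as lammps_serial.sh (an illustrative name), submit it and check its status with the usual Grid Engine commands:
$ qsub lammps_serial.sh
$ qstat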
Parallel job
Here is an example MPI job running on 96 cores across two ddy nodes:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe parallel 96
#$ -l infiniband=ddy-i
#$ -l h_rt=240:0:0
module load lammps
mpirun -np ${NSLOTS} lmp \
-in in.file \
-log output.log
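If you want to confirm that the MPI ranks really are spread across both nodes, a simple check (purely illustrative) can be added to the job script before the lmp command; it prints the number of ranks started on each node:
mpirun -np ${NSLOTS} hostname | sort | uniq -c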
GPU jobs
Use the requested number of GPUs
Always set the NGPUS environment variable and pass it to the GPU package as shown in the examples below. Failure to do this will result in errors such as:
ERROR: Could not find/initialize a specified accelerator device (src/src/GPU/gpu_extra.h:57)
Here is an example job running on 1 GPU. The -sf gpu option switches supported styles to their GPU-accelerated variants, and -pk gpu ${NGPUS} tells the GPU package how many devices to use:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 8
#$ -l h_rt=240:0:0
#$ -l h_vmem=11G
#$ -l gpu=1
# Load the GPU-accelerated version
module load lammps-gpu
# Set NGPUS to the number of GPUs allocated to the job
NGPUS=$(nvidia-smi -L | wc -l)
mpirun -np ${NSLOTS} lmp \
-sf gpu \
-pk gpu ${NGPUS} \
-in in.lc \
-log in.lc.log
Here is an example job running on 2 GPUs:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 16
#$ -l h_rt=240:0:0
#$ -l h_vmem=11G
#$ -l gpu=2
# Load the GPU-accelerated version
module load lammps-gpu
# Set NGPUS to the number of GPUs allocated to the job
NGPUS=$(nvidia-smi -L | wc -l)
mpirun -np ${NSLOTS} lmp \
-sf gpu \
-pk gpu ${NGPUS} \
-in in.lc \
-log in.lc.log
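Once a GPU job has finished, it is worth confirming that the GPU package was actually used. One simple check, assuming the log file name used above, is to search the log for the GPU device information that LAMMPS prints during initialisation (the exact wording varies between versions):
$ grep -i gpu in.lc.log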