In addition to node-to-node parallelism using MPI, intra-node parallel execution is possible through automatic parallelization or OpenMP.
How to use automatic parallelization / OpenMP


You can use the NEC MPI compilers on the SQUID vector nodes. Please note that the command differs for each programming language.

MPI and Auto Parallelization

$ module load BaseVEC/2021
$ mpinfort -mparallel [options] source_file (Fortran)
$ mpincc -mparallel [options] source_file (C)
$ mpinc++ -mparallel [options] source_file (C++)

MPI and OpenMP

$ module load BaseVEC/2021
$ mpinfort -fopenmp [options] source_file (Fortran)
$ mpincc -fopenmp [options] source_file (C)
$ mpinc++ -fopenmp [options] source_file (C++)


Job script

The following is an example script for running an NEC MPI & automatic parallelization/OpenMP program on the SQUID vector node group.
It submits an MPI batch request for 4-node parallel execution (32 vector engines run in parallel under NEC MPI, and the 10 cores within each vector engine run in parallel via automatic parallelization/OpenMP) with an elapsed time limit of 1 hour. Each node is equipped with 8 vector engines (SX-Aurora TSUBASA), and each vector engine has 10 cores. The number of vector engines to use is specified with venode.
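A minimal sketch of such a job script follows. The queue name, group ID, and resource option names (`-q`, `--group`, `elapstim_req`, `-b`) are assumptions based on typical NQSV/PBS-style settings; check the SQUID site documentation for the exact spellings on your system.

```shell
#!/bin/bash
#PBS -q SQUID                     # queue name (assumed placeholder)
#PBS --group=G12345               # group ID (placeholder)
#PBS -l elapstim_req=01:00:00     # elapsed time limit: 1 hour
#PBS -b 4                         # number of nodes (assumed option name)
#PBS -v OMP_NUM_THREADS=10        # 10 threads per vector engine, set on all nodes

module load BaseVEC/2021

cd $PBS_O_WORKDIR

# 32 MPI processes in total: one per vector engine
# (8 vector engines per node x 4 nodes)
mpirun -venode -np 32 ./a.out
```

This is a configuration sketch, not a runnable program; adapt the header lines to your account and queue before submitting.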


Notes on multi-node execution

If you set options or environment variables with "setenv (option name)" or similar in the job script, they are set only on the master node, not on the slave nodes.
To apply the setting to all nodes, specify "#PBS -v (option name)" instead. The details are described in the following:
How to write the job script: Specifying environment variables
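As an illustration of the difference, the fragment below contrasts the two ways of setting the OpenMP thread count. `OMP_NUM_THREADS` is the standard OpenMP variable; the surrounding lines are illustrative, not a complete script.

```shell
# Set inside the script body: takes effect only on the master node,
# so slave nodes would run with a different thread count.
# setenv OMP_NUM_THREADS 10

# Set via the request header: propagated to all allocated nodes.
#PBS -v OMP_NUM_THREADS=10
```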