You must use "NEC MPI" for distributed-memory parallel processing across multiple vector engines on the SQUID vector nodes.
 

About MPI

MPI is the de facto standard message-passing specification for distributed-memory parallel processing.
 
MPI allows users to perform computations across multiple computing nodes, so programs can benefit from the large total amount of memory available. However, because users themselves must explicitly write the inter-node communication and divide the computation, programming with MPI takes more effort.
 
MPI can be used from both C and Fortran. Although the syntax differs between the two languages, this page uses Fortran in the explanations below.
 

Main functionality

MPI offers the following functionalities.
 
・Process management
Initialization and termination of the MPI program.
・Point-to-point communication
Communication in which a specific pair of processes is involved.
・Collective communication
Communication in which all processes in a communicator are involved (a short sketch of both communication types follows this list).
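As a brief illustration (this sketch is not part of the original page; the routines are standard MPI), the following Fortran fragment shows both communication types: rank 0 sends an integer to rank 1 with the point-to-point routines MPI_SEND/MPI_RECV, then broadcasts it to all processes with the collective routine MPI_BCAST. It assumes at least 2 processes.

include 'mpif.h'
integer :: myrank, ival, ierr
integer :: istat(mpi_status_size)
call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world, myrank, ierr)
! Point-to-point: rank 0 sends one integer to rank 1
if (myrank == 0) then
   ival = 100
   call mpi_send(ival, 1, mpi_integer, 1, 0, mpi_comm_world, ierr)
else if (myrank == 1) then
   call mpi_recv(ival, 1, mpi_integer, 0, 0, mpi_comm_world, istat, ierr)
end if
! Collective: rank 0 broadcasts the value to every process in the communicator
call mpi_bcast(ival, 1, mpi_integer, 0, mpi_comm_world, ierr)
call mpi_finalize(ierr)
end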

 

Basic usage

Most of the effort in writing an MPI program goes into program design; there is not much to learn to start using MPI itself. By mastering the basics of MPI on this page, you can already write fairly advanced MPI programs.
 
An example MPI program is shown below. The rest of this page refers to this program.
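The listing is reconstructed below as a minimal sketch that is consistent with the description and the sample output that follow (the exact original source may differ, e.g. in the I/O statement); the line numbers in the trailing comments correspond to the line references in the text.

program main                                         ! line 1
   include 'mpif.h'                                  ! line 2
   integer :: nprocs, myrank, ierr                   ! line 3
   call mpi_init(ierr)                               ! line 4
   call mpi_comm_rank(mpi_comm_world, myrank, ierr)  ! line 5
   call mpi_comm_size(mpi_comm_world, nprocs, ierr)  ! line 6
   print *, 'HELLO WORLD myrank=', myrank, '(', nprocs, 'processes)'  ! line 7
   call mpi_finalize(ierr)                           ! line 8
end program main                                     ! line 9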

 

The highlighted parts (lines 2, 4-6, and 8) are the core of the MPI program (MPI subroutine calls and MPI-related processing).
 

Line 2 is an include statement that reads in the file required for MPI execution. This is mandatory.
 

The part of the program between MPI_INIT and MPI_FINALIZE is the target of MPI parallelization. When MPI_INIT is called, multiple processes are created, and lines 5-7 are executed in parallel.

 

MPI_COMM_RANK in line 5 is a subroutine that obtains the identification number (rank) of each parallel process. MPI_COMM_SIZE in line 6 is a subroutine that obtains the total number of processes involved in the parallel computation.

 

The number of parallel processes is specified when you start your MPI program.
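For example, the program above would be started with 4 processes as follows, using the mpirun command described in the Execution script section (the executable name a.out is illustrative):

mpirun ${NQSV_MPIOPTS} -np 4 ./a.out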
 

The result of executing the above program with 4 processes is as follows:

HELLO WORLD myrank= 3 ( 4 processes)
HELLO WORLD myrank= 1 ( 4 processes)
HELLO WORLD myrank= 0 ( 4 processes)
HELLO WORLD myrank= 2 ( 4 processes)

 

Basic subroutines

 

MPI_INIT (ierr)

This must be called before any other MPI routine is used.

argument
ierr Set to 0 when the call completes normally; a non-zero error code otherwise.

 

MPI_FINALIZE (ierr)

This must be called to finalize MPI execution; it should be the last MPI routine called.

argument
ierr Set to 0 when the call completes normally; a non-zero error code otherwise.

 

MPI_COMM_SIZE(comm, size, ierr)

This returns in "size" the total number of parallel processes in the communicator "comm".
Usually MPI_COMM_WORLD, the predefined communicator containing all processes, is passed as "comm". A communicator can also be divided as needed, which lets you design MPI programs for more complicated processing (a short sketch of communicator splitting follows this entry).

argument
comm Specifies the communicator. "MPI_COMM_WORLD" is used in most cases.
size The total number of parallel processes is returned here.
ierr Set to 0 when the call completes normally; a non-zero error code otherwise.
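As a hedged illustration of dividing a communicator (not taken from the original page), the standard routine MPI_COMM_SPLIT creates sub-communicators; here the processes of MPI_COMM_WORLD are split into two groups according to even or odd rank.

include 'mpif.h'
integer :: myrank, color, newcomm, newsize, ierr
call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world, myrank, ierr)
! Split MPI_COMM_WORLD into two sub-communicators by even/odd rank
color = mod(myrank, 2)
call mpi_comm_split(mpi_comm_world, color, myrank, newcomm, ierr)
! Size of the sub-communicator this process now belongs to
call mpi_comm_size(newcomm, newsize, ierr)
print *, 'global rank', myrank, 'is in a group of', newsize, 'processes'
call mpi_comm_free(newcomm, ierr)
call mpi_finalize(ierr)
end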

 

MPI_COMM_RANK(comm, rank, ierr)

Each parallel process obtains in "rank" its own identification number (rank) within the communicator "comm".

argument
comm Specifies the communicator. "MPI_COMM_WORLD" is used in most cases.
rank The identification number of the calling process is returned here.
ierr Set to 0 when the call completes normally; a non-zero error code otherwise.

 
The example above uses only the most basic subroutines. Many other useful subroutines are also provided.
See [Reference] for details.
 

Compilation

The NEC MPI compiler commands are available. Please be aware that the command name differs depending on the language.

$ module load BaseVEC

$ mpinfort [options] source_file (Fortran)
$ mpincc [options] source_file (C)
$ mpinc++ [options] source_file (C++)
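For example, assuming the hello-world program above is saved as hello.f90 (the file and executable names are illustrative), it could be compiled as follows:

$ module load BaseVEC
$ mpinfort -o hello hello.f90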

Please see the following page for available options.
How to use NEC compiler

 

 

Execution script

The execution command of MPI is as follows:

mpirun ${NQSV_MPIOPTS} -np (number of processes) execution-filename

 

An example script is shown below: parallel execution on 2 nodes (16 vector engines in parallel, 160 cores in total), submitted as an MPI batch request with an elapsed-time limit of 1 hour. 8 vector engines (SX-Aurora TSUBASA) are installed per node, and each vector engine has 10 cores. The number of vector engines to use is specified with venode.
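The following is a minimal sketch of such a job script. The queue name, group name, and resource directives are assumptions based on typical SQUID/NQSV usage and must be adapted to your environment.

#!/bin/bash
#PBS -q SQUID                    # queue name (assumption; check your site's queues)
#PBS --group=G01234              # group name (placeholder)
#PBS -l elapstim_req=01:00:00    # elapsed-time limit of 1 hour
#PBS --venode=16                 # 16 vector engines = 2 nodes x 8 VEs
#PBS -T necmpi                   # request NEC MPI execution (assumption)
module load BaseVEC
cd $PBS_O_WORKDIR
mpirun ${NQSV_MPIOPTS} -np 160 ./a.out   # 160 processes = 16 VEs x 10 cores; adjust as needed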

 
 

Cautions when performing multi-node computation

If you specify options and environment variables in your job script file, the settings are reflected only in the master node's environment, not on the slave nodes.
If you want your settings to be applied on all nodes, you must add "#PBS -v (option name)" to your script file, as illustrated below.
For details, see:
How to describe a job script: specification of an environment variable
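As a hedged one-line illustration (the variable name is only an example), an environment variable would be propagated to all nodes like this:

#PBS -v VE_PROGINF=DETAIL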
 

 

Reference

SX-Aurora TSUBASA