On SX-ACE, MPI and HPF can be used for internode parallelism (distributed memory parallelism).
Learn how to use MPI/SX, the MPI library for SX-ACE, here.
※Please see this page for HPF.

Because MPI is somewhat difficult for beginners, the Cybermedia Center highly recommends that novice users use HPF instead.
 

About MPI

MPI is the de facto standard specification for message passing in distributed-memory parallel processing.

 
MPI allows users to perform computations using multiple computing nodes, so users can benefit from the large total amount of memory available across those nodes.
However, because users themselves have to explicitly write the internode communication and split the computation, programming with MPI is comparatively difficult.
 

MPI can be used from both C and Fortran. Although the syntax differs between the two languages, this page uses Fortran in the explanations below.

Main functionality

MPI offers the following functionalities.

 

・Process management
Initialization and termination of the MPI program
・Point-to-point communication
Communication in which a pair of processes is involved
・Collective communication
Communication in which multiple processes in a communicator are involved

Basic usage

Most of the labor in writing an MPI program goes into the program design itself; there is not much to learn about MPI as such.
You can write an advanced MPI program just by mastering the basics of MPI covered on this page.

 

An example of an MPI program is shown below and is used for reference throughout this page.
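The listing below is a minimal sketch reconstructed to be consistent with the line references and the sample output further down; the variable names (myrank, nprocs) are assumptions:

program hello                                                          ! Line 1
  include 'mpif.h'                                                     ! Line 2
  integer :: nprocs, myrank, ierr                                      ! Line 3
  call MPI_INIT(ierr)                                                  ! Line 4
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)                     ! Line 5
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)                     ! Line 6
  print *, 'HELLO WORLD myrank=', myrank, '(', nprocs, 'processes)'    ! Line 7
  call MPI_FINALIZE(ierr)                                              ! Line 8
end program hello                                                      ! Line 9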

 

Lines 2, 4-6, and 8 are the core of the MPI program (MPI subroutines and MPI-related processing).
 

Line 2 is an include line that reads in the file ('mpif.h') necessary for MPI execution. This line is mandatory.
 

The program execution from MPI_INIT to MPI_FINALIZE is the target of MPI parallelism. When MPI_INIT is called, multiple processes are generated, and Lines 5-7 are then processed in parallel.

 

MPI_COMM_RANK in Line 5 is a subroutine that obtains the identification number (rank) of each parallel process. MPI_COMM_SIZE in Line 6 is a subroutine that obtains the total number of parallel processes involved in the parallel computation.

 

The number of parallel processes is specified when you start your MPI program.
 

The result of executing the above program with 4 processes is as follows. Note that the order of the output lines varies from run to run, because the processes execute concurrently:

HELLO WORLD myrank= 3 ( 4 processes)
HELLO WORLD myrank= 1 ( 4 processes)
HELLO WORLD myrank= 0 ( 4 processes)
HELLO WORLD myrank= 2 ( 4 processes)

 

Basic subroutines

 

MPI_INIT(ierr)

This must be called before any other MPI subroutine.

argument
ierr 0 when the program ends normally; a non-zero error code otherwise

 

MPI_FINALIZE(ierr)

This must be called to finalize MPI execution; no other MPI subroutine may be called after it.

argument
ierr 0 when the program ends normally; a non-zero error code otherwise

 

MPI_COMM_SIZE(comm, size, ierr)

This returns the total number of parallel processes in the communicator "comm" to "size".
Usually MPI_COMM_WORLD, the default communicator of MPI that contains all processes, is used for "comm". A communicator can also be divided as you like, so you can design your MPI program around a more complicated process layout.

argument
comm specify the communicator. "MPI_COMM_WORLD" is used in most cases.
size the total number of parallel processes is returned.
ierr 0 when the program ends normally; a non-zero error code otherwise

 

MPI_COMM_RANK(comm, rank, ierr)

Each parallel process obtains its own identification number, called the rank, within the communicator "comm". Ranks run from 0 to (number of processes - 1).

argument
comm specify the communicator. "MPI_COMM_WORLD" is used in most cases.
rank the identification number of the calling process is returned.
ierr 0 when the program ends normally; a non-zero error code otherwise

 

The example above uses only very basic subroutines; of course, many more useful subroutines are available.
See [Reference] for details.
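For instance, point-to-point communication is done with MPI_SEND and MPI_RECV, and collective communication with subroutines such as MPI_BCAST and MPI_REDUCE. The following is a minimal sketch of point-to-point communication (not part of the example above; run it with at least 2 processes), in which rank 0 sends an integer to rank 1:

program sendrecv
  include 'mpif.h'
  integer :: myrank, ierr, val
  integer :: istat(MPI_STATUS_SIZE)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
  if (myrank == 0) then
    val = 100
    ! rank 0 sends one integer with message tag 0 to rank 1
    call MPI_SEND(val, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
  else if (myrank == 1) then
    ! rank 1 receives one integer with message tag 0 from rank 0
    call MPI_RECV(val, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, istat, ierr)
    print *, 'rank 1 received', val
  end if
  call MPI_FINALIZE(ierr)
end program sendrecv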
 

Compilation

Please be aware that the compilation command differs depending on the programming language.

$ sxmpif90 [options] source_file (Fortran)
$ sxmpicc [options] source_file (C)
$ sxmpic++ [options] source_file (C++)
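
For example, assuming the sample program above is saved as hello.f90, compiling it into an executable named hello would look like:

$ sxmpif90 -o hello hello.f90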

Please see the following page for available options.

How to use the SX cross-compiler

 

 

Execution script

The execution command of MPI is as follows:

mpirun -nn (number of computing nodes to use) -np (total number of processes) (executable filename)
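
For example, to run 32 MPI processes in total across 8 nodes (4 processes per node, matching the batch script below):

$ mpirun -nn 8 -np 32 ./a.out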

 

An example of a script is shown below. This example is an MPI batch request specifying 8-node internode parallelism with 4-process intranode parallelism per node, an elapsed time of 1 hour, and 60GB of memory.
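The following is a sketch assuming an NQS-style batch system. The queue name and the exact directive spellings (-b for the number of nodes, cpunum_job for CPUs per node, elapstim_req for the elapsed time, memsz_job for memory, -T mpisx to mark an MPI/SX job) are assumptions that should be checked against the center's job script documentation:

#!/bin/sh
#PBS -q (queue name)
#PBS -T mpisx
#PBS -b 8
#PBS -l cpunum_job=4
#PBS -l elapstim_req=1:00:00
#PBS -l memsz_job=60GB

cd $PBS_O_WORKDIR
mpirun -nn 8 -np 32 ./a.out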

 

Cautions for multi-node computation

If you specify options and environment variables in your job script file, the specification is reflected only in the master node's environment, not on the slave nodes.
If you want your configuration to be reflected on all nodes, you must add "#PBS -v (environment variable)" to your script file.
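For example, to propagate a runtime setting to all nodes (F_PROGINF, the SX runtime performance-report variable, is used here purely as an illustration):

#PBS -v F_PROGINF=DETAIL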
Details are below:
How to describe a job script (specification of an environment variable)

 

 

Reference

Please see the following page for more complicated usage.
MPI/SX manual (authentication required)

Also, our center periodically provides a training session on MPI. Please consider joining us!
Training session list
Introduction to MPI programming
 

The following materials are actually for PC cluster systems, but are provided here for reference in case you are interested:
Intel MPI Library for Linux* OS Reference Manual (authentication required)
Introduction guide to the Intel MPI Library (authentication required)