I want to use the ppn, rr, and prehost options with Intel MPI

On OCTOPUS and SQUID, a machinefile is generated automatically from the value specified with #PBS -l cpunum_job and passed to MPI through the environment variable $NQSII_MPIOPTS / $NQSV_MPIOPTS. Options such as ppn, rr, and prehost cannot be specified together with this machinefile.
For example, even if you intend to run 128 MPI processes with 64 processes per node and specify the following, the ppn option will not take effect:

mpirun ${NQSV_MPIOPTS} -np 128 -ppn 64 ./a.out

In the basic usage, you can create 128 MPI processes with 64 processes per node by specifying the following:

#PBS -l cpunum_job=64
(...)
mpirun ${NQSV_MPIOPTS} -np 128 ./a.out

The environment variable $NQSV_MPIOPTS expands to the following option and machinefile path:

-machinefile /var/opt/nec/nqsv/jsv/jobfile/[request ID, etc.]/mpinodes

The mpinodes file is a machinefile; for the case above, it lists the host names and numbers of cores as follows:

host001:64
host002:64
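
If you want to confirm what was generated for your request, a minimal check inside the job script might look like the following (assuming, as described above, that $NQSV_MPIOPTS holds exactly a "-machinefile <path>" pair):

echo ${NQSV_MPIOPTS}                             # print the option string set by the batch system
cat $(echo ${NQSV_MPIOPTS} | awk '{print $2}')   # show the generated mpinodes file (host:cores lines)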

However, when you need finer control over process placement (for example, to pin processes and keep specific cores free of compute processes), the setup above may not work in some cases. To use the ppn, rr, or prehost options, specify the -hostfile option with the $PBS_NODEFILE environment variable instead of $NQSII_MPIOPTS / $NQSV_MPIOPTS. To create 128 MPI processes with 64 processes per node, specify the following:

mpirun -hostfile ${PBS_NODEFILE} -np 128 -ppn 64 ./a.out
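
If the goal is pinning, for example keeping core 0 free on every 64-core node, Intel MPI's pinning variables can be combined with the hostfile. The sketch below is only illustrative; the core list and process counts are assumptions, not recommended settings for OCTOPUS or SQUID.

export I_MPI_PIN=1
export I_MPI_PIN_PROCESSOR_LIST=1-63        # hypothetical: leave core 0 unused on each 64-core node
mpirun -hostfile ${PBS_NODEFILE} -np 126 -ppn 63 ./a.out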

Note that when $PBS_NODEFILE is used, the value specified with #PBS -l cpunum_job is not passed to MPI, so you must check the number of processes and their placement yourself.
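
One simple way to check the placement is to run a trivial command through the same mpirun line before the real executable; each host should then appear 64 times in the output. The hostname/sort/uniq pipeline below is just one way to do this.

mpirun -hostfile ${PBS_NODEFILE} -np 128 -ppn 64 hostname | sort | uniq -c   # expect a count of 64 per host
mpirun -hostfile ${PBS_NODEFILE} -np 128 -ppn 64 ./a.out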