The following tables list the job classes available on SQUID. Note that the available job classes differ depending on your usage type (shared or dedicated).


General-purpose CPU nodes

| Usage | Job class | Maximum elapsed time | Maximum number of cores | Maximum memory size | Maximum number of nodes | Remarks |
|---|---|---|---|---|---|---|
| Shared use | SQUID | 120 hours | 38,912 cores (76 cores × 512 nodes) | 124 TB (248 GB × 512 nodes) | 512 nodes | - |
| Shared use | SQUID-R | 120 hours | 38,912 cores (76 cores × 512 nodes) | 124 TB (248 GB × 512 nodes) | 512 nodes | Permits allocation of nodes across multiple sub-clusters (*1) |
| Shared use | SQUID-H | 120 hours | 38,912 cores (76 cores × 512 nodes) | 124 TB (248 GB × 512 nodes) | 512 nodes | High priority (*2) |
| Shared use | SQUID-S | 120 hours | 38 cores (76 cores × 1/2 node) | 124 GB (248 GB × 1/2 node) | 1/2 node | Allows multiple jobs to be assigned to a single node (*3) |
| Shared use | DBG | 10 minutes | 152 cores (76 cores × 2 nodes) | 496 GB (248 GB × 2 nodes) | 2 nodes | For debugging |
| Shared use | INTC | 10 minutes | 152 cores (76 cores × 2 nodes) | 496 GB (248 GB × 2 nodes) | 2 nodes | For interactive use |
| Dedicated use | mySQUID | Unlimited | 76 cores × number of dedicated nodes | 248 GB × number of dedicated nodes | Number of dedicated nodes | - |

*1: This job class allows allocation of nodes across multiple sub-clusters (over routes with narrower interconnect bandwidth), which may shorten the wait time before execution.
*2: Jobs run with high priority, which shortens the wait time before execution but consumes more SQUID points.
*3: Allows resource sharing with other jobs within a single node. Point consumption is smaller, but the job may be affected by other jobs on the same node.
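
For reference, a minimal job script for the shared-use SQUID class on the general-purpose CPU nodes might look like the sketch below. It assumes the NQSV-style #PBS directives used on SQUID; the group name G12345 and the program ./a.out are placeholders, so check the job submission guide for the exact options available to your account.

    #!/bin/bash
    #PBS -q SQUID                  # job class from the table above
    #PBS --group=G12345            # placeholder: your SQUID group name
    #PBS -l elapstim_req=24:00:00  # elapsed time, within the 120-hour limit
    #PBS -l cpunum_job=76          # cores per node (each node has 76 cores)
    #PBS -b 4                      # number of nodes, within the 512-node limit
    cd $PBS_O_WORKDIR              # move to the submission directory
    ./a.out                        # placeholder program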


GPU nodes

| Usage | Job class | Maximum elapsed time | Maximum number of cores | Maximum memory size | Maximum number of nodes | Remarks |
|---|---|---|---|---|---|---|
| Shared use | SQUID | 120 hours | 2,432 cores (76 cores × 32 nodes) | 15.75 TB (504 GB × 32 nodes) | 32 nodes | 8 GPUs per node |
| Shared use | SQUID-H | 120 hours | 2,432 cores (76 cores × 32 nodes) | 15.75 TB (504 GB × 32 nodes) | 32 nodes | 8 GPUs per node; high priority (*1) |
| Shared use | SQUID-S | 120 hours | 38 cores (76 cores × 1/2 node) | 252 GB (504 GB × 1/2 node) | 1/2 node | Maximum 4 GPUs; allows multiple jobs to be assigned to a single node (*2) |
| Shared use | DBG | 10 minutes | 152 cores (76 cores × 2 nodes) | 1,008 GB (504 GB × 2 nodes) | 2 nodes | 8 GPUs per node; for debugging |
| Shared use | INTG | 10 minutes | 152 cores (76 cores × 2 nodes) | 1,008 GB (504 GB × 2 nodes) | 2 nodes | 8 GPUs per node; for interactive use |
| Dedicated use | mySQUID | Unlimited | 76 cores × number of dedicated nodes | 504 GB × number of dedicated nodes | Number of dedicated nodes | - |

*1: Jobs run with high priority, which shortens the wait time before execution but consumes more SQUID points.
*2: Allows resource sharing with other jobs within a single node. Point consumption is smaller, but the job may be affected by other jobs on the same node.
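
Similarly, a hedged sketch of a GPU-node job script, assuming GPUs are requested through the NQSV gpunum_job resource (the group name and program are again placeholders):

    #!/bin/bash
    #PBS -q SQUID                  # GPU-node job class
    #PBS --group=G12345            # placeholder: your SQUID group name
    #PBS -l elapstim_req=12:00:00  # elapsed time, within the 120-hour limit
    #PBS -l gpunum_job=8           # GPUs per node (each node has 8 GPUs)
    #PBS -b 2                      # number of nodes, within the 32-node limit
    cd $PBS_O_WORKDIR              # move to the submission directory
    ./gpu_program                  # placeholder program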


Vector nodes

| Usage | Job class | Maximum elapsed time | Maximum number of cores | Maximum memory size | Maximum number of VEs | Remarks |
|---|---|---|---|---|---|---|
| Shared use | SQUID | 120 hours | 2,560 cores (10 cores × 256 VEs) | 12 TB (48 GB × 256 VEs) | 256 VEs | 8 VEs per node |
| Shared use | SQUID-H | 120 hours | 2,560 cores (10 cores × 256 VEs) | 12 TB (48 GB × 256 VEs) | 256 VEs | 8 VEs per node; high priority (*1) |
| Shared use | SQUID-S | 120 hours | 40 cores (10 cores × 4 VEs) | 192 GB (48 GB × 4 VEs) | 4 VEs | Allows multiple jobs to be assigned to a single node (*2) |
| Shared use | DBG | 10 minutes | 40 cores (10 cores × 4 VEs) | 192 GB (48 GB × 4 VEs) | 4 VEs | 8 VEs per node; for debugging |
| Shared use | INTV | 10 minutes | 40 cores (10 cores × 4 VEs) | 192 GB (48 GB × 4 VEs) | 4 VEs | 8 VEs per node; for interactive use |
| Dedicated use | mySQUID | Unlimited | 10 cores × number of dedicated VEs | 48 GB × number of dedicated VEs | Number of dedicated VEs | - |

*1: Jobs run with high priority, which shortens the wait time before execution but consumes more SQUID points.
*2: Allows resource sharing with other jobs within a single node. Point consumption is smaller, but the job may be affected by other jobs on the same node.
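
A corresponding sketch for the vector nodes, assuming vector engines are requested with the NQSV --venode option (placeholders as before):

    #!/bin/bash
    #PBS -q SQUID                  # vector-node job class
    #PBS --group=G12345            # placeholder: your SQUID group name
    #PBS -l elapstim_req=01:00:00  # elapsed time, within the 120-hour limit
    #PBS --venode=8                # number of VEs, within the 256-VE limit
    cd $PBS_O_WORKDIR              # move to the submission directory
    ./vector_program               # placeholder program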


Resource Limitations

For each job class, there are limits on elapsed time, number of cores, memory size, and number of nodes, as listed in the tables above.
When writing a job script, make sure the resources you request stay within these limits (see the submission example below).
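
For example, assuming the standard NQSV commands, a script such as the sketches above is submitted with qsub and monitored with qstat; a request that exceeds the limits of its job class will not be scheduled for execution.

    qsub cpu_job.sh    # submit the job script (cpu_job.sh is a placeholder name)
    qstat              # check the status of submitted requests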