The job class tables of the OCTOPUS PC cluster system are as follows:

 

General purpose CPU node cluster

| mode | job class | Maximum elapsed time | Maximum number of cores | Maximum size of memory | Maximum number of nodes | remarks |
|---|---|---|---|---|---|---|
| shared use mode | OCTOPUS | 120 hours | 3,072 cores (24 cores × 128 nodes) | 24,320 GB (190 GB × 128 nodes) | 128 nodes | Please consult us if you want to use more than 128 nodes. |
| | DBG | 10 minutes | 24 cores (24 cores × 1 node) | 190 GB (190 GB × 1 node) | 1 node | for debugging |
| | INTC | 120 hours | 3,072 cores (24 cores × 128 nodes) | 24,320 GB (190 GB × 128 nodes) | 128 nodes | interactive use (unavailable during the free trial). Please consult us if you want to use more than 128 nodes. |
| dedicated use | myOCTOPUS | unlimited | 24 cores × number of dedicated nodes | 190 GB × number of dedicated nodes | number of dedicated nodes | unavailable during the free trial |
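As a concrete illustration, a batch job script for the shared-use OCTOPUS class might look like the sketch below. The `#PBS` directive names (`-q`, `-b`, `elapstim_req`) are assumptions based on NQS-style schedulers, not confirmed by this page, so check the scheduler manual for the exact option names.

```shell
#!/bin/bash
# Sketch of a job script for the shared-use "OCTOPUS" job class.
# NOTE: the #PBS directive names below are assumptions (NQS-style);
# verify them against the site's scheduler manual.
#PBS -q OCTOPUS                # job class from the table above
#PBS -b 2                      # number of nodes (class limit: 128)
#PBS -l elapstim_req=01:00:00  # requested elapsed time (class limit: 120 hours)

echo "job running on $(hostname)"
# the actual computation (e.g. an MPI program) would be launched here
echo "job finished"
```

Because `#PBS` lines are plain comments to the shell, the same file can be syntax-checked locally before submission with `qsub`.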

 

GPU node cluster

| mode | job class | Maximum elapsed time | Maximum number of cores | Maximum size of memory | Maximum number of nodes | remarks |
|---|---|---|---|---|---|---|
| shared use mode | OCTOPUS | 120 hours | 768 cores (24 cores × 32 nodes) | 6,080 GB (190 GB × 32 nodes) | 32 nodes | 4 GPUs per node. Please consult us if you want to use more than 32 nodes. |
| | DBG | 10 minutes | 24 cores (24 cores × 1 node) | 190 GB (190 GB × 1 node) | 1 node | for debugging |
| | INTG | | | | | interactive use (unavailable during the free trial). Please consult us if you want to use more than 32 nodes. |
| dedicated use | myOCTOPUS | unlimited | 24 cores × number of dedicated nodes | 190 GB × number of dedicated nodes | number of dedicated nodes | unavailable during the free trial |

 

Xeon Phi node cluster

| mode | job class | Maximum elapsed time | Maximum number of cores | Maximum size of memory | Maximum number of nodes | remarks |
|---|---|---|---|---|---|---|
| shared use mode | OCTPHI | 120 hours | 2,048 cores (64 cores × 32 nodes) | 6,080 GB (190 GB × 32 nodes) | 32 nodes | Please consult us if you want to use more than 32 nodes. |
| | INTP | | | | | interactive use (unavailable during the free trial). Please consult us if you want to use more than 32 nodes. |
| dedicated use | myOCTPHI | unlimited | 64 cores × number of dedicated nodes | 190 GB × number of dedicated nodes | number of dedicated nodes | unavailable during the free trial |

 

Large-scale Shared memory node cluster

| mode | job class | Maximum elapsed time | Maximum number of cores | Maximum size of memory | Maximum number of nodes | remarks |
|---|---|---|---|---|---|---|
| shared use mode | OCTMEM | 120 hours | 256 cores (128 cores × 2 nodes) | 12 TB (6 TB × 2 nodes) | 2 nodes | |
| | INTM | | | | | interactive use (unavailable during the free trial) |

 

Job Execution Class

Job requests submitted to the job classes above are further categorized into several "Job Execution Classes". (This mechanism is the reason the job class table is subdivided.)
In principle, users do not need to be aware of the job execution class, but it is displayed when users run scheduler commands such as qstat and sstat.
 

Job execution class in shared-use mode

| number of nodes | General purpose CPU cluster | CPU cluster (HPCI user) | GPU cluster | GPU cluster (HPCI user) | Xeon Phi cluster | Xeon Phi cluster (HPCI user) | Large-scale shared memory cluster |
|---|---|---|---|---|---|---|---|
| 256-129 nodes | OC256C | OC256H | - | - | - | - | - |
| 128-65 nodes | OC128C | OC128H | - | - | - | - | - |
| 64-33 nodes | OC64C | OC64H | - | - | - | - | - |
| 32-17 nodes | OC32C | OC32H | OG32C | OG32H | OP32C | OP32H | - |
| 16-2 nodes | OC16C | OC16H | OG16C | OG16H | OP16C | OP16H | - |
| 1 node | OC1C | OC1H | OG1C | OG1H | OP1C | OP1H | OCTMEM |
| 1 node (in the case of the "DBG" job class) | DBG-C | DBG-C | DBG-G | DBG-G | - | - | - |
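The node-count ranges for the general-purpose CPU cluster (shared use, non-HPCI) can be sketched as a small lookup helper. The function itself is ours, not a system command; the class names are transcribed from the table above.

```shell
#!/bin/bash
# Sketch (our own helper, not a scheduler command): map a requested node
# count on the general-purpose CPU node cluster to the shared-use job
# execution class from the table above (non-HPCI users).
exec_class_cpu() {
  local nodes=$1
  if [ "$nodes" -lt 1 ] || [ "$nodes" -gt 256 ]; then
    echo "no execution class for $nodes nodes" >&2; return 1
  elif [ "$nodes" -ge 129 ]; then echo OC256C
  elif [ "$nodes" -ge 65 ];  then echo OC128C
  elif [ "$nodes" -ge 33 ];  then echo OC64C
  elif [ "$nodes" -ge 17 ];  then echo OC32C
  elif [ "$nodes" -ge 2 ];   then echo OC16C
  else                            echo OC1C
  fi
}

exec_class_cpu 100   # prints OC128C
exec_class_cpu 1     # prints OC1C
```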

 

Job execution class in dedicated-use mode

| cluster | job execution class | job execution class (HPCI user) |
|---|---|---|
| General purpose CPU node cluster | OC[GID] | OC[subject ID] |
| GPU node cluster | OG[GID] | OG[subject ID] |
| Xeon Phi node cluster | OP[GID] | OP[subject ID] |

Limitations of Resource Usage

Each queue has upper limits on elapsed time, the number of CPUs available, the number of nodes available, and so on.
Users must keep the requests in their job script files within these limits.
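For example, a request for the shared-use OCTOPUS class on the general-purpose CPU cluster could be pre-checked against the limits from the table above (128 nodes, 24 cores per node, 120 hours). The helper below is a hypothetical sketch of such a check, not part of the system.

```shell
#!/bin/bash
# Hypothetical pre-check (our own helper, not a scheduler command):
# verify a request against the shared-use "OCTOPUS" class limits from
# the table above (max 128 nodes, 24 cores per node, 120 hours).
check_octopus_request() {
  local nodes=$1 cores_per_node=$2 hours=$3
  if [ "$nodes" -gt 128 ]; then
    echo "over 128 nodes: please consult the center"; return 1
  elif [ "$cores_per_node" -gt 24 ]; then
    echo "over 24 cores per node"; return 1
  elif [ "$hours" -gt 120 ]; then
    echo "over 120 hours of elapsed time"; return 1
  fi
  echo "request fits the OCTOPUS job class limits"
}

check_octopus_request 200 24 48   # prints: over 128 nodes: please consult the center
check_octopus_request 64 24 48    # prints: request fits the OCTOPUS job class limits
```

Running such a check before submission avoids jobs being rejected or held by the scheduler for exceeding queue limits.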