2024.04.23

mdxⅡ

The “mdxⅡ” is a system with a total theoretical computational performance of 430.08 TFLOPS, consisting of regular compute nodes (Red Hat OpenStack Platform), interoperable nodes (VMware vSphere), and storage groups. It is operated by the institutions that make up the Data Utilization Society Creation Platform collaborative project. Each resource within the system is provided as a virtual machine. Please see this page for details.

 

System Configuration

Total computational performance: 430.08 TFLOPS

Regular compute nodes: 54 nodes (387.072 TFLOPS)
・CPU: Intel Xeon Platinum 8480+ (2.0 GHz, 56 cores) x 2
・Theoretical computational performance per node: 7.168 TFLOPS
・Main memory capacity: 512 GB
・Auxiliary storage capacity: 960 GB

Interoperable nodes: 6 nodes (43.008 TFLOPS)
・CPU: Intel Xeon Platinum 8480+ (2.0 GHz, 56 cores) x 2
・Theoretical computational performance per node: 7.168 TFLOPS
・Main memory capacity: 512 GB
・Auxiliary storage capacity: 960 GB

Storage
・Lustre file storage: DDN EXAScaler, effective capacity 553.24 TB (NVMe)
・Object storage: Cloudian HyperStore, effective capacity 432 TB

Node interconnect: 200 GbE Ethernet
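As a quick consistency check, the per-node figure follows from the clock rate and core count, assuming 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units; this factor is an assumption, not stated above):

```python
# Sanity-check the mdxII peak-performance figures from the table above.
# Assumption: 32 double-precision FLOPs per core per cycle
# (2 AVX-512 FMA units x 8 DP lanes x 2 ops per FMA).
GHZ = 2.0
CORES_PER_CPU = 56
CPUS_PER_NODE = 2
FLOPS_PER_CORE_PER_CYCLE = 32

node_tflops = GHZ * FLOPS_PER_CORE_PER_CYCLE * CORES_PER_CPU * CPUS_PER_NODE / 1000

print(round(node_tflops, 3))       # per node: 7.168 TFLOPS
print(round(54 * node_tflops, 3))  # regular compute nodes: 387.072 TFLOPS
print(round(6 * node_tflops, 3))   # interoperable nodes: 43.008 TFLOPS
print(round(60 * node_tflops, 2))  # system total: 430.08 TFLOPS
```

All four values match the table.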

 

Virtual Machines

In mdxⅡ, you can build virtual machines with specifications that match your needs.
For example, an application for 100 CPU packs gives you a virtual machine with 100 virtual cores and 200 GB of memory.
* One virtual core is equivalent to 0.5 physical cores.

CPU pack
・CPU core count: 1 virtual core
・Memory: 2 GB
・Theoretical computational performance per virtual core: approx. 32 GFLOPS
・Maximum packs per virtual machine: 224 packs (224 virtual cores)
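The pack-to-resource mapping is simple arithmetic; a minimal sketch using the per-pack figures above (the `vm_resources` helper is illustrative, not part of the mdxⅡ tooling):

```python
# Translate a CPU-pack request into mdxII virtual-machine resources,
# using the per-pack figures from the table above.
MAX_PACKS = 224            # maximum packs per virtual machine
VCORES_PER_PACK = 1
MEM_GB_PER_PACK = 2
PHYS_CORES_PER_VCORE = 0.5
GFLOPS_PER_VCORE = 32      # approx.

def vm_resources(packs: int) -> dict:
    """Return the VM resources granted by an application for `packs` packs."""
    if not 1 <= packs <= MAX_PACKS:
        raise ValueError(f"packs must be between 1 and {MAX_PACKS}")
    return {
        "virtual_cores": packs * VCORES_PER_PACK,
        "memory_gb": packs * MEM_GB_PER_PACK,
        "physical_cores": packs * PHYS_CORES_PER_VCORE,
        "approx_gflops": packs * GFLOPS_PER_VCORE,
    }

# The 100-pack example from the text: 100 virtual cores, 200 GB of memory.
print(vm_resources(100))
```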

 

2024.03.13

OCTOPUS

OCTOPUS retired on March 29, 2024.
 
OCTOPUS (Osaka university Cybermedia cenTer Over-Petascale Universal Supercomputer) is a cluster system that started operation in December 2017. The system comprises four types of clusters, totaling 319 nodes: general-purpose CPU nodes, GPU nodes, Xeon Phi nodes, and large-scale shared-memory nodes.

System Configuration

Theoretical computing speed: 1.463 PFLOPS

General-purpose CPU nodes: 236 nodes (471.24 TFLOPS)
・CPU: Intel Xeon Gold 6126 (Skylake / 2.6 GHz, 12 cores) x 2
・Memory: 192 GB

GPU nodes: 37 nodes (858.28 TFLOPS)
・CPU: Intel Xeon Gold 6126 (Skylake / 2.6 GHz, 12 cores) x 2
・GPU: NVIDIA Tesla P100 (NVLink) x 4
・Memory: 192 GB

Xeon Phi nodes: 44 nodes (117.14 TFLOPS)
・CPU: Intel Xeon Phi 7210 (Knights Landing / 1.3 GHz, 64 cores) x 1
・Memory: 192 GB

Large-scale shared-memory nodes: 2 nodes (16.38 TFLOPS)
・CPU: Intel Xeon Platinum 8153 (Skylake / 2.0 GHz, 16 cores) x 8
・Memory: 6 TB

Interconnect: InfiniBand EDR (100 Gbps)
Storage: DDN EXAScaler (Lustre / 3.1 PB)

* This information is as of July 21, 2017, so performance values may differ slightly. Thank you for your understanding.
 


Technical material

Please see the following page:
ペタフロップス級ハイブリッド型スーパーコンピュータ OCTOPUS : Osaka university Cybermedia cenTer Over-Petascale Universal Supercomputer ~サイバーメディアセンターのスーパーコンピューティング事業の再生と躍進にむけて~ [DOI: 10.18910/70826]

2020.11.25

SQUID

SQUID (Supercomputer for Quest to Unsolved Interdisciplinary Datascience), also known as the supercomputer system for HPC and HPDA, started operation on May 1, 2021. The system comprises three types of clusters: general-purpose CPU nodes, GPU nodes, and vector nodes. Total peak performance is 16.591 PFLOPS.
 

System Configuration

Theoretical computing speed: 16.591 PFLOPS

General-purpose CPU nodes: 1,520 nodes (8.871 PFLOPS)
・CPU: Intel Xeon Platinum 8368 (Ice Lake / 2.4 GHz, 38 cores) x 2
・Memory: 256 GB

GPU nodes: 42 nodes (6.797 PFLOPS)
・CPU: Intel Xeon Platinum 8368 (Ice Lake / 2.4 GHz, 38 cores) x 2
・Memory: 512 GB
・GPU: NVIDIA A100 x 8

Vector nodes: 36 nodes (0.922 PFLOPS)
・Vector host: AMD EPYC 7402P (2.8 GHz, 24 cores) x 1, memory 128 GB
・Vector engine: NEC SX-Aurora TSUBASA Type20A (10 cores) x 8, memory 48 GB

Storage: DDN EXAScaler (Lustre), HDD 20.0 PB, NVMe 1.2 PB
Interconnect: Mellanox InfiniBand HDR (200 Gbps)

 




2016.10.15

ONION

ONION (Osaka university Next-generation Infrastructure for Open research and open InnovatioN) is a data aggregation platform linked to SQUID. ONION consists of "EXAScaler", the file system of SQUID; "ONION-file", a web storage service; and "ONION-object", an object storage service.
ONION makes it easy to transfer data between your PC and the supercomputer. It can also be used in a variety of ways, such as immediately sharing calculation results with overseas or corporate collaborators who do not have a SQUID or OCTOPUS account, or manipulating data from a smartphone. Of course, it can also be used to store and share research data within a laboratory.
 

The following paper describes the background, system configuration, and details of the functions.
ONION Osaka University's Data Aggregation Infrastructure
 

Application for use

We provide EXAScaler and ONION-file as part of SQUID. Please see this page for details on how to apply.
To use ONION-object, you have to apply separately from SQUID. Please see this page for applications, consultation, and details of the service.

* Please refer to this page for the usage fees of SQUID and ONION-object. ONION-object is listed as "ONION (object storage)".

 

System configuration

EXAScaler

EXAScaler is a Lustre-based parallel file system made by DDN.
 
Main features
- Save, view, move, and delete data
- Input and output data from SQUID compute nodes
- Access data from client software that supports SFTP or S3

Effective capacity (HDD): 20 PB
Effective capacity (NVMe): 1.2 PB
Maximum number of inodes: about 8.8 billion
Maximum expected effective throughput (HDD): over 160 GB/s
Maximum expected effective throughput (NVMe): write over 160 GB/s, read over 180 GB/s

 

ONION-file

ONION-file is a web storage service based on Nextcloud.
All operations and settings can be performed through a web browser. By default, only the SQUID home area on EXAScaler is linked, but any external storage compatible with WebDAV, SFTP, or S3 can also be linked (for example, the OCTOPUS work area, or the ONION-object service described below).

The following operations can be performed on the linked storage from a web browser:
- Save, view, move, and delete data
- Publish URLs and share your data with people who do not use SQUID or ONION (download / upload)

 

ONION-object

ONION-object is an object storage service based on Cloudian HyperStore. It is an AWS S3 compatible object storage that is independent of SQUID and OCTOPUS, allowing easy data exchange with external clouds and other S3 compatible storage. For an overview of the service, and to consult about or apply for its use, please refer to this page.
Main features
- Save, browse, move, and delete data
- Operate objects and buckets through the S3 API (some operations can also be performed from a web browser)

Effective capacity: 950 TiB
* Capacity is planned to be expanded in stages.
Data protection method: erasure coding (4 data chunks + 2 parity chunks)
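With 4 data chunks plus 2 parity chunks, each object is split into 6 chunks, any 4 of which suffice to reconstruct it. A quick sketch of the arithmetic this implies (the derived overhead figures are ours, not from the service description):

```python
# Storage overhead implied by the 4+2 erasure-coding scheme of ONION-object.
DATA_CHUNKS = 4
PARITY_CHUNKS = 2
TOTAL_CHUNKS = DATA_CHUNKS + PARITY_CHUNKS

efficiency = DATA_CHUNKS / TOTAL_CHUNKS   # fraction of raw space usable (2/3)
overhead = TOTAL_CHUNKS / DATA_CHUNKS     # raw bytes per stored byte (1.5x)
tolerated_failures = PARITY_CHUNKS        # any 2 chunks may be lost

print(f"usable fraction: {efficiency:.3f}")
print(f"raw overhead:    {overhead:.1f}x")
print(f"950 TiB effective needs about {950 * overhead:.0f} TiB raw")
```

This sits between full replication (2x overhead for one tolerated failure) and no redundancy, which is the usual motivation for erasure coding in object storage.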

 

Notes

ONION-object is operated with the utmost care by the Cybermedia Center and NEC Corporation, the vendor of SQUID, but its data is not backed up. Data may therefore be lost due to system failure, unforeseen accidents, or natural disasters. The Cybermedia Center is not responsible for any data loss, so please back up all necessary files yourself. In addition, please note that ONION-object is a trial service and is scheduled to be terminated at the end of April 2026 unless Osaka University takes budgetary measures.

 

How to use

Please see the following page:
How to use ONION

 


Public relations materials

2007.01.19

SX-5

This system is no longer in service. (Service period: January 2001 to January 2007)

The SX-5/128M8 was a cluster-type supercomputing system consisting of eight NEC SX-5/16Af nodes, each equipped with 16 vector processors and 128 GB of main memory, interconnected through a dedicated 64 Gbps inter-node crossbar switch (IXS), 800 Mbps HiPPI, and 1 Gbps Gigabit Ethernet.
 

Overall system specifications

Theoretical computational performance: 1.2 TFLOPS
Number of nodes: 8
Number of CPUs: 128
Rated power consumption: 443.36 kVA

Per-node specifications

CPU: NEC SX-5/16Af
Number of CPUs: 16
Memory capacity: 128 GB
Memory bandwidth: 16 GB/s
Theoretical computational performance: 160 GFLOPS
Rated power consumption: 55.42 kVA