=== Facilities ===

The Pinnacle Phase I (2019), Phase II (2022), and Phase III (2024) clusters are the major compute resources at AHPCC.

Pinnacle Phase I consists of 106 compute nodes, mostly Intel Skylake based, with a total of 26 NVidia GPUs, mostly Volta V100. Phase II consists of 79 AMD Zen based compute nodes with a total of 74 NVidia GPUs, mostly Ampere A100. Phase III consists of 36 AMD Zen based compute nodes with 4 NVidia L40 GPUs. An awarded NSF CC* grant will augment Phase III with non-GPU nodes in 2025.

Total floating point capacity is more than one PetaFlops (one quadrillion 64-bit floating point operations per second), mostly contributed by the GPUs. About half of the compute nodes are "condo nodes" funded by research groups and reserved for priority use by those groups; they include specialty nodes such as 4 TB high-memory nodes and quad-GPU nodes.
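
As a rough back-of-the-envelope illustration of where that figure comes from, the sketch below multiplies the GPU counts listed above by assumed nominal per-card peak FP64 rates. The per-GPU TFLOPS values are approximations drawn from vendor spec sheets, not measurements on these systems, and CPU contributions are ignored:

<code python>
# Back-of-envelope estimate of aggregate GPU FP64 capacity.
# GPU counts come from the facility description above; per-GPU peak
# FP64 TFLOPS values are assumed nominal vendor figures (approximate).
# The four L40 GPUs in Phase III are omitted here, since they are
# oriented toward single-precision and AI workloads.
gpus = {
    "V100 (Pinnacle I)":  {"count": 26, "peak_fp64_tflops": 7.8},
    "A100 (Pinnacle II)": {"count": 74, "peak_fp64_tflops": 9.7},
}

total_tflops = sum(g["count"] * g["peak_fp64_tflops"] for g in gpus.values())
print(f"Estimated GPU FP64 peak: {total_tflops:.0f} TFLOPS "
      f"(~{total_tflops / 1000:.2f} PetaFlops)")
# Roughly 0.9 PetaFlops from the V100 and A100 cards alone; CPU nodes
# (and the A100 FP64 tensor cores) account for the rest of the total.
</code>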

These systems are interconnected with an NVidia Infiniband network fabric with a 400 Gb/s backbone and node connections of 40 to 200 Gb/s. Parallel storage is served over Infiniband as a tiered system of about 400 TB of high-speed NVMe flash for active data and about 4.5 PB of hard disk, with a further 4.5 PB of hard disk serving as a nearline archive.

Older computers are used for less demanding tasks: about 100 pre-Skylake condo nodes plus about 150 nodes of the soon-to-be-retired “Trestles” cluster.

A "Science DMZ" connects these systems at 100 Gb/s to the ARE-ON state and Internet2 national research networks and to the UAMS HPC “Grace” system. Also on the Science DMZ are nodes of, and connections to, regional and national grids such as PRP/NRP Nautilus, the Great Plains Network, and the Open Science Grid. A project is underway to make AHPCC and UAMS HPC functionally a single system available to all Arkansas research users.