Infrastructure

At the ITER Supercomputing Centre, we offer a robust and advanced infrastructure that integrates state-of-the-art computing, storage and communications capabilities. Designed to meet the most challenging demands of research and technology development, our facilities provide the performance, reliability and flexibility needed to power projects of any size. With a focus on efficiency and security, our solutions are ready to support and enhance the most innovative initiatives of public and private entities, providing an ideal environment for scientific and technological progress.

TeideHPC

General-purpose supercomputer

TeideHPC came into service in 2013 as a fundamental part of the ALiX project, an initiative to build infrastructure aimed at creating an industrial fabric around Information and Communication Technology (ICT) in Tenerife. TeideHPC appeared in the TOP500 list of the world's most powerful supercomputers, ranking 138th in the November 2013 edition.

For the implementation of this infrastructure, ITER received a total of 8.5 million euros under the INNPLANTA programme of the Ministry of Innovation and Science, charged to ERDF (FEDER) funds for the acquisition of scientific and technological infrastructure aimed at R+D+i.


Sandy Bridge Platforms

The computing core of the infrastructure is formed by 1028 nodes with Intel Sandy Bridge processors (a rough peak-performance estimate follows the list):

  • Two Intel Xeon E5-2670 processors, 8 cores / 16 threads @ 2.60 GHz, 16 MB cache
  • 32 GB DDR3-1600 RAM
  • 500 GB HDD
  • Advanced remote management
  • 2 Gigabit Ethernet ports
  • 1 InfiniBand QDR port
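
As a rough, illustrative check of scale (not an official figure), the theoretical double-precision peak of this partition can be estimated from the list above. The only added assumption is Sandy Bridge's 8 double-precision FLOPs per core per cycle with AVX; the node count, core counts and clock speed come from this section.

    # Back-of-the-envelope peak estimate for the Sandy Bridge partition.
    # Assumption: 8 double-precision FLOPs per core per cycle (AVX add + multiply).
    # Node count, sockets, cores and clock are taken from the list above.
    nodes = 1028
    sockets_per_node = 2
    cores_per_socket = 8
    clock_ghz = 2.60
    dp_flops_per_cycle = 8  # assumed Sandy Bridge AVX throughput

    per_node_gflops = sockets_per_node * cores_per_socket * clock_ghz * dp_flops_per_cycle
    total_tflops = per_node_gflops * nodes / 1000

    print(f"Per node:  {per_node_gflops:.1f} GFLOPS")  # ~332.8 GFLOPS
    print(f"Partition: {total_tflops:.1f} TFLOPS")     # ~342 TFLOPS theoretical peak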

Ivy Bridge Platforms

72 nodes based on Intel Ivy Bridge processor technology are also available:

  • Two Intel Xeon E5-2670 v2 processors, 10 cores / 20 threads @ 2.60 GHz, 16 MB cache
  • 32 GB DDR3-1600 RAM
  • 500 GB HDD
  • Advanced remote management
  • 2 Gigabit Ethernet ports
  • 1 InfiniBand QDR port

Fat Nodes

There is a fat node platform formed by three FUJITSU PRIMERGY RX500 servers with the following hardware setup:

  • Four Intel Xeon E5-4620 processors, 8 cores / 16 threads @ 2.20 GHz, 16 MB cache
  • 256 GB DDR3-1600 RAM
  • 2 × 300 GB HDD in RAID 1
  • Advanced remote management
  • 4 Gigabit Ethernet ports
  • 1 InfiniBand QDR port

AnagaGPU

General-purpose supercomputer equipped with GPUs, optimised for AI applications


4-GPU Nodes

AnagaGPU has 15 high-performance nodes, each equipped with 4 NVIDIA A100 GPUs with 40 GB of memory per GPU. Each node has 256 GB of RAM, delivering massive and efficient processing power for deep learning, complex simulations and big data analytics.


8-GPU Node

For applications that require even more processing power, AnagaGPU includes a single node with 8 NVIDIA A100 GPUs at 40 GB each and 512 GB of RAM. This node is designed to support extremely demanding workloads such as large-scale artificial intelligence models and data-intensive research projects.


Visualisation Nodes

In addition, AnagaGPU has 4 visualisation nodes, each with an NVIDIA T4 GPU and 256 GB of RAM. These nodes are optimised for advanced visualisation tasks, graphics processing and support for virtual and augmented reality environments, providing a versatile and powerful platform for various visual applications.


Features

With a network infrastructure based on InfiniBand EDR, AnagaGPU ensures high-speed, low-latency communication between nodes, optimising overall system performance. The total computational capacity of AnagaGPU reaches a theoretical peak performance (Rpeak) of 1.25 PFLOPS and a measured peak performance (Rmax) of 681.90 TFLOPS.
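
As an illustrative sanity check, the efficiency and per-GPU share implied by these figures can be worked out directly. The GPU count of 68 (15 nodes with 4 A100s plus one node with 8) comes from the sections above; attributing the whole of Rpeak to the GPUs is a simplifying assumption, since the host CPUs also contribute.

    # Illustrative arithmetic from the figures quoted in this section.
    rpeak_tflops = 1250.0   # theoretical peak performance (1.25 PFLOPS)
    rmax_tflops = 681.90    # measured peak performance (Rmax)
    gpus = 15 * 4 + 8       # 60 A100s in the 4-GPU nodes + 8 in the 8-GPU node

    efficiency = rmax_tflops / rpeak_tflops
    per_gpu_share = rpeak_tflops / gpus  # upper bound: CPUs also contribute to Rpeak

    print(f"Rmax/Rpeak efficiency:      {efficiency:.1%}")            # ~54.6%
    print(f"Peak per GPU (upper bound): {per_gpu_share:.1f} TFLOPS")  # ~18.4 TFLOPS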

Storage

The Supercomputing Centre offers a state-of-the-art shared storage infrastructure, with a unified block and file system providing 2.2 PB of net capacity. This robust system ensures fast and efficient access to large volumes of data, optimising performance for advanced research projects and business-critical applications while facilitating collaboration and the seamless flow of information in a secure, highly available environment.

Networking

The TeideHPC networking solution relies on a four-network topology, each network serving a specific purpose. Three networks are based on Ethernet technology and are used for out-of-band (OOB) management, in-band management and storage access. The fourth is an InfiniBand network used for inter-node communication in parallel processing.

Management Network

  • Shared for management and OOB traffic
  • 1 GbE
  • 20 GbE in the aggregation layer for HA
  • 40 GbE in the backbone layer for HA

Storage Network

  • Dedicated to storage traffic
  • 1 GbE
  • 20 GbE in the aggregation layer for HA
  • 40 GbE in the backbone layer for HA

Low-latency Network

  • InfiniBand QDR
  • 40 Gbps bandwidth
  • Blocking factor 5:1
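
As a simplified illustration of what the 5:1 blocking factor means: if every node behind an edge switch sends traffic across the core at the same time, the uplink capacity is one fifth of the aggregate downlink, so the worst-case per-node bandwidth drops from the 40 Gbps line rate to roughly 8 Gbps. The sketch below uses this simple edge-uplink model; real traffic patterns and routing will change the numbers.

    # Simplified worst-case model of a 5:1 blocking InfiniBand fabric.
    # Assumes all nodes on an edge switch send across the core simultaneously.
    line_rate_gbps = 40   # InfiniBand QDR link speed from the list above
    blocking_factor = 5   # downlink:uplink ratio at the edge layer

    worst_case_per_node = line_rate_gbps / blocking_factor
    print(f"Per-node bandwidth, all nodes sending across the core: {worst_case_per_node:.0f} Gbps")
    # Traffic that stays within a single edge switch still sees the full 40 Gbps.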

Connectivity

Project ALiX

The ALiX Project is a Council of Tenerife initiative led by the Instituto Tecnológico y de Energías Renovables (Technological Institute of Renewable Energy, ITER SA). Its main objective is to improve the competitiveness of the ICT sector in the Canary Islands, which it pursues through three pillars:

  • D-ALiX. The ALiX data centre project, which provides housing (colocation) services and a competitive offer of massive external communications.
  • Canalink. Responsible for laying and commissioning a neutral submarine cable system.
  • IT3. Responsible for the deployment of a terrestrial fibre-optic ring around the island of Tenerife.

RedIRIS

RedIRIS is the Spanish academic and research network that provides advanced communication services to the scientific community and national universities. It is funded by the Ministry of Economy and Competitiveness and is included in the Ministry’s map of Special Scientific and Technological Facilities (ICTS). It is managed by the Public Corporate Entity Red.es, which reports to the Ministry of Industry, Energy and Tourism.