AI FACTORIES & HPC
The first specialized AI Data Center in Latin America. Engineered for the massive computational demands of Large Language Models and Generative AI.
COMPUTE POWER
Powered by 288 NVIDIA H100/B200 GPUs in an NVLink spine configuration, delivering 16.2 PF (H100) to 32.4 PF (B200) of AI performance.
NETWORK FABRIC
Spine-leaf architecture with NVLink 4.0/5.0 providing 900 GB/s (H100) or 1,800 GB/s (B200) of bidirectional bandwidth per GPU. Intra-spine latency of just 300–500 ns.
EFFICIENCY
World-class power usage effectiveness (PUE), well below the data-center industry average of 1.58. Optimized for sustainable high-density computing.
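PUE is simply total facility power divided by IT equipment power. A minimal sketch of the metric, using illustrative numbers (not this facility's measured load):

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# Lower is better; 1.0 would mean every watt goes to compute.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# Illustrative example: a site drawing 1,580 kW total for 1,000 kW of IT load
# sits exactly at the 1.58 industry average cited above.
print(pue(1580.0, 1000.0))  # 1.58
```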
TECHNICAL ARCHITECTURE
NVLINK SPINE CONFIGURATION
72 servers with 4 GPUs each, interconnected via 36 NVLink switches. This architecture creates a massive, unified memory pool of up to 9.2 TB of HBM.
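The headline GPU count follows directly from the per-server figures, and an aggregate fabric bandwidth can be sketched from them too. A quick sanity check, assuming the 900/1,800 GB/s figures above are per-GPU bidirectional NVLink bandwidth:

```python
# Sanity-check sketch of the cluster figures quoted on this page.
servers = 72
gpus_per_server = 4
total_gpus = servers * gpus_per_server
print(total_gpus)  # 288, matching the compute figure above

# Assumption: 900 GB/s (NVLink 4.0, H100) / 1,800 GB/s (NVLink 5.0, B200)
# is bidirectional bandwidth per GPU.
nvlink_gb_s = {"H100 (NVLink 4.0)": 900, "B200 (NVLink 5.0)": 1800}
for gpu, bw in nvlink_gb_s.items():
    aggregate_tb_s = total_gpus * bw / 1000
    print(f"{gpu}: {aggregate_tb_s:.1f} TB/s aggregate NVLink bandwidth")
    # H100: 259.2 TB/s; B200: 518.4 TB/s
```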
LIQUID COOLING
Direct-to-chip liquid cooling rated for thermal loads of 142.1 TR (H100) to 190.2 TR (B200), where TR is tons of refrigeration. Cold/hot aisle containment for maximum efficiency.
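Tons of refrigeration convert to electrical-style power units via the standard factor 1 TR = 3.517 kW, which puts the quoted capacities in more familiar terms:

```python
# Converting the quoted cooling capacity from tons of refrigeration (TR) to kW.
# 1 TR = 12,000 BTU/h = 3.517 kW (standard conversion).
KW_PER_TR = 3.517

for config, tr in {"H100": 142.1, "B200": 190.2}.items():
    kw = tr * KW_PER_TR
    print(f"{config}: {tr} TR ≈ {kw:.0f} kW of heat rejection")
    # H100: ~500 kW; B200: ~669 kW
```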
STORAGE & RETRIEVAL
2 PB of high-performance NVMe storage in a RAID configuration, plus a 10 PB distributed file system for archival data. Designed for massive dataset ingestion.

GLOBAL CONNECTIVITY
READY TO SCALE?
Secure your dedicated compute capacity today. Early access pricing available for enterprise partners.