SYSTEM ONLINE | LOC: CDMX-52785
TEMP: 21°C | PUE: 1.31 | UPTIME: 99.982%
INFRASTRUCTURE

AI FACTORIES & HPC

The first specialized AI Data Center in Latin America. Engineered for the massive computational demands of Large Language Models and Generative AI.

COMPUTE POWER

32.4 PETAFLOPS

Powered by 288 NVIDIA H100/B200 GPUs in an NVLink Spine configuration, delivering 16.2 PF (H100) to 32.4 PF (B200) of AI performance.
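A quick sanity check on the aggregate figures above: cluster throughput is simply per-GPU throughput times GPU count. The per-GPU numbers below are derived from the quoted cluster totals (16.2 PF and 32.4 PF over 288 GPUs), not official NVIDIA datasheet values.

```python
N_GPUS = 288  # 72 servers x 4 GPUs, per the spec sheet

def cluster_petaflops(per_gpu_tflops: float, n_gpus: int = N_GPUS) -> float:
    """Aggregate throughput in petaFLOPS for n_gpus identical GPUs."""
    return per_gpu_tflops * n_gpus / 1000.0  # 1 PF = 1,000 TF

# Per-GPU throughput implied by the quoted cluster totals (derived, not a spec):
h100_tflops = 16.2 * 1000 / N_GPUS  # 56.25 TF per H100
b200_tflops = 32.4 * 1000 / N_GPUS  # 112.5 TF per B200

print(cluster_petaflops(h100_tflops))  # 16.2
print(cluster_petaflops(b200_tflops))  # 32.4
```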

NETWORK FABRIC

128 TB/S

Spine-Leaf architecture with NVLink 4.0/5.0 providing 900 GB/s (H100) to 1,800 GB/s (B200) of total NVLink bandwidth per GPU. Intra-spine latency of just 300-500 ns.

EFFICIENCY

1.31 PUE

World-class energy efficiency significantly outperforming the industry average of 1.58. Optimized for sustainable high-density computing.
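PUE (Power Usage Effectiveness) is defined as total facility power divided by IT equipment power, so a PUE of 1.31 means roughly 24% of every watt goes to cooling and power-delivery overhead rather than compute. A minimal sketch (the 1 MW IT load is a hypothetical figure for illustration, not from this spec sheet):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_load_kw

def overhead_fraction(pue_value: float) -> float:
    """Share of total power consumed by non-IT overhead (cooling, losses)."""
    return 1.0 - 1.0 / pue_value

# Hypothetical 1 MW IT load at the facility's PUE of 1.31:
print(round(pue(1310.0, 1000.0), 2))      # 1.31
print(round(overhead_fraction(1.31), 3))  # 0.237 -> ~23.7% overhead
print(round(overhead_fraction(1.58), 3))  # 0.367 at the industry average
```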

TECHNICAL ARCHITECTURE

NVLINK SPINE CONFIGURATION

72 servers with 4 GPUs each, interconnected via 36 NVLink switches. This architecture creates a massive, unified memory pool of up to 9.2 TB HBM.
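The counts above tie together: 72 servers at 4 GPUs each yields the 288-GPU total quoted under COMPUTE POWER. A small sketch of that topology arithmetic (the servers-per-switch figure is an average derived from the counts, not a wiring diagram):

```python
SERVERS = 72
GPUS_PER_SERVER = 4
NVLINK_SWITCHES = 36

total_gpus = SERVERS * GPUS_PER_SERVER           # matches the 288 quoted above
servers_per_switch = SERVERS / NVLINK_SWITCHES   # 2 servers per switch, on average
gpus_per_switch = total_gpus / NVLINK_SWITCHES   # 8 GPUs per switch, on average

print(total_gpus, servers_per_switch, gpus_per_switch)  # 288 2.0 8.0
```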

LIQUID COOLING

Direct-to-chip liquid cooling technology capable of handling 142.1 TR (H100) to 190.2 TR (B200) thermal loads. Cold/Hot aisle containment for maximum efficiency.
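The thermal loads above are quoted in tons of refrigeration (TR); converting to kW makes them easier to compare against electrical load. The conversion factor 1 TR = 3.51685 kW (12,000 BTU/h) is standard.

```python
TR_TO_KW = 3.51685  # 1 ton of refrigeration = 12,000 BTU/h = 3.51685 kW

def tr_to_kw(tons: float) -> float:
    """Convert a cooling load in tons of refrigeration to kilowatts."""
    return tons * TR_TO_KW

print(round(tr_to_kw(142.1)))  # 500 kW for the H100 configuration
print(round(tr_to_kw(190.2)))  # 669 kW for the B200 configuration
```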

STORAGE & RETRIEVAL

2 PB of high-performance NVMe storage in RAID configuration + 10 PB distributed file system for archival. Designed for massive dataset ingestion.

[Image: Datacenter Interior]
FACILITY SPECS
Total Area: 3,500 m²
Technical Space: 1,200 m²
Power Capacity: Multi-MW
Certification: Tier III / LEED Gold

GLOBAL CONNECTIVITY

1 Tbps
AGGREGATE BANDWIDTH
100 Gbps
DIRECT CLOUD CONNECT
<50 ms
FAILOVER RECOVERY
AWS Direct Connect
Azure ExpressRoute
Google Cloud Interconnect

READY TO SCALE?

Secure your dedicated compute capacity today. Early access pricing available for enterprise partners.