“We are offering leading high-performance, scalable, and resilient infrastructure powered by NVIDIA H200 GPUs. The H200 GPU is designed to accelerate the most demanding AI and HPC workloads with game-changing performance and memory capabilities. This will enable businesses to tackle more complex AI models and drive innovation across startups, MSMEs and large businesses alike,” said Tarun Dua, co-founder and managing director of E2E Networks.
“The company has raised Rs 420 crore from the market for procurement of the GPUs. It has so far received 256 H200 GPUs with more GPUs being procured soon,” Kesava Reddy, chief revenue officer of E2E, told ET.
E2E Cloud’s flagship product, TIR – an AI development studio – will be the first in India to feature H200 GPUs, giving developers access to cutting-edge infrastructure.
This will allow developers to train foundational AI models. The company anticipates that the H200 GPU will become a key driver of AI training in India, particularly for the training and inference of large language models (LLMs) and large vision models (LVMs).
E2E Cloud is used by startups, SMEs and large businesses from India, the Middle East, the Asia Pacific, and the US.
Vishal Dhupar, managing director, Asia South, NVIDIA, said, “E2E’s expansion of its infrastructure to include NVIDIA H200 GPUs is helping to build the foundation for India’s AI-powered future, bringing powerful cloud services to enterprises and startups across the region.”
The NVIDIA H200 GPU cluster, interconnected with NVIDIA Quantum-2 InfiniBand networking, is engineered to advance generative AI.
It offers 4.8 TB/s of memory bandwidth and 141 GB of GPU memory capacity, and delivers up to 1.9X higher inference performance compared with NVIDIA H100 Tensor Core GPUs.
Designed to meet the growing demand for real-time AI inference, complex simulations, and other compute-intensive tasks, it is the first GPU to use HBM3e memory, which offers higher bandwidth and greater capacity than earlier HBM generations.