Nvidia H200: E2E Networks becomes first company in India to procure 256 Nvidia H200 GPUs

NSE-listed E2E Networks on Thursday announced that it was the first company to bring NVIDIA H200 Tensor Core GPUs to the Indian market.

“We are offering leading high-performance, scalable, and resilient infrastructure powered by NVIDIA H200 GPUs. The H200 GPU is designed to accelerate the most demanding AI and HPC workloads with game-changing performance and memory capabilities. This will enable businesses to tackle more complex AI models and drive innovation across startups, MSMEs and large businesses alike,” said Tarun Dua, co-founder and managing director of E2E Networks.

“The company has raised Rs 420 crore from the market for procurement of the GPUs. It has so far received 256 H200 GPUs with more GPUs being procured soon,” Kesava Reddy, chief revenue officer of E2E, told ET.

E2E Cloud’s flagship product, TIR – an AI development studio – will be the first in India to feature H200 GPUs, giving developers access to cutting-edge infrastructure.

This will allow developers to train foundation AI models. The company expects the H200 GPUs to drive AI development in India, particularly the training and inference of large language models (LLMs) and large vision models (LVMs).


E2E Cloud is used by startups, SMEs and large businesses from India, the Middle East, the Asia-Pacific region, and the US.

Vishal Dhupar, managing director, Asia South, NVIDIA, said, “E2E’s expansion of its infrastructure to include NVIDIA H200 GPUs is helping to build the foundation for India’s AI-powered future, bringing powerful cloud services to enterprises and startups across the region.”

The NVIDIA H200 GPU cluster, interconnected with NVIDIA Quantum-2 InfiniBand networking, is engineered to advance generative AI.

It offers 4.8 TB/s of memory bandwidth and 141 GB of GPU memory capacity, and delivers up to 1.9X higher inference performance compared with NVIDIA H100 Tensor Core GPUs.

Designed to meet the growing demand for real-time AI inference, complex simulations, and other high-compute tasks, the H200 is the first GPU built with HBM3e memory, which offers higher bandwidth and greater capacity than earlier HBM generations.
