For two decades, the Indian data centre industry has been built around raised-floor air cooling: efficient, well-understood, and adequate for compute densities of 5–15 kW per rack. That model is now hitting a wall.

Modern AI accelerators — NVIDIA’s H200, B200 and successors; AMD’s Instinct MI300/MI350 series — operate at thermal design powers exceeding 500W per chip. Packed into eight-GPU server platforms, a single rack can demand 60–120 kW of cooling capacity. Traditional CRAC/CRAH units cannot keep pace.
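The rack-level figures follow directly from per-chip power. A minimal back-of-envelope sketch, using illustrative numbers (the TDP, per-server overhead, and rack density below are assumptions, not vendor specifications):

```python
# Rough rack thermal-load estimate for a dense AI server rack.
# All figures are illustrative assumptions for the arithmetic.
GPU_TDP_W = 700           # a modern accelerator in the 500-1000 W class
GPUS_PER_SERVER = 8       # typical eight-GPU platform
SERVER_OVERHEAD_W = 3000  # CPUs, DRAM, NICs, fans, PSU losses (assumed)
SERVERS_PER_RACK = 8      # assumed rack density

server_w = GPU_TDP_W * GPUS_PER_SERVER + SERVER_OVERHEAD_W
rack_kw = server_w * SERVERS_PER_RACK / 1000

print(f"Per-server load: {server_w} W")
print(f"Per-rack load:   {rack_kw:.1f} kW")
```

With these assumptions a single rack already lands near 70 kW, inside the 60–120 kW band cited above and well beyond what a raised-floor CRAC loop designed for 5–15 kW racks can absorb.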

The investment wave

Industry estimates project ₹23,500 crore (~$2.8 billion) in cooling infrastructure spend over the next five years across Indian data centres, spread across three broad categories of equipment and services.

India’s overall data centre capacity is projected to expand from approximately 1.5 GW today to 3.5 GW by 2030, with the cooling sub-market growing at a 26.3% CAGR through that window.
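To put that growth rate in perspective, a 26.3% CAGR compounds quickly. A quick sketch, assuming a five-year window:

```python
# What a 26.3% CAGR implies over five years of compounding.
CAGR = 0.263
YEARS = 5

growth = (1 + CAGR) ** YEARS  # total multiple over the window
print(f"Cooling sub-market grows ~{growth:.2f}x over {YEARS} years")
```

That is roughly a 3.2x expansion of the cooling sub-market over the window, outpacing the ~2.3x growth in overall capacity (1.5 GW to 3.5 GW), which is consistent with cooling taking a larger share of each new watt deployed.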

What this means for enterprise buyers

For any organisation evaluating on-premise AI compute — and India’s tightening data localisation regime is pushing more workloads on-premise, not fewer — cooling is no longer a downstream procurement decision. It is upstream of GPU selection, rack design and floor planning.

Sheeltron’s AI infrastructure practice has been engaging with clients on this shift for over twelve months, integrating cooling design into the same workflow as GPU procurement, networking and managed operations. The engagements span DLC-ready rack designs for greenfield deployments and hybrid retrofits for established data centres.

The thesis is simple: a 500W GPU is only useful at 500W if the rest of the stack can move 500W of heat. Increasingly, only liquid can.
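The physics behind that claim is the steady-state heat balance Q = m_dot * c_p * dT: the mass flow of coolant needed scales inversely with its specific heat. A minimal comparison of water and air, assuming a 100 kW rack and a 10 K coolant temperature rise (both figures illustrative):

```python
# Heat-transport comparison: water vs. air for a 100 kW rack.
# Steady-state balance: Q = m_dot * c_p * dT (assumed operating point).
RACK_HEAT_W = 100_000   # 100 kW rack (assumption)
CP_WATER = 4186         # J/(kg*K), specific heat of water
CP_AIR = 1005           # J/(kg*K), specific heat of air
AIR_DENSITY = 1.2       # kg/m^3 at room conditions
DELTA_T = 10            # K coolant temperature rise (assumption)

water_kg_s = RACK_HEAT_W / (CP_WATER * DELTA_T)  # required water flow
air_kg_s = RACK_HEAT_W / (CP_AIR * DELTA_T)      # required air flow
air_m3_s = air_kg_s / AIR_DENSITY                # air volume flow

print(f"Water: {water_kg_s:.2f} kg/s (~{water_kg_s * 60:.0f} L/min)")
print(f"Air:   {air_m3_s:.1f} m^3/s of airflow")
```

Moving 100 kW takes on the order of 140 L/min of water, but more than 8 cubic metres of air per second for every rack, which is why air delivery becomes the binding constraint long before the chips do.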

Indian players responding

The shift is already producing measurable activity among Indian thermal and infrastructure specialists. Blue Star is expanding its data-centre segment with in-house liquid-cooling models slated for release within twelve months. Refroid Technologies has partnered with Technavious Solutions to build liquid-cooling infrastructure purpose-built for AI data centres in India. Both signals point to a maturing domestic supply chain — historically dominated by global vendors — for the categories that will carry the projected capacity expansion.

For enterprises planning AI deployments in 2026 and beyond, the right question is no longer "which GPU?" It is "which thermal envelope can my facility actually sustain?" Sheeltron's position is that those two answers must be designed together — and that the partners delivering them must own the lifecycle from rack design through certified end-of-life.

Source: tradebrains.in · with additional context from CRN Asia, Whalesbook, and Cervicorn Consulting industry reports