Original Article: https://eepower.com/industry-articles/orchestrating-white-and-gray-space-to-maximize-ai-compute/
Author: Wannie Park, PADO
In 1990, Tim Berners-Lee launched the World Wide Web, paving the way for the modern digital age. Yet initial adoption was remarkably slow: three years later, barely 2% of Americans were online. Contrast this with ChatGPT, an equally seismic invention that brought a powerful emergent technology into the mainstream almost overnight. Just over three years after its launch, 62% of Americans regularly use AI tools, according to the Pew Research Center.
The rapid diffusion of AI into nearly every facet of our professional and personal lives has been unprecedented and largely unexpected. While billions are invested in new data center construction—with trillions more forecasted to meet this insatiable demand—serious questions about compute and power capacity remain.
Put simply, new data centers cannot be built fast enough to keep pace. To meet the soaring demand for AI, the industry must get creative about "finding" capacity.
I use the word "finding" advisedly. The most significant lever for generating more capacity already exists, yet it remains largely ignored. It sits idle in facilities across the country, waiting to be activated to solve the current capacity crisis. The solution lies in addressing the underutilization of GPUs by bridging the gap between the "white space" of IT and the "gray space" of infrastructure, including cooling, power distribution, and airflow management. By coordinating these two realms, operators can unlock latent capacity without immediate CAPEX or additional power draw.

White Space vs. Gray Space
In modern data center design, the "white space"—the GPUs and IT equipment—typically receives top billing, while the "gray space" is relegated to a supporting, legacy role. This division is misguided; achieving peak efficiency requires both environments to operate as a single, integrated system.
While calling the gray space the "brains" of the facility is an oversimplification, the metaphor holds weight in the hunt for capacity. Intelligent, precision cooling is the primary lever for expanding GPU utilization. When these infrastructure systems are nimble and properly optimized, they can synthetically generate higher compute output without risking service-level violations. By bridging these two realms, operators can maximize revenue and profitability within their existing power and hardware constraints.
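To make that coordination concrete, consider the feedback pattern at its simplest. The Python sketch below is illustrative only: the telemetry reader, the actuator, and every threshold and wattage are assumptions rather than any vendor's real API. It raises a fleet-wide GPU power cap when the cooling plant reports thermal headroom and lowers it before a service-level breach rather than after one.

```python
"""
Minimal sketch of white/gray-space coordination: raise GPU power caps only
when the cooling plant has thermal headroom, and shed them when it does not.
The telemetry and actuator functions are hypothetical stand-ins for a real
DCIM/BMS feed and an out-of-band GPU management interface.
"""

import random
import time

GPU_CAP_MIN_W = 400       # conservative per-GPU power cap (watts), assumed
GPU_CAP_MAX_W = 700       # vendor-rated maximum per-GPU cap, assumed
SUPPLY_TEMP_LIMIT_C = 27  # assumed supply-air limit for this room
STEP_W = 25               # adjust caps gradually to avoid oscillation


def read_supply_air_temp_c() -> float:
    """Hypothetical gray-space telemetry: CRAH supply-air temperature."""
    return random.uniform(22.0, 29.0)  # stand-in for a live BMS reading


def set_gpu_power_cap(cap_w: int) -> None:
    """Hypothetical white-space actuator: apply a fleet-wide GPU power cap."""
    print(f"applying GPU power cap: {cap_w} W")


def control_loop(iterations: int = 5) -> None:
    cap_w = GPU_CAP_MIN_W
    for _ in range(iterations):
        headroom_c = SUPPLY_TEMP_LIMIT_C - read_supply_air_temp_c()
        if headroom_c > 1.0:
            # Cooling has margin: let the GPUs draw more power.
            cap_w = min(cap_w + STEP_W, GPU_CAP_MAX_W)
        elif headroom_c < 0.0:
            # Cooling is behind: throttle before an SLA breach, not after.
            cap_w = max(cap_w - STEP_W, GPU_CAP_MIN_W)
        set_gpu_power_cap(cap_w)
        time.sleep(0.1)  # a real loop would run on minutes-scale intervals


if __name__ == "__main__":
    control_loop()
```

A production system would consume far more signals (rack inlet temperatures, chiller load, power-train headroom) and actuate per node rather than fleet-wide, but the principle is the same: the gray space sets the envelope, and the white space fills it.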
The 'Time to Power' Bottleneck
Today, the entire data center ecosystem—including developers, EPCs, hyperscalers, and utilities—is scrambling to bring new facilities online while navigating an unpredictable and evolving regulatory landscape. The defining industry challenge has become "time to power."
As companies hit the wall in securing new energy sources, we expect a wave of M&A and recapitalization activity—a massive "land grab" to acquire capacity while the economics still align. While this consolidation is necessary, it is effectively a search for "recycled aluminum" in a market that needs a massive increase in raw output.
The most immediate solution, however, lies in the 5,000 to 6,000 existing data centers that already have access to power but remain underutilized. While these legacy sites may not yet match the 1.1 PUE (power usage effectiveness) of a modern hyperscale facility, they can be retrofitted for AI workloads far faster than a new greenfield project can be permitted and built.
To unlock this latent potential, operators must move toward a more efficient orchestration of "white" and "gray" spaces. By treating cooling, power, and compute as a single, converged system, operators can "balloon squeeze" energy from the infrastructure (gray space) and redirect it toward the compute (white space). This synthetically increases compute per megawatt, generating significantly more top-line value from every unit of energy without requiring additional power allocations.
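A rough, hypothetical calculation illustrates the scale of this squeeze. Since PUE is total facility power divided by IT power, the compute power available from a fixed grid allocation is simply the allocation divided by PUE. The 10 MW allocation and the PUE values below are assumptions chosen for illustration:

```python
# Illustrative only: how much IT (compute) power a fixed grid allocation
# yields at different PUE values. PUE = total facility power / IT power,
# so IT power = allocation / PUE.

ALLOCATION_MW = 10.0  # assumed fixed grid allocation

for pue in (1.5, 1.2, 1.1):
    it_mw = ALLOCATION_MW / pue
    print(f"PUE {pue:.1f}: {it_mw:.2f} MW available for compute")

# PUE 1.5: 6.67 MW available for compute
# PUE 1.2: 8.33 MW available for compute
# PUE 1.1: 9.09 MW available for compute
```

Moving such a site from a PUE of 1.5 to 1.2 frees roughly 25% more power for compute from the same allocation, which is exactly the compute-per-megawatt gain described above.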

Historically, the business model for these facilities was dictated by "Five 9s" uptime requirements: 99.999% availability, or roughly five minutes of downtime per year. In a world where compute is the primary limiting factor, these rigid legacy standards often act as an artificial cap, forcing GPUs to run underutilized in the name of compliance. Moving forward, the industry needs holistic solutions that bridge the white and gray spaces, shifting the goal from simple uptime to maximizing GPU performance and the top-line value of every allotted megawatt.
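For context, the downtime budget implied by an availability target is straightforward arithmetic, and it shows how steeply the budget tightens with each added nine. The snippet below is purely illustrative:

```python
# Annual downtime budget implied by "N nines" of availability.
# downtime = (1 - availability) * minutes in a year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.5%}): "
          f"{downtime_min:.1f} min/year of allowed downtime")

# 3 nines (99.90000%): 525.6 min/year of allowed downtime
# 4 nines (99.99000%): 52.6 min/year of allowed downtime
# 5 nines (99.99900%): 5.3 min/year of allowed downtime
```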
Fortifying for an AI Future
In just three years, consumer AI adoption has skyrocketed, and the trajectory is only accelerating. We are seeing strong revenue results from companies like OpenAI, whose five-year growth is capped primarily by access to power and compute. The sense of urgency surrounding infrastructure demand is understandable, particularly given the volatile regulatory landscape. With rules proliferating yet remaining ill-defined, AI leaders and data center operators cannot afford to wait for policy to solidify; they must secure capacity immediately.
It is logical that the industry's primary response has been the rapid construction of new, high-capacity data centers. While this infrastructure is necessary for long-term growth, a single-minded focus on new construction neglects the immediate challenges of today. The most effective solution isn't always a "shiny new object"; often, it is an existing asset waiting to be optimized. Much like retrofitting a vehicle for peak performance rather than replacing it, making intelligent adjustments to current infrastructure can yield superior results more cost-effectively and more quickly.
Ultimately, this is a challenge of physical limits. Available land and access to water are diminishing, and the queue for the power required for new developments is seemingly endless. However, this bottleneck is insurmountable only if we ignore the potential of holistic orchestration.
By synchronizing industrial and IT systems, AI leaders can meet exponential demand while simultaneously conserving capital and resources. The 5,000 to 6,000 existing data centers across the country are untapped gold mines; with the right orchestration tools, they can sustain the AI revolution for years to come.