Data Centers in the Age of AI with Jay Park

Jay Park, Chief Development Officer of Digital Edge, discusses how Asia is driving growth and pioneering innovation in data centre technology.

Fresh out of the studio, Jay Park, Chief Development Officer of Digital Edge, explores the rapidly transforming landscape of data centres in the Asia-Pacific region. Kicking off with the story of Jay’s remarkable career in building cutting-edge data centres, we dive into the explosive growth fuelled by AI and the innovative cooling and energy solutions Digital Edge is pioneering to address environmental challenges. Jay also examines the impact of advanced AI chips on next-generation data centre engineering and shares his vision of what great looks like in designing efficient and sustainable infrastructure in one of the world’s fastest-growing markets.


"So if you look at this, according to this recent Structure Research report, the data center industry will spend 100 billion dollars, and about 50 percent of that growth will be happening in APAC. So, this is massive growth. If you look at the data centers, they have to be built where people are to better support them. But we have a new kid on the block. It's called AI servers, and it's something I have never experienced before in any industry, and this is massive. It'll do a lot of things, but it has to do data processing. So you cannot have all these data centers in, let's say, North America, have people in the APAC area grab that data, send it back to the U.S. or North America, do all the processing, and then send it out to APAC. I just don't see that happening. So, they're building the data centers closer to the users, where people are. And then you do all the processing there. The growth is going to be gigantic, and that's what we are seeing today." - Jay Park

Jay Park, Chief Development Officer from Digital Edge (LinkedIn)

Here is the edited transcript of our conversation:

Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong, and one of the key components needed for generative AI is data centre infrastructure. With me today, Jay Park, Chief Development Officer from Digital Edge, will help me understand the data centre landscape in the age of generative AI. Jay, welcome to the show.

Jay Park: Good morning, Bernard.

Bernard Leong: It’s a pleasure to have someone with your expertise on the show to help us decipher the data centre landscape. As always, we love hearing the origin stories of our guests, so could you share how your career began?

Jay Park: My career began in 1986 as an electrical engineer, primarily working in the power plant and semiconductor industries. I supported these sectors until 1999 when much of the semiconductor industry moved out of the U.S. At that time, I had the fortunate opportunity to enter the data centre industry, where I've worked ever since—spanning the last 25 years.

Bernard Leong: How did you eventually become the Chief Development Officer of Digital Edge? I also understand you had a stint with Facebook.

Jay Park: Yes, after my time with Facebook, I was planning to retire in Korea. But as a native Korean, I felt something was missing from my career—I wanted to give back to Asia in a meaningful way. Then some former colleagues approached me with the idea to start a company, which felt like the perfect opportunity to make a tangible impact in the Asian market. I truly love what I do now.

Bernard Leong: Given such a long, tenured career journey, what are the interesting lessons that you can share with my audience?

Jay Park: The data centre industry is, in many ways, similar to the car industry. Every year, new car models come out with innovative features, but in data centres, the pace of technological advancement has been much slower.

If you look at the server market, however, server technology has been changing rapidly. Back in 1999 and 2000, during the dot-com boom, we were seeing around 1 to 2 kilowatts per cabinet. Over time, this capacity grew gradually, eventually stabilizing at around 8 to 12 kilowatts per cabinet. But in just the last 18 months, we've seen a leap from around 10 kilowatts per cabinet to an astounding 130 kilowatts per cabinet. And this upward trend doesn’t seem to be slowing—there are rumours of even higher-density cabinets emerging. The data centre industry and its construction practices need to catch up, understanding the demands inside the facility so we can design infrastructures that will endure for the next 30 years.

Bernard Leong: Wow, the energy demands are increasing so rapidly that even the infrastructure side needs constant innovation to keep up with the growing global demand. This brings us to today’s main topic: data centres in the era of AI. Since you’re here, could you provide an overview of Digital Edge, along with the company's mission and vision as a leading data centre platform?

Jay Park: First, we don’t aim to be seen as just a data centre co-location company; I prefer to think of us as a data centre technology company. There are many challenges today, especially around power efficiency and reducing water usage.

We also face issues with power capacity, and there are numerous moving parts in the industry right now. As a company, our mission is to bridge the digital divide between developed and developing countries. We’re working to close that gap so that everyone can access the same level of digital infrastructure.

With financial backing from Stonepeak—totalling one billion dollars—we’re fortunate to have the resources to expand our data centre network freely. Since our founding in 2020, we’ve grown to 17 data centres across the Asia-Pacific region with over 400 employees. Looking forward, we aim to achieve 800 megawatts of capacity by 2028, though my personal ambition is to reach one gigawatt. We’re scaling quickly and innovating within the data centre industry, positioning ourselves as leaders. Our hope is that others in the field will watch our progress and follow our lead.

Bernard Leong: To set the context, what is the total market opportunity that Digital Edge is specifically targeting in the data centre business across the Asian market? From my research, I understand that Digital Edge operates more globally.

Jay Park: According to a recent Structure Research report, the data centre industry is projected to spend $100 billion, with about 50% of that growth occurring in APAC. This is massive growth. Data centres need to be built close to where people are to support them effectively, especially with the emergence of AI servers, which are unlike anything I’ve seen before in this industry.

AI servers require extensive data processing, and it’s simply not practical to process all this data in North America, and then send it back to users in APAC. Data centres are therefore being built closer to users in the APAC region, where processing can occur locally. The growth here is going to be immense, and that’s exactly what we’re witnessing today.

Bernard Leong: I completely agree, especially given my background as the former Head of Artificial Intelligence and Machine Learning for Amazon Web Services in Southeast Asia. Hyperscalers like AWS are increasingly active in the region, with data centres already established in Malaysia and Indonesia, and with Thailand and Vietnam likely coming online soon. Since I typically work on the end-user side, helping customers utilize cloud infrastructure for AI applications that rely on these data centres, I’m particularly interested in understanding the supply chain involved. Could you provide a high-level overview of the data centre supply chain—from initial planning and construction to full operational status?

Jay Park: In the past, data centres were typically much smaller. A campus with 10, 20, or even 30 megawatts was considered large. But with the rise of AI, no one is interested in a 30-megawatt campus anymore. Now, projects often start with 5 to 10 megawatts, with plans to scale up to 50 or even 100 megawatts, so the scale has grown exponentially.

Since the COVID pandemic, however, the supply chain has been significantly impacted. Construction demand surged, but manufacturing companies struggled with part availability, causing major equipment deliveries to stretch to as long as two years. To address these delays, we’re ordering long-lead items much earlier—sometimes even before breaking ground—and reserving manufacturing slots where possible.

We’re also using skid-mounted equipment pads, which allow us to assemble interconnected equipment on a single platform. While the building’s foundation and structure are under construction, we can prepare these equipment skids in parallel, cutting down on project time.

To streamline procurement further, we’ve adopted a standardized, 'cookie-cutter' design, where we repeat the same layout and equipment across projects. This way, procurement starts well before construction begins, helping us stay on schedule despite supply chain challenges.

Bernard Leong: If I can dig deeper into this: you mentioned that early data centres required around 10 kilowatts per cabinet, but today you're aiming for a capacity of 1 gigawatt. How does this shift impact the structural and energy requirements for data centres on such a large scale?

Jay Park: The difference is massive. With power needs at that level, you can’t simply rely on utility feeders; you have to build an on-site substation, bringing in high-voltage power and then stepping it down. A substation requires a larger land area, which makes it nearly impossible to build these large-scale data centres in metropolitan areas—not only due to power limitations but also because residents don’t want high-voltage lines running through their neighbourhoods. So, large data centres often need to be built further out.

Bernard Leong: That’s why big tech companies like Google and Amazon are exploring options like SMRs (small modular reactors) or situating data centres right next to power plants to get a direct energy feed. Microsoft is even considering options like Three Mile Island.

Jay Park: Exactly. Many people don’t realize just how much power 1 gigawatt represents—it can supply about 250,000 households or a million people, like a large city. Managing this kind of power demand with only a few utility feeders just isn’t feasible, so building substations is essential.

Another challenge is the nature of power consumption in AI-driven data centres. During AI processing, power draw can remain steady but then spike unpredictably for a few seconds before settling again, which complicates capacity planning. Data centre users must buy enough capacity to handle these surges, whether or not they consistently use that power.

To address this, we’re developing an external power-shaving system. This setup would allow users to purchase less peak capacity by smoothing out demand spikes, benefiting both users—who save on excess capacity—and utilities, which benefit from a steadier power draw.
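Conceptually, the power shaving Jay describes clips short demand spikes against a contracted ceiling and serves the excess from fast storage, which recharges once the load settles. A minimal Python sketch with purely hypothetical numbers (the profile, ceiling, and storage size are illustrative, not Digital Edge figures):

```python
# Sketch of peak shaving: grid draw is capped at a contracted ceiling;
# energy above the ceiling comes from storage, which recharges whenever
# the load drops back below the ceiling. All figures are hypothetical.

def shave(load_kw, ceiling_kw, storage_kwh, step_h=1 / 3600):
    """Return the grid draw for a per-second load profile after shaving."""
    soc = storage_kwh                      # state of charge, starts full
    grid = []
    for p in load_kw:
        if p > ceiling_kw and soc > 0:
            discharge = min(p - ceiling_kw, soc / step_h)
            soc -= discharge * step_h      # storage covers the spike
            grid.append(p - discharge)
        else:
            headroom = max(ceiling_kw - p, 0)
            recharge = min(headroom, (storage_kwh - soc) / step_h)
            soc += recharge * step_h       # refill without exceeding ceiling
            grid.append(p + recharge)
    return grid

# A steady 10 MW draw with a 3-second AI-training spike to 13 MW.
profile = [10_000] * 10 + [13_000] * 3 + [10_000] * 10
shaved = shave(profile, ceiling_kw=10_500, storage_kwh=50)
print(max(profile), max(shaved))  # the 13,000 kW peak is held at the ceiling
```

The utility then sees at most the ceiling (10,500 kW here) instead of the full 13 MW spike, which is the "steadier power draw" benefit Jay mentions.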

Bernard Leong: How does the concept of modular data centre design evolve, and what advantages does it offer for building efficient and scalable infrastructure, from your perspective?

Jay Park: Many companies are now exploring modular data centres, and I have experience constructing them with a previous company. The main benefits are speed and efficiency. You can build the core of the data centre faster and with less material since everything is assembled in the factory, which also enhances quality control. However, there are some limitations. Transporting these modules from the factory to the site comes with height restrictions, which can limit the design.

One challenge with modular data centres is flexibility over time. If you build a traditional data centre, you can future-proof it by incorporating higher ceilings or additional space, anticipating increases in density. The shell or exterior construction is relatively inexpensive compared to the mechanical, electrical, and plumbing (MEP) systems, so investing in a bit more space initially can pay off. Density demands are only increasing—rumour has it that we could soon see densities as high as 300 kilowatts per cabinet. NVIDIA’s current Blackwell cabinet already draws 130 kilowatts, and as you can imagine, that requires extensive cabling, fibre, and even liquid cooling, which fills up the ceiling and infrastructure quickly.

Modular data centres, while efficient, lock you into a set configuration, making it challenging to adapt or expand later as density increases. So, while the modular approach has clear advantages—faster construction, material efficiency, and higher quality from factory assembly—I am cautious about the lack of flexibility for future growth.

Bernard Leong: With the rising demand for faster data processing and delivery, how does Digital Edge balance efficiency, scalability, and cost in data centre design? As a data centre technology company, I imagine there are some necessary trade-offs.

Jay Park: For us, it starts with understanding exactly what’s going into the data centre. The more we know about the internal components, the better we can design the infrastructure around them. Density is increasing significantly, so we’re moving well beyond air cooling.

Currently, air cooling works through a water-to-air system: water is chilled, passed through coils, and then cools the air. But air cooling is becoming less viable, and we’re shifting towards liquid cooling, which is far more energy-efficient. Liquid cooling doesn’t require ultra-cold water; in fact, we can use local water temperatures to cool components like GPUs directly. By eliminating the need to convert water to air, our power usage effectiveness (PUE) improves, boosting overall efficiency.

When we talk about liquid cooling, I’m referring to more advanced systems than traditional setups. Today's liquid cooling typically uses a chilled water loop and a condenser water loop, where the condenser water is cooled in an open cooling tower. We’ve moved to a closed-loop system, using the condenser water directly to cool the cabinets, which removes the need for additional energy transformations.

Bernard Leong: That’s much more efficient.

Jay Park: Exactly. Reducing energy conversions, whether electrical or mechanical, is key to optimizing efficiency. Fewer conversions mean less energy loss, so that’s where our focus is moving forward.
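For readers less familiar with the metric: PUE is total facility power divided by IT power, so every watt spent on cooling or conversion overhead pushes it above the ideal of 1.0. A toy comparison with assumed (illustrative) overhead figures shows why cutting conversions moves the number:

```python
# PUE = total facility power / IT equipment power.
# Hypothetical overheads for the same 10 MW IT load: a conventional
# water-to-air plant vs. a closed-loop liquid system with fewer conversions.

def pue(it_kw, cooling_kw, power_loss_kw, other_kw=0):
    total = it_kw + cooling_kw + power_loss_kw + other_kw
    return total / it_kw

air    = pue(it_kw=10_000, cooling_kw=3_500, power_loss_kw=800)  # higher overhead
liquid = pue(it_kw=10_000, cooling_kw=1_200, power_loss_kw=500)  # fewer conversions
print(round(air, 2), round(liquid, 2))  # 1.43 vs 1.17
```

With these assumed overheads the liquid path lands below 1.2, the range Jay cites for Digital Edge's Manila and Singapore deployments.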

Bernard Leong: That's a great point. What's one thing you know about data centre engineering in Asia-Pacific, or even globally, that few people are aware of?

Jay Park: When I entered the Asia-Pacific market about four and a half years ago, I noticed they were using very outdated technology. Many companies still cling to old methods, and it’s a challenge to break that barrier. At Digital Edge, we built our first greenfield data centre in Manila and deployed an innovative cooling technology, achieving a PUE below 1.2. In a hot, humid climate like Manila’s, where achieving this efficiency level is almost unheard of, we’re showing the industry what’s possible.

We’re also transparent about our data—we’re sharing it with the industry instead of keeping it hidden. This openness is essential for progress. Personally, I’m not excited by simply following in others' footsteps; like the auto industry, where each new model brings improvements, I believe data centres need that same mentality shift, especially here in the Asia-Pacific region.

Bernard Leong: Could the approach you took in Manila be applied further south in similarly tropical regions?

Jay Park: Yes, definitely. Our technology, called StatePoint Liquid Cooling (SPLC), was developed with Facebook around a decade ago. We rigorously tested it, even in harsh environments, and Facebook has since deployed it in their Singapore data centre, achieving a PUE of 1.19—similar to what we achieved in Manila.

To explain SPLC simply, I like to use an analogy. When I was a child in Korea, we didn’t have refrigerators, so my family would store drinking water in a clay pot. The clay’s porous structure allowed for slight evaporation, which kept the water cooler than tap water. SPLC works similarly, with a membrane that acts like a clay pot, containing tiny pores that allow evaporation without leakage. When hot air passes through, it cools the water, creating a significant temperature drop.

In Manila, SPLC performed so well during commissioning that we struggled to even turn on the chillers—the SPLC handled nearly all the cooling. We keep a 'pony chiller' on standby to assist if SPLC needs support, but in our tests, SPLC alone maintained our target temperatures, achieving a PUE of 1.193.

Bernard Leong: Another fascinating aspect of data centres is the role of chips, especially given the rapid advancements in AI. Since I work in the AI space, could you explain how NVIDIA’s AI chips, or others like Tenstorrent or Groq, impact data centres? How critical are they to your business?

Jay Park: As I mentioned earlier, understanding what goes into the data centre is essential. The rise of AI chips like NVIDIA’s has been extremely disruptive, especially in terms of power density. Previously, we expected power demands to increase gradually, from 8 to 10 kW per cabinet to perhaps 20 or 30 kW per cabinet over time. But instead, the demand shot up—from 10 kW to an astonishing 130 kW per cabinet almost overnight. This has left the industry scrambling to adapt.

There’s also ongoing debate around cooling methods. For instance, some setups require PG 25 cooling, while others need direct water cooling. NVIDIA manufactures the chips, but their servers are built by a variety of companies, each with different cooling specifications. This means one client might require direct cooling while another prefers PG 25, each with distinct operating temperatures.

In addition, transitioning from existing water-to-air cooling systems to these more complex methods adds another layer of complexity. Ideally, we’d have a standardized operating temperature, but with different manufacturers specifying varying temperatures, that’s unlikely. So we’re constantly navigating these challenges to accommodate different setups within a single data centre environment.

Bernard Leong: Right, so with each new chip generation, there are different requirements for energy efficiency and cooling. It seems like every advancement changes the cooling approach needed for data centres.

Jay Park: Exactly. In the past, even as power density increased, we could still manage cooling with air. But now, air cooling alone can’t handle the heat generated by AI servers, so we’re scrambling to adapt. Today, we see many companies offering different types of Cooling Distribution Units (CDUs), each with their unique methods. This creates an additional challenge for colocation and hyperscaler providers, who need to remain flexible enough to support various cooling requirements for different customers.

Bernard Leong: Given these advancements, what has changed your perspective on cooling technologies in data centre engineering over the past 12 months?

Jay Park: For me, it’s clear that we need to move to liquid cooling systems. I strongly recommend a closed-loop system, whether through StatePoint Liquid Cooling (SPLC) or hybrid cooling. This approach uses condenser water directly to cool the AI chips, which I now see as essential for efficiently managing the intense heat generated by modern AI servers.

Bernard Leong: You've been at the forefront of data centre design, from your time at Facebook to now at Digital Edge. What are some of the key principles for designing data centres for maximum efficiency?

Jay Park: One of the most important principles is minimizing energy transformation steps. For instance, in electrical systems, the greatest energy loss often occurs at the UPS (Uninterruptible Power Supply). Converting AC to DC and back to AC can lead to energy losses of 6–10%. Even with high-efficiency UPS options available today, there’s still substantial energy loss. The same applies to mechanical systems: cooling processes often involve several conversions, like changing water to air and back again, and each transformation step reduces overall efficiency. Reducing these steps is essential for achieving optimal efficiency.
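The point about transformation steps can be made concrete: stage efficiencies multiply, so every added conversion compounds the loss. A small sketch with assumed stage efficiencies (the 97%/96% rectifier and inverter figures are illustrative, chosen to land in the 6-10% range Jay quotes):

```python
# Overall efficiency of a power path is the product of its stage
# efficiencies, so fewer conversion stages means less compounded loss.
from math import prod

# Double-conversion UPS: AC -> DC (rectifier) -> AC (inverter).
double_conversion = prod([0.97, 0.96])   # assumed stage efficiencies
# UPS-less distribution: a single assumed 99.5%-efficient transfer stage.
direct = prod([0.995])

loss_ups    = 1 - double_conversion      # roughly 7% lost
loss_direct = 1 - direct                 # roughly 0.5% lost
print(f"{loss_ups:.1%} vs {loss_direct:.1%}")
```

The same multiplication applies on the mechanical side: each water-to-air or air-to-water hand-off is another factor below 1.0 in the chain.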

Bernard Leong: To achieve this efficiency, what innovative approaches have you implemented at Digital Edge to reduce energy consumption while maintaining performance?

Jay Park: I’ve focused on two major innovations. During my time at Facebook, I developed a UPS-less power distribution system, which eliminated the need for a centralized UPS. This design, now widely used by large companies, is part of the Open Compute Project (OCP) server design.

At Digital Edge, we’ve deployed the StatePoint Liquid Cooling (SPLC) system in hot, humid environments like Manila and Jakarta. This system has enabled us to achieve a PUE below 1.2—a level of efficiency that’s virtually unheard of in the APAC data centre industry.

Bernard Leong: Water usage in cooling systems and overall energy consumption are significant environmental challenges for data centres. What solutions and technologies is Digital Edge exploring to make meaningful progress toward ESG goals?

Jay Park: I’m glad you asked because this is a subject I feel passionately about. Addressing water usage in data centres requires a holistic approach. Often, we only consider water used directly by the data centre, but we need to look upstream—specifically at the water required for power generation. According to IEEE, producing just one kilowatt-hour of energy (enough to keep a 100-watt light bulb on for ten hours) requires about 95 litres, or 25 gallons, of water.

Now, think about a 100-megawatt data centre campus, which is a fairly standard size today. If we improve our Power Usage Effectiveness (PUE) from, say, 1.5 to 1.4, we could save enough water annually to fill 1,500 Olympic-sized swimming pools. This level of water savings underscores the importance of incremental PUE improvements—not only for energy efficiency but also for water conservation.

Imagine lining up 1,500 Olympic-sized swimming pools side by side—it would look like a lake, right?

Bernard Leong: I can see why you’re moving toward a closed-loop system to prevent water waste.

Jay Park: Exactly. While complete elimination of water waste isn’t possible, systems like SPLC (StatePoint Liquid Cooling) can reduce water usage by up to 40%, depending on the location. For a 100-megawatt data centre, that’s equivalent to saving nearly half of those 1,500 Olympic pools’ worth of water each year. That’s a significant amount.

And remember, the type of cooling system we select can have an even greater impact. SPLC, for instance, delivers substantial savings. But water usage isn’t only a concern at the data centre level; it’s also a huge factor in power generation. Improving PUE has a ripple effect on water conservation, especially upstream at power plants, which is often where the biggest water savings can be achieved.

Bernard Leong: There have been so many advancements in data centre technology recently. Which ones excite you the most, and how do you see these technologies shaping the future of the industry?

Jay Park: Water-saving technologies are incredibly promising, and while SPLC (StatePoint Liquid Cooling) is a great innovation, I believe it’s only the beginning. I hope the industry continues to build on this foundation or even develop better solutions. Beyond cooling systems, I’m also really excited about what we’re developing on the electrical side. For instance, the Hybrid Super Capacitor (HSC) that we co-developed with a company in Korea offers a safer alternative to lithium-ion batteries, which can overheat due to chemical reactions and pose a fire risk.

The HSC operates differently. Since there’s no chemical reaction, it doesn’t generate heat or risk combustion. Additionally, it doesn’t require a temperature-controlled environment, so it can be used in a wide range of conditions. We’ve already developed it to replace traditional UPS batteries, and now we’re going further. This device will support power shaving, which reduces energy demand spikes. Unlike batteries that take hours to recharge, the HSC recharges in minutes—sometimes even seconds—allowing it to handle sudden spikes effectively and make our systems more resilient.

Bernard Leong: That’s impressive, managing both backup and power shaving in one device.

Jay Park: Exactly, two functions in one. We’re not far from making it widely available, and I intend to share this technology openly. Too often, industries like nuclear or aviation tend to keep these kinds of innovations to themselves. But in our field, sharing advancements that improve energy efficiency and reduce environmental impact benefits everyone. It’s not just a choice—it’s a duty to make a better world for future generations.

Bernard Leong: So, what's one question you wish more people would ask you about data centre engineering?

Jay Park: I’d say it’s about being bold—taking calculated risks and not waiting for the perfect moment because it will never come. I encourage people to step out of their comfort zones and try new things. With a thoughtful approach, those bold moves are often what drive progress.

Bernard Leong: Great point. For my closing question: what does ‘great’ look like for Digital Edge as it builds and manages data centres?

Jay Park: As I mentioned earlier, we aspire to be known as a data centre technology company, leading the industry forward. We’ll keep developing new products, refining systems, and improving metrics like PUE. We’re committed to openness—sharing what we learn to help everyone grow together. For us, that’s both our mission and our responsibility.

Bernard Leong: I wish you all the best with that vision. I've learned so much from this conversation about the intricacies of data centre engineering. To wrap up, could you share any recent recommendations that have inspired you?

Jay Park: Actually, on a recent flight, I watched a Discovery Channel program about next-generation power sources. Scientists are working on a project where satellites equipped with solar panels are positioned outside Earth’s orbit, capturing sunlight 24/7 and transmitting this clean energy back to Earth via electromagnetic waves. It doesn’t matter if it’s cloudy or nighttime—it’s an uninterrupted energy source. When I heard about this, I thought it was the coolest concept and hope I get to see it realized in my lifetime.

Bernard Leong: With the advances in technology, we may well see it. How can my audience connect with you?

Jay Park: I often speak at conventions and conferences, so you can find me there, or reach out through our company’s contact page. If anyone has something interesting to share, I’d love to learn more.

Bernard Leong: Thank you, Jay. It’s been a pleasure having you on Analyse Asia. I’ve gained so much from this conversation, and I’m sure our audience has too. I look forward to chatting with you again.

Jay Park: Thank you for having me. I truly appreciate it.

Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. The intro and end music is "Energetic Sports Drive", and the episode is mixed and edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.
