by Park Joonyi
by Kwon Hyeonji
Published 22 Apr. 2026, 11:00 (KST)
Updated 22 Apr. 2026, 16:26 (KST)
Global big tech companies are making frequent visits to Korean power equipment firms. With the pace of building artificial intelligence (AI) data centers far outstripping the expansion of existing power grids, securing power has become a matter of survival for their businesses. Big tech firms are now resorting to every available means, even personally bringing power grid blueprints to Korean companies and pleading for supply guarantees.
According to industry sources, Amazon Web Services (AWS) recently signed a power infrastructure supply contract worth 170 billion won with LS ELECTRIC, after holding several meetings with domestic power companies since the end of last year. As annual data center investments in the United States have grown to 400 trillion won, competition among big tech companies to secure power is intensifying. AWS’s move is thus seen as a strategy to gain an edge in power efficiency. The race to reduce power loss and increase density through innovative design is heating up.
This trend is accelerating further. In particular, NVIDIA has begun demanding more radical design changes to overcome the limitations of conventional alternating current (AC) systems. As of April 22, industry insiders report that NVIDIA recently asked leading Korean power equipment manufacturers to design data center infrastructure based on 800V direct current (DC). This is a measure to minimize power conversion losses in the face of explosive power consumption from AI servers. It is reported that NVIDIA is currently engaged in behind-the-scenes discussions with Korean companies for specific data center collaborations.
Jensen Huang, CEO of NVIDIA, held a press conference on March 17 (local time) at the Hilton Signia Hotel in San Jose, California, USA. Photo by Reuters.
At present, most power facilities, including data centers, are built on alternating current systems. Industrial voltages also vary: in Asia, 380-400V is typical, while in the United States, systems range from 208V to 600V.
Looking at the flow of power, the structure is even more complex. Incoming AC power is first lowered in voltage via a transformer, then converted to DC and stored in batteries by an uninterruptible power supply. After that, it is converted back to AC before being delivered to servers, and once again converted to DC inside the server’s power supply unit.
The problem is that this repeated conversion between AC and DC at each stage results in a 2-3% power loss per step. Across the entire system, this cumulative inefficiency is significant. NVIDIA is seeking to overhaul the current structure by proposing DC-based power design from the supply stage as a way to reduce these losses.
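To put the figure in context, here is a back-of-the-envelope sketch of how those per-step losses compound. The 2-3% per-step loss and the four conversion stages are taken from the description above; the exact number of stages in a real facility varies by design.

```python
# Illustrative only: cumulative power loss across the conversion chain
# described above (grid AC -> transformer -> AC/DC into the UPS battery ->
# DC/AC back out -> AC/DC inside the server power supply unit).
def cumulative_loss(per_step_loss: float, steps: int) -> float:
    """Fraction of input power lost after `steps` conversions,
    each losing `per_step_loss` of the power entering it."""
    return 1 - (1 - per_step_loss) ** steps

# With four conversion stages at 2-3% loss each:
best_case = cumulative_loss(0.02, 4)   # roughly 7.8% of input power lost
worst_case = cumulative_loss(0.03, 4)  # roughly 11.5% lost

print(f"2% per step: {best_case:.1%} total loss")
print(f"3% per step: {worst_case:.1%} total loss")
```

Even at the low end, the chain wastes several percent of every watt delivered, which is the inefficiency a DC-from-the-supply-stage design aims to cut.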
The key challenge is compatibility with existing infrastructure. If the system shifts to an 800V DC standard, connection efficiency with high-voltage AC transmission lines, which handle long-distance power transport, will inevitably decline. Comprehensive redesigns will be required for substations, distribution facilities, and the internal power structure of data centers. The current distribution grid is fragmented with various voltage systems such as 7.6kV, 13.2kV, and 38kV, making unification difficult. For this reason, the power industry has so far maintained an AC-centric structure, accepting inefficiencies.
However, NVIDIA’s demands are rapidly shifting the landscape. As NVIDIA, a leader in the AI semiconductor market, sets new standards, the power industry is moving to align with them. The long-discussed transition to DC is now advancing into concrete investments and technology development. One industry official noted, "There had been discussions about switching to DC, but progress was slow. Now, with NVIDIA pointing the way, facility investments and technology development are beginning to move in tandem."
The power industry sees this trend as more than just a technical shift: it signals a transformation of the entire power infrastructure. As AI demand surges, competition to improve the power efficiency of data centers is intensifying, and DC-based high-voltage systems are expected to become the new standard. According to the International Energy Agency, global data center power consumption is projected to increase from 415 TWh in 2024 to 950 TWh in 2030 and 1,300 TWh in 2035. As electricity usage rapidly grows, there is increasing support for introducing DC-based systems to minimize losses.
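The IEA figures cited above imply a steep ramp: roughly a 2.3x rise by 2030 and a 3.1x rise by 2035. A quick sketch of the implied annual growth rates follows; the TWh figures come from the article, while the compound-growth arithmetic is our own illustration.

```python
# Implied compound annual growth rates from the IEA projections cited above:
# 415 TWh (2024) -> 950 TWh (2030) -> 1,300 TWh (2035).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

print(f"2024-2030: {cagr(415, 950, 6):.1%} per year")   # roughly 15%/yr
print(f"2030-2035: {cagr(950, 1300, 5):.1%} per year")  # roughly 6.5%/yr
```

The front-loaded growth through 2030 is what makes near-term efficiency gains, such as the DC-based designs discussed above, so commercially urgent.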
However, there are practical concerns that establishing a DC ecosystem will require aggressive technology development and investment by the power equipment industry. Bae Chaeyoon, Head of Core Technology Research at LS ELECTRIC, said, "The data center market holds immense potential for the power industry, but infrastructure restructuring is needed, so it’s still difficult for many companies to enter. From a global perspective, it’s not enough for just one company to do well; several firms need to participate for the market to grow."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.