SK Hynix Begins Full-Scale Mass Production of Custom SOCAMM2 Optimized for Nvidia Vera Rubin
10-Nanometer-Class 6th Generation DRAM-Based Module
Twice the Speed and 75% Greater Efficiency Compared to Previous Products
Expected to Resolve AI Computation Bottlenecks
SK Hynix has begun mass production of a new memory product optimized for Nvidia's next-generation artificial intelligence (AI) platform, 'Vera Rubin.' The key feature of this new server module is its ability to simultaneously increase data processing speed and power efficiency, thereby resolving the memory bottleneck issues that arise during AI computation.
SK Hynix's SOCAMM2 192-gigabyte (GB) product, based on 10-nanometer-class 6th-generation (1c) LPDDR5X low-power DRAM. [Photo: SK Hynix]
SK Hynix announced on April 20 that it will begin full-scale mass production of the SOCAMM2 192-gigabyte (GB) product, which is based on its 10-nanometer-class 6th-generation (1c) LPDDR5X low-power DRAM. SOCAMM2 is a compression-attached memory module that adapts low-power DRAM (LPDDR), originally used mainly in mobile devices such as smartphones, for server environments.
SK Hynix cites SOCAMM2's performance advantage over existing server DRAM modules (RDIMMs) as its greatest strength. Built on the 1c process, the SOCAMM2 192GB product provides more than twice the bandwidth of conventional RDIMMs while improving energy efficiency by more than 75 percent. Its thin profile increases server space utilization, and the compression-attached connector strengthens signal integrity and allows modules to be replaced easily.
As a result, during training and inference of AI models with hundreds of billions of parameters, power consumption can be cut drastically while data processing speed doubles. The company explained that SOCAMM2 in particular addresses the 'memory bottleneck' that occurs when the memory cannot supply data as fast as the GPU consumes it during AI computation, thereby maximizing overall system efficiency.
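The bandwidth claim above can be made concrete with a back-of-the-envelope sketch: the time a GPU spends waiting on memory is roughly the data volume divided by the module's bandwidth, so doubling bandwidth halves the wait. The bandwidth figures below are hypothetical placeholders for illustration, not SK Hynix specifications.

```python
# Illustrative only: how module bandwidth bounds the rate at which
# data (e.g. model weights) can be streamed to the GPU.
# Bandwidth numbers are hypothetical, not vendor specifications.

def stream_time_s(data_gb: float, bandwidth_gbs: float) -> float:
    """Seconds needed to move `data_gb` gigabytes at `bandwidth_gbs` GB/s."""
    return data_gb / bandwidth_gbs

weights_gb = 192.0             # one full module's stated capacity
baseline_bw = 100.0            # hypothetical RDIMM-class bandwidth, GB/s
doubled_bw = 2 * baseline_bw   # the "more than twice the bandwidth" claim

t_base = stream_time_s(weights_gb, baseline_bw)   # 1.92 s per full sweep
t_new = stream_time_s(weights_gb, doubled_bw)     # 0.96 s per full sweep

print(f"baseline: {t_base:.2f} s, doubled bandwidth: {t_new:.2f} s")
```

Under this simple model, any computation whose speed is limited by data supply rather than arithmetic finishes in half the time when bandwidth doubles, which is the sense in which faster memory "resolves" the bottleneck.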
The SOCAMM2 product is designed and optimized specifically for Nvidia's next-generation AI platform, 'Vera Rubin.' SK Hynix has rapidly stabilized its mass-production system to meet the needs of global cloud service provider (CSP) customers, aiming to secure a leading position in the Nvidia-centered next-generation AI server infrastructure market. As the AI market shifts quickly from model training toward inference, SOCAMM2 is drawing attention as a next-generation memory solution capable of running large language models (LLMs) continuously at low power.
Juseon Kim, President and CMO of AI Infrastructure at SK Hynix, stated, "With the supply of the SOCAMM2 192GB product, we have set a new standard for AI memory performance," adding, "We will establish ourselves as the most trusted AI memory solution provider through close collaboration with global AI customers."
Meanwhile, the SOCAMM2 market is growing rapidly. According to market research firms such as Omdia, the server module market including SOCAMM2 is expected to grow at an annual rate of over 60 percent, surpassing 2 billion dollars (approximately 3 trillion won) by 2027.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.