"Break Nvidia's Solo Run": AI Chip Latecomers Secure Funding for the Chase
In the artificial intelligence (AI) semiconductor market, challengers are stepping up their efforts to halt Nvidia's runaway dominance. Small latecomers are pushing into the market by securing investment and launching cheaper, more specialized products, aiming to open cracks in Nvidia's near-monopoly.
According to major foreign media on the 27th (local time), the small latecomers now challenging the AI semiconductor market, where Nvidia holds more than an 80% share, include Cerebras, d-Matrix, and Groq. These companies are betting on an exponential rise in demand for AI inference as generative AI applications such as ChatGPT go mainstream. At present, Nvidia's graphics processing units (GPUs) based on the 'Hopper' architecture are the popular choice, optimized as they are for AI model training.
Latecomers such as Cerebras are concentrating on semiconductors that are cheaper than Nvidia's products and specialized for running, rather than training, AI models. The day before, Cerebras unveiled its new 'Cerebras Inference' platform, built on the dinner-plate-sized 'CS-3' chip. Citing analysis from Artificial Analysis, the company claimed its solution runs AI inference 20 times faster than Nvidia's Hopper chips at a significantly lower price.
Andrew Feldman, CEO of Cerebras, said, "The way to beat an 800-pound gorilla is to bring a much better product to market," adding, "In my experience, the better product usually wins. We have won meaningful customers away from Nvidia." Unlike Nvidia's AI chips, which depend on high-bandwidth memory (HBM), the CS-3 takes an alternative approach with memory embedded directly in the chip's wafer. Feldman argued that memory bandwidth is the fundamental constraint on AI inference speed, and that combining everything onto a single chip can deliver results dozens of times faster.
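Feldman's memory-bandwidth point can be illustrated with a standard back-of-envelope calculation: at batch size 1, generating each token requires streaming roughly all model weights from memory, so memory bandwidth caps token throughput. The figures below are illustrative assumptions for a generic HBM-class GPU and a 70-billion-parameter model in 16-bit precision, not Cerebras or Nvidia specifications:

```python
# Rough roofline estimate for single-stream LLM decoding, which is
# memory-bandwidth bound: every generated token streams (approximately)
# all model weights from memory once.

def max_tokens_per_sec(bandwidth_bytes_per_sec: float, model_bytes: float) -> float:
    """Upper bound on decode speed: memory bandwidth / bytes of weights."""
    return bandwidth_bytes_per_sec / model_bytes

# Assumed figures (illustrative, not vendor specs):
hbm_bandwidth = 3.35e12          # ~3.35 TB/s of HBM bandwidth
weight_bytes = 70e9 * 2          # 70B parameters at 2 bytes each (~140 GB)

print(f"{max_tokens_per_sec(hbm_bandwidth, weight_bytes):.0f} tokens/s upper bound")
```

Under these assumptions the bound comes out to a few dozen tokens per second per stream, which is why architectures that keep weights in on-chip memory, as Cerebras and Groq do, can claim much higher single-stream inference speeds.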
Founded in 2019, d-Matrix has also thrown down the gauntlet in the Nvidia-dominated AI chip market. After raising $110 million in a Series B round led by Singapore's sovereign wealth fund Temasek, the company is seeking additional funding; according to founder Sid Sheth, the goal is to raise more than $200 million by the end of this year or early next.
d-Matrix plans to launch its own chip platform, 'Corsair,' by the end of this year. Founder Sid Sheth explained that the company pairs its products with open software such as 'Triton,' which competes with Nvidia's 'Cuda.' Cuda is the software platform most AI developers use to build AI applications and has effectively become the industry standard for programming GPU hardware. "Developers do not like to be tied to specific tools," Sheth said, adding that "people are beginning to realize that Nvidia controls AI training through Cuda," and pointing to the growing support for open software like Triton.
Groq, a U.S. semiconductor startup founded by Jonathan Ross, who led the early development of Google's AI-dedicated Tensor Processing Unit (TPU) chip, recently raised $640 million at a $2.8 billion valuation from investors including BlackRock. Peter Hébert, co-founder of Lux Capital, said, "Investors' desire to find the next Nvidia remains unsatisfied," adding, "This is not just about chasing the latest trends. Funding is going to several chip startups that have been working hard in this field for about 10 years."
Groq is also known in Korea for entrusting its advanced AI chip manufacturing to Samsung Electronics' foundry and for attracting large-scale investment from BlackRock, Cisco, and Samsung. Groq's Language Processing Unit (LPU) is regarded as specialized for inference rather than training, using on-chip embedded memory instead of the HBM found in GPUs.
Meanwhile, Nvidia, which dominates the AI semiconductor market, is scheduled to announce quarterly earnings after the market close on the 28th. According to market research firm LSEG, Nvidia's fiscal Q2 (May to July) revenue is estimated to have risen 112% year-over-year to $28.7 billion, exceeding the company's own guidance of $28 billion. Some forecasts even put revenue in the $30 billion range.
The market is closely watching not only Nvidia's Q2 results but also updates on Blackwell and the company's Q3 guidance (earnings outlook), which analysts believe will offer insight into future AI demand. CNBC reported that "optimistic guidance will signal that Nvidia's customers intend to continue investing in AI development," while "disappointing forecasts could reignite concerns about an AI bubble."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.