Google's AI-Focused Semiconductor TPU
Reaping the Rewards of Over a Decade of Investment
Growing Dependence on Korean HBM, Like GPUs
Google's Tensor Processing Unit (TPU) is emerging as a strong competitor to Nvidia's Graphics Processing Unit (GPU). Originally, the TPU was an artificial intelligence (AI) training and inference chip used exclusively within Google. It has now shed its image as a mere complement to the GPU and has become a true rival. While the value of the TPU has only recently come into the spotlight, Google's entry into the AI computer chip market dates back a decade.
Google TPU Emerges as a Rival to Nvidia GPU
Anthropic, the U.S. startup known for its AI chatbot "Claude," signed a computing infrastructure agreement with Google on November 23 (local time). The company agreed to lease 1 gigawatt (GW) of TPU data-center capacity by next year. Subsequently, reports emerged that Meta is also in talks with Google over a TPU purchase agreement. According to the U.S. IT media outlet The Information, Google is currently targeting a market share in the 10% range for AI accelerators. Until now, Nvidia GPUs have commanded roughly 90% of the AI accelerator market, but a meaningful competitor has appeared for the first time.
Although Google has been a pioneer in the AI industry, its business had been confined to services and software. The secret behind Google overtaking formidable hardware competitors like AMD lies in the TPU. The TPU, Google's proprietary AI accelerator, reached its seventh generation early this year. Thanks to the TPU, Google has been able to reduce its dependence on Nvidia and expand its AI data center infrastructure more efficiently.
Google, a Leading AI Semiconductor Company, Reaps the Rewards of Over a Decade of Investment
In fact, Google has long been a leading player in AI hardware as well. While Nvidia began developing general-purpose GPU (GPGPU) computing for AI in 2006, Google is known to have formed a semiconductor team for machine learning that same year. That team released TPUv1 in 2015 as the result of its research, the year after Google acquired the neural network AI startup DeepMind.
In 2016, when DeepMind's Go-playing AI "AlphaGo" faced Lee Sedol, Google ran it on a combination of Nvidia GPUs and TPUs. Afterward, however, Google steadily increased the share of TPUs. TPUv1 was built on a 28-nanometer (nm) process, commonly classed as "legacy" semiconductors, and its role was more complementary to GPUs than a replacement for them. Nevertheless, Google kept investing in its own silicon. As a result, the TPU has been refined across seven generations as of early this year, evolving into a state-of-the-art chip on par with GPUs. Together with "Axion," Google's ARM-based central processing unit (CPU), it now completes Google's silicon ecosystem.
Systolic Array Semiconductor Specialized for AI Computation
The TPU is an application-specific integrated circuit (ASIC), a chip custom-designed solely to accelerate deep learning. While a CPU handles general computing tasks and a GPU accelerates graphics and other parallel workloads, an ASIC is specialized for one specific job. The TPU is equipped with thousands of multiply-accumulate (MAC) units optimized for the matrix multiplications at the heart of AI workloads. Google calls this design the "systolic array architecture."
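The idea behind a systolic array can be sketched in a few lines. The toy code below (an illustration, not Google's actual hardware design) models the grid of MAC cells: each output element has its own accumulator, and at every step one slice of the operands "flows" through the grid while each cell performs a single multiply-accumulate.

```python
# Toy model of a systolic-array matrix multiply: C = A x B.
# Each entry of `acc` stands in for one MAC cell's accumulator;
# at step t, the t-th column of A and t-th row of B stream past
# the cells, and each cell does acc += a * b.

def systolic_matmul(A, B):
    """Multiply matrices via an explicit grid of MAC accumulators."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    acc = [[0] * m for _ in range(n)]  # one accumulator per MAC cell
    for t in range(k):                  # one operand slice per step
        for i in range(n):
            for j in range(m):
                acc[i][j] += A[i][t] * B[t][j]  # the MAC operation
    return acc

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

In real hardware the payoff is that operands are passed cell-to-cell rather than refetched from memory for every multiplication, which is why the design excels at the dense matrix math that dominates AI training and inference.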
As a result, the TPU lacks the versatility of CPUs and GPUs, but in AI processing it is unrivaled. In fact, the TPU has in the past been the only chip to match Nvidia GPUs in both raw performance and performance-per-watt on MLCommons' MLPerf benchmarks, the industry-standard tests for AI accelerators.
Another weapon in the TPU arsenal is an AI called "AlphaChip." Developed by DeepMind and unveiled in September last year, it handles chip floorplanning, the foundational placement of circuit blocks on the die. By actively applying AI to chip design, Google can cut both the cost and the time required to develop new semiconductors.
Like GPUs, TPUs Are Increasingly Dependent on Korean HBM
The rise of the TPU is likely to present another opportunity for domestic semiconductor companies such as Samsung Electronics and SK Hynix. This is because, like GPUs, TPUs are increasing their reliance on high bandwidth memory (HBM) as their performance improves.
The current 7th-generation TPU, "Ironwood," carries 192 gigabytes (GB) of HBM. Each chip uses eight stacks of SK Hynix's 5th-generation HBM, "HBM3E 8-High" (24GB per stack). This is a sixfold expansion in HBM capacity over the previous 6th-generation TPU.
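The figures above check out arithmetically, as the quick calculation below shows (the 6th-generation capacity is inferred from the sixfold ratio, not stated directly in the article):

```python
# Sanity check of the HBM capacities cited above (units: gigabytes).
stacks_per_chip = 8       # eight HBM3E stacks per Ironwood chip
gb_per_stack = 24         # capacity of one HBM3E 8-High stack
ironwood_hbm = stacks_per_chip * gb_per_stack
print(ironwood_hbm)       # 192 GB, matching the stated total

# A sixfold expansion implies the 6th-generation TPU carried 192 / 6 GB.
prev_gen_hbm = ironwood_hbm // 6
print(prev_gen_hbm)       # 32 GB
```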
According to BofA Merrill Lynch Global Research, SK Hynix is already the exclusive supplier of HBM3E to Google and is expected to supply next-generation HBM4 for Google's 8th-generation TPU, due to launch next year.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.