Google’s 7th-generation AI accelerator chip, ‘Ironwood’. (Google)
Google LLC unveiled a new artificial intelligence (AI) model and a new AI chip on Wednesday, a move aimed at outpacing competitors and reducing its reliance on Nvidia Corp. by bolstering its in-house silicon.
Google Cloud, the company’s cloud computing division, held its annual conference, Next 2025, in Las Vegas, the United States, that day.
At the event, the company introduced Gemini 2.5 Flash, a more accessible version of its latest large language model (LLM), Gemini 2.5, which was unveiled in March 2025.
According to Google, Gemini 2.5 Flash automatically adjusts processing time based on the complexity of the question, enabling quicker responses for simpler queries. This allows for fast service at a lower cost.
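For illustration, this behavior is exposed through Google’s google-genai Python SDK as a per-request “thinking budget.” The sketch below is a minimal example under that assumption; the exact option names and defaults should be checked against the current SDK documentation.

```python
# A minimal sketch of calling Gemini 2.5 Flash with the google-genai Python SDK.
# The thinking_budget control shown here reflects the SDK's public ThinkingConfig
# option; verify names and defaults against current documentation before use.
from google import genai
from google.genai import types

client = genai.Client()  # assumes an API key is set in the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is 2 + 2?",  # a simple query that needs little "thinking"
    config=types.GenerateContentConfig(
        # Cap the reasoning effort; 0 disables extended thinking entirely,
        # trading depth for speed and cost on easy questions.
        thinking_config=types.ThinkingConfig(thinking_budget=0)
    ),
)
print(response.text)
```

Raising the budget (or leaving it unset) lets the model spend more internal reasoning steps on harder questions, which is the trade-off the article describes.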
Google also unveiled its next-generation AI accelerator chip, Ironwood, which is its seventh-generation Tensor Processing Unit (TPU).
According to Google, Ironwood delivers over 10 times the performance of its predecessor, the TPU v5p, which was released in 2023.
It is equipped with 192GB of high-bandwidth memory (HBM) per chip, allowing it to handle larger models and datasets, reduce the need for frequent data transfers, and deliver higher performance.
Samsung is reportedly supplying the HBM to Google via Broadcom, Google’s chip design partner.
As Nvidia, which controls more than 80 percent of the AI accelerator market, shifts its focus from training to inference, Google appears to be strategically developing inference-optimized chips to reduce its dependence on the company, according to sources.
Google also introduced a new communication protocol called Agent2Agent (A2A) for interactions between AI agents.
It also announced support for the increasingly popular open-source Model Context Protocol (MCP).
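Both protocols ride on ordinary web plumbing such as JSON-RPC over HTTP. As a rough illustration of the idea, the sketch below shows one agent posting a task to another; the endpoint URL, method name, and field layout are hypothetical stand-ins for this example, not quotations from the published A2A specification.

```python
# A hypothetical sketch of one AI agent handing a task to another over an
# Agent2Agent-style JSON-RPC channel. The endpoint, method, and field names
# are illustrative assumptions, not the published A2A specification.
import json
import uuid

import requests  # assumes the peer agent exposes an HTTP endpoint

AGENT_URL = "https://agent.example.com/a2a"  # hypothetical peer agent endpoint

payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",  # illustrative method name
    "params": {
        "task": {
            "id": str(uuid.uuid4()),
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": "Summarize today's sales report."}],
            },
        }
    },
}

resp = requests.post(AGENT_URL, json=payload, timeout=30)
print(json.dumps(resp.json(), indent=2))
```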
The company added that AI adoption is accelerating and that more than 4 million developers now use Gemini.