
Google develops next-generation AI chip with enhanced performance


Google’s 7th-generation AI accelerator chip, ‘Ironwood’. (Google)

Google LLC unveiled a new artificial intelligence (AI) model and AI semiconductor on Wednesday, a move aimed at outpacing competitors and reducing its reliance on Nvidia Corp. by strengthening its own chips.

Google Cloud, the company’s cloud computing division, held its annual Next 2025 conference in Las Vegas that day.

At the event, the company introduced Gemini 2.5 Flash, a lighter, lower-cost version of its latest large language model (LLM), Gemini 2.5, which was unveiled in March 2025.

According to Google, Gemini 2.5 Flash automatically adjusts processing time based on the complexity of the question, enabling quicker responses for simpler queries. This allows for fast service at a lower cost.
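This adjustable processing time is exposed to developers as a "thinking budget." As a rough illustration, here is a minimal sketch using Google's google-genai Python SDK, assuming an API key is set in the environment; the prompt and budget value are illustrative.

```python
# Minimal sketch: calling Gemini 2.5 Flash with an explicit "thinking budget"
# via the google-genai Python SDK. The parameter names follow the public SDK;
# the specific prompt and budget value are illustrative assumptions.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the capital of France?",
    config=types.GenerateContentConfig(
        # A budget of 0 tells the model to skip extended "thinking" on
        # easy queries, trading reasoning depth for speed and cost.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```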

Google described Gemini 2.5 Flash as a balanced model suited for large-scale scenarios like customer service or real-time information processing and ideal for virtual assistants where efficiency at scale is critical.

Google also unveiled Ironwood, its next-generation AI accelerator chip and seventh-generation Tensor Processing Unit (TPU).

Ironwood is optimized for inference tasks and designed to power LLM services for a large customer base. It effectively supports recent LLM features such as Mixture of Experts (MoE) and advanced reasoning capabilities.
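For context, a Mixture of Experts model routes each token through only a few "expert" sub-networks instead of the entire model, which shifts the hardware burden toward fast routing and memory access. The toy NumPy sketch below illustrates top-k routing in principle; all shapes and names are invented and bear no relation to Gemini's actual architecture.

```python
# Toy top-k Mixture-of-Experts routing in NumPy. Purely illustrative:
# real MoE layers (and Gemini's internals) are far more involved.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# One "expert" = one small feed-forward weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router_w = rng.normal(size=(d_model, n_experts))

def moe_forward(x):
    """Route a single token vector to its top-k experts."""
    logits = x @ router_w                 # score each expert
    top = np.argsort(logits)[-top_k:]     # pick the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the chosen experts
    # Only the chosen experts run, so compute scales with k, not n_experts.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (8,)
```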

According to Google, Ironwood delivers more than 10 times the performance of the TPU v5p, which was released in 2023.

It is equipped with 192GB of high-bandwidth memory (HBM) per chip, allowing it to hold larger models and datasets close to the processor, reducing the need for frequent data transfers and enhancing performance.
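A back-of-the-envelope calculation shows why that capacity matters: at 16-bit precision, a model's weights alone occupy roughly two bytes per parameter. The figures below are illustrative assumptions, not Google's numbers.

```python
# Back-of-the-envelope: how much HBM a model's weights occupy at inference.
# Model sizes here are illustrative assumptions, not Google's figures.
def weight_footprint_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Weights only (bf16/fp16 = 2 bytes); ignores KV cache and activations."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

hbm_gb = 192  # Ironwood's per-chip HBM capacity
for size in (8, 70, 120):
    need = weight_footprint_gb(size)
    fits = "fits on one chip" if need <= hbm_gb else "needs sharding across chips"
    print(f"{size}B params @ bf16 -> {need:.0f} GB of weights ({fits})")
```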

Samsung is reportedly supplying the HBM to Google via Broadcom, Google’s chip design partner.

With supercomputers based on Ironwood, Gemini 2.5 Flash is expected to deliver inference services at competitive costs.

As Nvidia, which controls more than 80 percent of the AI accelerator market, shifts its focus from training to inference, Google appears to be strategically developing inference-optimized chips to reduce its dependence on Nvidia, according to sources.

Google also introduced Agent2Agent, a new communication protocol for interactions between AI agents.
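Agent2Agent exchanges structured JSON messages between a client agent and a remote agent. The sketch below is a simplified, hypothetical illustration of such a request; the method and field names are assumptions, not the normative A2A schema.

```python
# Hypothetical, simplified sketch of an Agent2Agent-style exchange.
# A2A is JSON-RPC-based; the method and field names below are illustrative
# assumptions, not the normative protocol schema.
import json
import uuid

def build_task_request(text: str) -> str:
    """Client agent asks a remote agent to perform a task."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",  # assumed method name
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            }
        },
    }, indent=2)

print(build_task_request("Summarize today's support tickets."))
```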

It also announced support for the Model Context Protocol (MCP), an increasingly popular open standard for connecting AI models to external tools and data sources.
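As a concrete example of what MCP support means in practice, a tool server can be defined in a few lines with the official mcp Python SDK; the add tool below is a made-up example.

```python
# Minimal MCP tool server using the official Python SDK's FastMCP helper.
# The "add" tool is a made-up example; the FastMCP API itself is real.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```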

“We shipped more than 3,000 product advances across Google Cloud and Workspace in 2024,” Google Cloud CEO Thomas Kurian said.

He added that AI adoption is accelerating and more than 4 million developers now use Gemini.