
    [PRNewswire] Huawei Launches Its AI Data Platform

    Yonhap News

    Xie Liming, the President of the Flash Storage Domain of the Huawei Data Storage Product Line, launches the AI Data Platform



    -- Aims to Power Faster AI Adoption for Enterprises

    BARCELONA, Spain March 5, 2026 /PRNewswire=YONHAP/ -- At the Huawei AI DC Innovation Forum at MWC Barcelona 2026, Huawei unveiled its AI Data Platform, designed to address the key challenges in adopting AI agents and strengthen the data foundation for enterprise digital and intelligent transformation.

    AI agents now lie at the heart of transformation. Yet, despite having massive amounts of data, enterprises still struggle to deploy AI agents at scale due to multiple challenges, including delayed knowledge acquisition and low retrieval accuracy, inefficient inference in long-sequence and multi-turn interaction scenarios, and the lack of task memory and experience accumulation. These gaps keep most AI agents confined to the demonstration stage, far from being ready for production-level enterprise applications.

    In direct response to these shared challenges, Xie Liming, the President of the Flash Storage Domain of the Huawei Data Storage Product Line, introduced the AI Data Platform. It integrates the knowledge base, KV cache, and memory bank, and is coordinated by UCM. This platform enables enterprise AI agents to move beyond demonstrations and become real production tools.

    - Knowledge generation and retrieval with real-time, high-accuracy multimodal knowledge retrieval for agents

    This technology uses knowledge bases to continuously detect source data changes and convert raw data into knowledge in near real-time. It converts multimodal data into high-accuracy knowledge through multimodal lossless parsing and token-level encoding, with a retrieval accuracy of over 95%.
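Huawei has not published implementation details for its knowledge base, but the general pattern described above -- detect source-data changes, re-index only what changed, then retrieve against the resulting index -- can be illustrated with a minimal, purely hypothetical sketch (content hashing for change detection and simple keyword-overlap scoring stand in for the platform's multimodal parsing and token-level encoding):

```python
import hashlib

class KnowledgeBase:
    """Toy knowledge base: detects source-document changes via content
    hashing, re-indexes only changed documents, and answers keyword
    queries. Illustrative only; not Huawei's implementation."""

    def __init__(self):
        self.hashes = {}   # doc_id -> content hash of last ingested version
        self.index = {}    # doc_id -> set of lowercase tokens

    def ingest(self, doc_id, text):
        digest = hashlib.sha256(text.encode()).hexdigest()
        if self.hashes.get(doc_id) == digest:
            return False                      # unchanged: skip re-indexing
        self.hashes[doc_id] = digest
        self.index[doc_id] = set(text.lower().split())
        return True                           # (re)indexed

    def retrieve(self, query, top_k=1):
        terms = set(query.lower().split())
        scored = sorted(self.index.items(),
                        key=lambda kv: len(terms & kv[1]), reverse=True)
        return [doc_id for doc_id, _ in scored[:top_k]]

kb = KnowledgeBase()
kb.ingest("spec", "flash storage inference acceleration")
kb.ingest("faq", "memory bank recall for agents")
print(kb.ingest("spec", "flash storage inference acceleration"))  # False: unchanged
print(kb.retrieve("how does memory recall work"))                 # ['faq']
```

A production system would replace the keyword sets with multimodal embeddings and a vector index, but the change-detection loop works the same way: only documents whose content hash changes are re-parsed.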

    - KV cache for inference acceleration, using historical memory data to improve the inference efficiency of agents

    The intelligent tiering and management of the KV cache greatly reduce repeated computation during inference, lowering inference latency, improving throughput and user experience, and providing strong performance support for long-sequence and complex agent inference.
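The release does not describe how the KV cache tiering works internally, but the core idea -- keep recently used key/value pairs in a small fast tier, demote evicted entries to a larger slow tier instead of dropping them, and recompute only on a full miss -- can be sketched as follows (all names and the two-tier layout are illustrative assumptions, not Huawei's design):

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: a small fast tier (think HBM) backed by a
    larger slow tier (think flash). Hits in either tier avoid
    recomputation; fast-tier evictions are demoted, not discarded.
    Illustrative sketch only."""

    def __init__(self, fast_capacity=2):
        self.fast = OrderedDict()   # LRU order: most recently used last
        self.slow = {}
        self.fast_capacity = fast_capacity
        self.computed = 0           # counts expensive recomputations

    def get(self, prefix):
        if prefix in self.fast:
            self.fast.move_to_end(prefix)        # refresh LRU position
            return self.fast[prefix]
        if prefix in self.slow:                  # slow-tier hit: promote
            value = self.slow.pop(prefix)
        else:                                    # full miss: "compute" KV pairs
            self.computed += 1
            value = f"kv({prefix})"
        self.fast[prefix] = value
        if len(self.fast) > self.fast_capacity:  # demote least recently used
            old_key, old_val = self.fast.popitem(last=False)
            self.slow[old_key] = old_val
        return value

cache = TieredKVCache(fast_capacity=2)
cache.get("turn1")            # computed
cache.get("turn1 turn2")      # computed
cache.get("turn1 turn2 t3")   # computed; "turn1" demoted to slow tier
cache.get("turn1")            # slow-tier hit: promoted, no recomputation
print(cache.computed)         # 3
```

In multi-turn agent sessions the growing conversation prefix would otherwise be re-encoded on every turn; retaining its KV pairs across tiers is what turns that repeated computation into cache hits.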

    - Memory extraction and recall with personalized and continually summarized memory for agents

    This technology uses memory banks to accumulate working memory and experiential memory during AI agent interaction. It supports memory backtracking and multi-agent collaborative learning to continuously optimize inference accuracy and efficiency, making models smarter with use.
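How the memory bank is structured is not disclosed, but the split the paragraph describes -- bounded working memory for the current interaction, plus experiential memory that summarizes finished tasks for later recall -- can be illustrated with a minimal sketch (class and method names are hypothetical):

```python
from collections import deque

class MemoryBank:
    """Toy memory bank: working memory holds the current task's recent
    steps; experiential memory accumulates reusable task -> outcome
    records that later runs can recall. Illustrative sketch only."""

    def __init__(self, working_size=3):
        self.working = deque(maxlen=working_size)   # short-term, bounded
        self.experience = {}                        # long-term, keyed by task

    def record_step(self, step):
        self.working.append(step)

    def finish_task(self, task, outcome):
        # Summarize the interaction into a reusable experience entry,
        # then clear working memory for the next task.
        self.experience[task] = {"steps": list(self.working),
                                 "outcome": outcome}
        self.working.clear()

    def recall(self, task):
        return self.experience.get(task)

bank = MemoryBank()
bank.record_step("parse request")
bank.record_step("query knowledge base")
bank.finish_task("report-generation", "success")
print(bank.recall("report-generation")["outcome"])  # success
```

Recalling a prior task's steps and outcome before starting a similar one is what lets an agent reuse experience instead of rediscovering it, which is the "smarter with use" behavior the paragraph claims.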

    Looking ahead, Huawei will strengthen its investment in AI data infrastructure, empower industry transformation through ongoing innovation, and work with global customers and partners to drive broader AI adoption across more fields, unlocking the full potential of data.

    Source: Huawei

    [※ Editor's note = This PRESS RELEASE was provided by the news provider, and Yonhap has not edited the content in any way, nor does it reflect the editorial direction of Yonhap.]

    (END)