The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for KubeCon + CloudNativeCon China 2025 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
Please note: This schedule is automatically displayed in Hong Kong Standard Time (UTC+8:00). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date." The schedule is subject to change and session seating is available on a first-come, first-served basis.
In today's tech landscape, AI drives industry transformation, but enterprises face challenges in AI adoption: diverse hardware, complex workflows, and data privacy. OPEA, an open-source enterprise AI platform built on modular microservices, offers unified solutions for rapid deployment. Through a DeepSeek inference appliance case study, see how OPEA integrates with existing IT infrastructure, optimizes performance, and enhances reliability. Discover the new "Powered by OPEA" certification for confident AI deployment.
As AI tackles increasingly complex tasks, traditional LLMs show limitations in action decision-making and multi-step reasoning, making autonomous planning and dynamic correction key challenges. ZTE's Co-Sight agent system addresses this with a multi-agent (Plan-Actor) collaborative architecture. Its dual-level design separates planning (task decomposition, path generation) from execution, significantly reducing LLM search space. Dynamic task adjustment is achieved via DAG parallel thinking, dynamic context, guardrails, and hierarchical reflection. Co-Sight has demonstrated excellent performance on the GAIA benchmark, particularly showcasing superior stability in complex Level 2 multi-step tasks.
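The "DAG parallel thinking" idea in the abstract above can be illustrated with a minimal sketch: model the plan as a dependency graph and run every step whose prerequisites are complete. This is not Co-Sight's actual code; the plan, task names, and `run_task` stand-in are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Hypothetical plan: task -> set of prerequisite tasks (a DAG).
PLAN = {
    "gather": set(),
    "summarize": {"gather"},
    "verify": {"gather"},
    "answer": {"summarize", "verify"},
}

def run_task(name):
    # Stand-in for an LLM actor executing one plan step.
    return f"{name}: done"

def execute_plan(plan):
    """Run each task as soon as all its dependencies finish (DAG parallelism)."""
    done, results = set(), {}
    pending = {}  # future -> task name
    with ThreadPoolExecutor() as pool:
        while len(done) < len(plan):
            # Schedule every task whose prerequisites are all complete.
            for task, deps in plan.items():
                if task not in done and task not in pending.values() and deps <= done:
                    pending[pool.submit(run_task, task)] = task
            finished, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in finished:
                task = pending.pop(fut)
                results[task] = fut.result()
                done.add(task)
    return results

print(execute_plan(PLAN)["answer"])  # "answer: done"
```

Here "summarize" and "verify" run concurrently once "gather" completes, which is the search-space reduction a separated planner/executor design aims for.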
With the development of AI technology, the computing demands of large-model training have accelerated the deployment of AI infrastructure. Data centers often hit a "resource wall" between AI acceleration hardware of different generations and manufacturers, which causes software and hardware stack incompatibilities. Maximizing resource utilization is therefore a major challenge for AI infrastructure operators. This talk focuses on technical solutions for collaborative training across chips of different architectures, sharing practices for solving key problems such as heterogeneous training task splitting, heterogeneous training performance prediction, and heterogeneous hybrid communication. The project has been open sourced and will mature further through the community.
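One way to picture heterogeneous training task splitting is to divide the global batch across chips in proportion to each chip's predicted throughput, so faster and slower accelerators finish a step at roughly the same time. This is a minimal illustration of that idea, not the open-sourced project's actual algorithm; the chip names and throughput numbers are made up.

```python
def split_global_batch(global_batch, throughputs):
    """Split a global batch across heterogeneous chips in proportion to
    each chip's predicted throughput (samples/sec), so all chips finish
    a training step at about the same time."""
    total = sum(throughputs.values())
    shares = {chip: int(global_batch * tps / total)
              for chip, tps in throughputs.items()}
    # Give any rounding remainder to the fastest chip.
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += global_batch - sum(shares.values())
    return shares

# Hypothetical mixed cluster: predicted samples/sec per chip type.
print(split_global_batch(1024, {"gen_a": 300, "gen_b": 500, "gen_c": 200}))
```

In practice the throughput inputs would come from a performance-prediction model rather than fixed constants, which is why the abstract pairs task splitting with performance prediction.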
In the AI era, enterprises need to collect more data to build high-quality AI applications, spanning structured data (databases, data warehouses, etc.) and unstructured data (data lakes, document libraries, real-time data, etc.). Data integrity and compliance play a key role in building AI applications, and this is where metadata proves its value. Every enterprise now needs to provide AI users with a unified data view so they can better discover and use multi-source heterogeneous data (covering data discovery, data semantics, data lineage, data permissions, etc.), and to manage the data life cycle in line with enterprise governance requirements to avoid resource waste and security issues.
Apache Gravitino provides a unified API for accessing multiple data sources and storage systems, supports multiple data engines and machine learning frameworks as consumers, and implements unified naming, permissions, lineage, auditing, and other functions on top of unified metadata, thereby greatly simplifying data operations and breaking down data silos. It has already been adopted by companies such as Xiaomi, Bilibili, Pinterest, and Uber, with good results. This session will introduce the background, architecture, core functions, and use cases of Gravitino.
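The "unified metadata" idea can be sketched as a single namespace that federates many heterogeneous catalogs and resolves one logical name regardless of the backing engine. This is an in-memory illustration of the concept only, not the actual Apache Gravitino API; the class names, `provider` values, and table are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Catalog:
    name: str                # e.g. "hive_prod"
    provider: str            # underlying source: "hive", "iceberg", "kafka", ...
    tables: dict = field(default_factory=dict)

class Metalake:
    """One namespace federating many heterogeneous catalogs (illustrative)."""
    def __init__(self, name):
        self.name, self.catalogs = name, {}

    def create_catalog(self, name, provider):
        self.catalogs[name] = Catalog(name, provider)
        return self.catalogs[name]

    def register_table(self, catalog, table, columns):
        self.catalogs[catalog].tables[table] = columns

    def resolve(self, path):
        # Unified naming: "catalog.table" works the same for every backend.
        catalog, table = path.split(".")
        return self.catalogs[catalog].tables[table]

lake = Metalake("demo")
lake.create_catalog("hive_prod", provider="hive")
lake.register_table("hive_prod", "events", ["user_id", "ts", "action"])
print(lake.resolve("hive_prod.events"))  # ['user_id', 'ts', 'action']
```

A real system layers permissions, lineage, and auditing onto the same resolution path, which is what makes a single metadata layer able to break down data silos.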