10-11 June

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for KubeCon + CloudNativeCon China 2025 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

Please note: This schedule is automatically displayed in Hong Kong Standard Time (UTC+8:00). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date." The schedule is subject to change and session seating is available on a first-come, first-served basis.
Type: AI + ML
Tuesday, June 10
 

11:00 HKT

AI Model Distribution Challenges and Best Practices - Wenbo Qi & Xiaoya Xia, Ant Group; Eryu Guan, Aliyun; Wenpeng Li, Alibaba Cloud; Han Jiang, Kuaishou
Tuesday June 10, 2025 11:00 - 11:30 HKT
As the demand for scalable AI/ML grows, efficiently distributing AI models in cloud-native infrastructure has become a pivotal challenge for enterprises. The panel dives into the technical and operational strategies for deploying models at scale, from optimizing model storage and transfer to ensuring consistency across clusters and regions. Experts from different companies and CNCF projects will debate critical questions like: How can Kubernetes-native workflows automate and accelerate model distribution while minimizing latency and bandwidth costs? How can huge models, hundreds of gigabytes or even terabytes in size, be distributed efficiently? What challenges are posed by distributed inference and the prefill-decode architecture? How are models updated in the reinforcement-learning post-training paradigm? What role do standards like OCI artifacts or specialized registries play in streamlining versioned model delivery?
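The questions above revolve around content-addressed, versioned delivery of very large blobs. As a rough illustration only (not code from the session), here is a minimal Python sketch of splitting a model into digest-addressed chunks, the idea that lets registries and P2P distributors transfer only what changed between versions; the chunk size and helper names are invented:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks; real systems tune this


def chunk_digests(blob: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Split a model blob into fixed-size chunks and return their sha256 digests.

    Content addressing lets registries and peers deduplicate and verify
    pieces of a multi-GB model independently.
    """
    return [
        "sha256:" + hashlib.sha256(blob[i:i + chunk_size]).hexdigest()
        for i in range(0, len(blob), chunk_size)
    ]


def changed_chunks(old: list, new: list) -> list:
    """Indices of chunks that differ between two model versions.

    Only these need to be re-transferred on an update (e.g. after an
    RL post-training step touches a subset of weights).
    """
    diff = [i for i, (a, b) in enumerate(zip(new, old)) if a != b]
    return diff + list(range(len(old), len(new)))


weights_v1 = bytes(10 * 1024 * 1024)   # 10 MiB of zeros -> 3 chunks
weights_v2 = bytearray(weights_v1)
weights_v2[0] = 1                      # touch only the first chunk
delta = changed_chunks(chunk_digests(weights_v1),
                       chunk_digests(bytes(weights_v2)))
print(delta)  # only chunk 0 needs re-transfer
```

OCI artifacts apply the same principle at the layer level: an updated model manifest references mostly unchanged, already-cached blobs.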
Speakers

Han Jiang

Software Engineer, Kuaishou
Software engineer at Kuaishou; previously worked in the Kubernetes ecosystem and on container-related technologies. Currently, he is focused on optimizing the inference performance of large language models.

Xiaoya Xia

Open Source Analyst, Ant Group
Xiaoya Xia is a member of the Ant Group OSPO, where she focuses on catalyzing open source success through data-driven insights. Before joining Ant Group, Xiaoya was a PhD student at East China Normal University (ECNU), where she concentrated on research into open source ecosystem sustain...

Wenbo Qi

Software Engineer, Ant Group
Wenbo Qi is a software engineer at Ant Group working on Dragonfly. He is a maintainer of Dragonfly. He hopes to make positive contributions to open source software and believes that fear springs from ignorance.

Eryu Guan

Software Engineer, Aliyun

Wenpeng Li

Alibaba Cloud
Tuesday June 10, 2025 11:00 - 11:30 HKT
Level 19 | Crystal Court I
  AI + ML

11:45 HKT

Defining a Specification for AI/ML Artifacts - Fog Dong, BentoML; Gorkem Ercan, Jozu; Peng Tao & Chlins Zhang, Ant Group; Xudong Wang, PayPal
Tuesday June 10, 2025 11:45 - 12:15 HKT
AI has become a prominent force in the cloud native ecosystem, and adoption in this emerging field continues to be massive. As frameworks and approaches are introduced, a pattern has emerged that threatens the ability to manage at scale: each implementation introduces its own format, runtime, and ways of working, fragmenting the ecosystem. On the other hand, open standards are the backbone of cohesive and scalable ecosystems.

This panel discussion seeks to explore the importance of defining standards within the CNCF ecosystem, particularly focusing on AI/ML artifacts. Beyond the advantages of the standard in facilitating integration with existing cloud native tools, this conversation will delve into how the standards can serve as a foundation for innovation. Join us to understand how standardization with innovative approaches can advance the cloud native AI landscape.
Speakers

Chlins Zhang

Software Engineer, Ant Group
Chenyu Zhang is a software engineer at Ant Group, currently mainly responsible for the development and maintenance of the Harbor project. He also has experience in DevOps and cloud native technology stacks.

Peng Tao

Staff Engineer, Ant Group
Kata Containers architecture committee member, Nydus maintainer, and Linux kernel developer.

Fog Dong

Senior Software Engineer, BentoML
Fog Dong is currently a senior engineer at BentoML. She is also a core maintainer of KubeVela and a CNCF Ambassador. She is dedicated to building open source communities and spares no effort in advancing open source projects, especially in the cloud native DevOps space. Currently, she is at BentoML...

Gorkem Ercan

CTO, Jozu
Gorkem Ercan is a co-founder and CTO of Jozu. Gorkem has experience working and leading teams with various technologies ranging from building IDEs, to building mobile phones, and CI/CD systems. He is an avid contributor and supporter of open source and previously served at the Eclipse...
Tuesday June 10, 2025 11:45 - 12:15 HKT
Level 19 | Crystal Court I
  AI + ML

13:45 HKT

Fast and Furious: Practice in Horizon Robotics on Large-scale End-to-end Model Training - Chen Yangxue, Horizon Robotics & Zhihao Xu, Alibaba Cloud
Tuesday June 10, 2025 13:45 - 14:15 HKT
End-to-end large model training is crucial for advancing autonomous driving technology. Horizon Robotics leads in this field by leveraging deep learning algorithms and chip design. They efficiently train and deploy advanced perception models like Sparse4D using cloud-native technologies.
Training these models poses challenges: managing massive video data and numerous small files, ensuring high-performance training with over 2,000 GPUs on RDMA, quickly identifying different kinds of failures, and diagnosing issues in large-scale training.
This session covers how Horizon Robotics manages large-scale training on Kubernetes. It highlights the role of distributed data caching, network topology awareness, and job affinity scheduling in optimizing a 2000 GPU training job. We'll also discuss strategies for restoring interrupted training jobs through backup machine replacement to enhance task resilience. Furthermore, experiences with CNCF projects like Volcano, Fluid, and NPD will be shared.
Speakers

Zhihao Xu

Software Engineer, Alibaba Cloud
Zhihao Xu is currently a software engineer at Alibaba Cloud focusing on infrastructure for AI model training and large-scale model inference. Also, he is now a Maintainer of the CNCF sandbox project Fluid, which is designed for data orchestration for data-intensive applications running...

Chen Yangxue

Software Engineer, Horizon Robotics
I'm Chen Yangxue, a software engineer at Horizon Robotics. With years of cloud-native experience, I'm building a ten-thousand-GPU training platform on a hybrid cloud setup. I've used tools like Kubernetes and Volcano to solve tough technical problems. I know how to optimize...
Tuesday June 10, 2025 13:45 - 14:15 HKT
Level 19 | Crystal Court I
  AI + ML
  • Content Experience Level Any
  • Presentation Language Chinese

14:30 HKT

More Than Model Sharding: LWS & Distributed Inference - Peter Pan & Nicole Li, DaoCloud
Tuesday June 10, 2025 14:30 - 15:00 HKT
Large LLMs like Llama3.1-405B or DeepSeek-V3 (671B) require distributed inference across multiple nodes, e.g., vLLM with a Ray backend.
However, it takes more than model slicing with tensor parallelism. Native Kubernetes treats these workloads on different nodes as unrelated, so challenges arise:
- standalone StatefulSets without coordination
- the need for gang scheduling
- uncontrolled startup order between master and workers, causing boot lag
- HPA for the whole group rather than per StatefulSet, so the Ray head and workers scale together
- stable index and rank
- topology-aware grouping
- failure recovery for vLLM/PyTorch (not smart enough on their own), to avoid one pod/GPU failure disrupting overall inference

----
So LWS (LeaderWorkerSet, github.com/kubernetes-sigs/lws) is designed to address them:
- optimizing resource coordination with leader-worker groups
- improving performance through co-location
- integrating scaling with the HPA for a whole LWS together
- an all-or-nothing restart policy for fault tolerance as a group
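For readers unfamiliar with the API, the grouping described in this abstract can be sketched as a minimal LeaderWorkerSet manifest built in Python. Field names follow the LeaderWorkerSet v1 API as published by kubernetes-sigs/lws, but treat the image name and GPU count as placeholders, not the speakers' configuration:

```python
def leader_worker_set(name: str, replicas: int, group_size: int,
                      image: str) -> dict:
    """Build a minimal LeaderWorkerSet manifest as a plain dict.

    Each replica is one inference group: 1 leader plus (group_size - 1)
    workers. RecreateGroupOnPodRestart gives the all-or-nothing restart
    semantics: if one pod fails, the whole group is recreated.
    """
    return {
        "apiVersion": "leaderworkerset.x-k8s.io/v1",
        "kind": "LeaderWorkerSet",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,       # number of leader+worker groups
            "leaderWorkerTemplate": {
                "size": group_size,     # pods per group, leader included
                "restartPolicy": "RecreateGroupOnPodRestart",
                "workerTemplate": {
                    "spec": {
                        "containers": [{
                            "name": "vllm",
                            "image": image,  # placeholder image
                            "resources": {"limits": {"nvidia.com/gpu": "8"}},
                        }]
                    }
                },
            },
        },
    }


lws = leader_worker_set("llama3-405b", replicas=2, group_size=4,
                        image="vllm/vllm-openai:latest")
total_pods = (lws["spec"]["replicas"]
              * lws["spec"]["leaderWorkerTemplate"]["size"])
print(total_pods)  # 8 pods, scheduled as two gangs of four
```

The point of the abstraction is that scaling and restarts operate on groups, not individual pods, which is what plain StatefulSets cannot express.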
Speakers

Nicole Li

Cloud Native Developer, DaoCloud
Cloud Native Developer, Service Mesh & Istio Contributor, AI Newbie

Peter Pan

R&D Engineering VP, DaoCloud
- DaoCloud Software Engineering VP
- Regular KubeCon Program Committee member: 2023 EU, 2024 HK, 2024 India, 2025 EU
- Regular KubeCon speaker: 2023 SH, 2024 EU, 2024 HK
- Maintainer of CNCF projects: cloudtty, kubean, hwameistor
- CNCF WG-AI (AI Working Group) member and CNAI white-paper...
Tuesday June 10, 2025 14:30 - 15:00 HKT
Level 19 | Crystal Court I
  AI + ML

15:30 HKT

Smart GPU Management: Dynamic Pooling, Sharing, and Scheduling for AI Workloads in Kubernetes - Wei Chen, China Unicom Cloud Data & Mengxuan Li, Dynamia
Tuesday June 10, 2025 15:30 - 16:00 HKT
With the rapid growth of AI applications, optimal GPU utilization is essential, particularly in GPU sharing and job scheduling. Balancing performance, flexibility, and isolation is as challenging as the “Impossible Trinity”. Technologies such as vCUDA, MPS, and MIG are promising attempts, but each has its pros and cons. Managing clusters with multiple sharing techniques adds complexity due to differing resource names and configurations.
In this talk, we will demonstrate how to combine these methods easily. Users specify the memory and core count without managing GPU types or sharing methods. Based on user preferences and GPU resources, the best node and method will be selected. Requests are automatically translated into optimal profiles, and GPUs are dynamically partitioned.
This approach streamlines GPU management, enhances utilization, and improves scheduling. By integrating Volcano and HAMi, the solution strengthens GPU pooling and scheduling, optimizing AI workload management.
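As a rough illustration of the selection idea described above (not code from the talk), the scheduler can score each node's available sharing method against the user's plain memory/core request; the capability table, scores, and field names below are invented for the sketch:

```python
# Hypothetical isolation ranking reflecting the talk's trade-off:
# MIG offers hardware isolation, MPS process-level sharing, vCUDA
# software-level interception.
ISOLATION_SCORE = {"MIG": 3, "MPS": 2, "vCUDA": 1}


def pick_node(request_mem_gb: int, request_cores: int, nodes: list):
    """Pick a node whose GPU fits the request, preferring stronger isolation.

    `nodes` is a list of dicts: {"name", "method", "free_mem_gb", "free_cores"}.
    Users never name a sharing method; the scheduler translates the plain
    memory/core request into the best available option.
    """
    feasible = [n for n in nodes
                if n["free_mem_gb"] >= request_mem_gb
                and n["free_cores"] >= request_cores]
    if not feasible:
        return None  # no node can satisfy the request
    return max(feasible, key=lambda n: ISOLATION_SCORE[n["method"]])


nodes = [
    {"name": "node-a", "method": "vCUDA", "free_mem_gb": 40, "free_cores": 100},
    {"name": "node-b", "method": "MIG",   "free_mem_gb": 20, "free_cores": 42},
    {"name": "node-c", "method": "MPS",   "free_mem_gb": 8,  "free_cores": 30},
]
best = pick_node(request_mem_gb=16, request_cores=40, nodes=nodes)
print(best["name"])  # node-b: fits the request and offers the strongest isolation
```

In the real system this translation step would also emit the concrete profile (e.g. a MIG partition size) for the chosen node, which is the part HAMi and Volcano automate.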
Speakers

Mengxuan Li

Software Engineer, Dynamia Inc
Member of the Volcano community, responsible for developing the GPU virtualization mechanism in Volcano. It has been merged into the master branch of Volcano and will be released in v1.8. Speaker at OpenAtom Global Open Source Commit #2023.

Wei Chen

Technical expert, China Unicom Cloud Data Co., Ltd
I am a technical expert at China Unicom Cloud Data Co., Ltd, specializing in cloud computing infrastructure. I actively contribute to open-source projects, including KubeEdge, openEuler iSula, and Volcano.
Tuesday June 10, 2025 15:30 - 16:00 HKT
Level 19 | Crystal Court I
  AI + ML
  • Content Experience Level Any
  • Presentation Language Chinese

16:15 HKT

Introducing AIBrix: Cost-Effective and Scalable Kubernetes Control Plane for vLLM - Jiaxin Shan & Liguang Xie, ByteDance
Tuesday June 10, 2025 16:15 - 16:45 HKT
Managing large-scale LLM inference workloads on Kubernetes requires more than just high-performance inference engines like vLLM. It demands a comprehensive control plane that integrates deeply with engines while addressing the complexities of large-scale operations. This need inspired the creation of AIBrix, a Kubernetes-native control plane designed to scale LLM inference with modularity, flexibility, and cutting-edge algorithms.

AIBrix introduces a pluggable architecture with components for LLM-specific autoscaling, high-density LoRA management, distributed KV cache, heterogeneous serving, model loading, and more. AIBrix emphasizes deep co-design with inference engines, enabling advanced features and optimizations. This talk will demonstrate AIBrix in action, showcasing its ability to improve scalability and optimize resource utilization. Additionally, we will present detailed benchmarks to evaluate the performance of these components, providing actionable insights for practitioners.
Speakers

Jiaxin Shan

Software Engineer, Bytedance
Jiaxin works at ByteDance Infrastructure Lab, focusing on serverless and AI infrastructure. He is also a co-chair of Kubernetes WG-Serving, where he drives innovations and contributes to the future of scalable AI systems.

Liguang Xie

Director of Engineering, ByteDance
Liguang Xie is an Engineering Lead at ByteDance’s Compute Infrastructure Team, leading next-gen serverless infrastructure design and overseeing open-source, research, and engineering efforts. He has extensive experience in large-scale distributed systems, AI/ML platforms, and LLM/GNN...
Tuesday June 10, 2025 16:15 - 16:45 HKT
Level 19 | Crystal Court I
  AI + ML

17:00 HKT

Portrait Service: AI-Driven PB-Scale Data Mining for Cost Optimization and Stability Enhancement - Yuji Liu & Zhiheng Sun, Kuaishou
Tuesday June 10, 2025 17:00 - 17:30 HKT
Kuaishou's Kubernetes-based platform manages 200,000+ machines and 10M+ Pods, generating 10TB+ of data daily. An AI-driven portrait service enhances stability and performance:
● Stability Management: AI analyzes system and workload metrics to generate machine health scores, integrated into Kubernetes scheduling to evict/avoid unhealthy nodes. This reduced pod creation delays from 20 to 0.1 cases/day and boosted service availability from 90% to 99.99%.
● Performance Optimization:
Serving 10,000+ services with diverse resource sensitivities (compute-, cache-, and IO-intensive), we combine AI with microarchitecture data to pinpoint bottlenecks and create application profiles. Optimizing resource allocation (compute, cache, memory bandwidth) has increased average IPC by 20% and reduced LLC miss rates for cache-sensitive services from over 50% to 10%.
Future plans include integrating AI Agent technology to automate anomaly detection and reduce manual operations by 80%.
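As a rough sketch of the stability mechanism described above (the scores and threshold are invented, not Kuaishou's data), a scheduler-side filter can drop candidate nodes whose model-produced health score falls below a cutoff:

```python
def healthy_candidates(nodes: list, scores: dict, threshold: float = 0.8) -> list:
    """Filter schedulable nodes by an AI-derived health score in [0, 1].

    Nodes below the threshold are skipped at scheduling time; a separate
    controller could additionally cordon or drain them. Unknown nodes
    default to 0.0 and are therefore excluded.
    """
    return [n for n in nodes if scores.get(n, 0.0) >= threshold]


# Hypothetical scores produced by the portrait model from system and
# workload metrics (hardware errors, latency outliers, etc.).
scores = {"node-1": 0.97, "node-2": 0.55, "node-3": 0.91}
print(healthy_candidates(["node-1", "node-2", "node-3"], scores))
```

The production version would plug this in as a scheduler filter/score plugin rather than a standalone function, so unhealthy nodes are avoided before pods are ever bound to them.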
Speakers

Yuji Liu

Software Engineer, Kuaishou Technology
Container cloud engineer from Kuaishou.

Zhiheng Sun

Senior Software Engineer, Kuaishou
I am a cloud-native engineer at Kuaishou, specializing in application performance improvement on Kubernetes. I have also led open-local, a cloud-native local storage project, in the open-source community.
Tuesday June 10, 2025 17:00 - 17:30 HKT
Level 19 | Crystal Court I
  AI + ML
 