ERDQ-LEARNING: A REINFORCEMENT LEARNING MODEL INTEGRATED WITH HPA FOR SERVERLESS POD SCALING IN KUBERNETES

Abstract

In serverless cloud computing environments, maintaining performance and Quality of Service (QoS) under fluctuating traffic conditions remains a significant challenge. The traditional Horizontal Pod Autoscaler (HPA) in Kubernetes, while fundamental, relies solely on fixed CPU utilization thresholds to scale Pods,
often resulting in delayed responses or inefficient orchestration in dynamic load scenarios. This paper proposes an enhanced reinforcement learning approach, ERDQ-learning, which integrates Experience Replay and Double Q-learning to optimize the scaling behavior of serverless Pods. The ERDQ-learning
model dynamically adjusts HPA’s CPU activation threshold in real time, using system state parameters such as CPU utilization, current Pod count, response latency, and the existing threshold. Experimental evaluations in a simulated Kubernetes environment with varying traffic patterns demonstrate that ERDQ-learning significantly enhances system adaptability, reduces latency, and improves resource efficiency compared to the traditional HPA. These results highlight the feasibility and effectiveness of the proposed model for intelligent resource orchestration in modern serverless systems.
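As a rough illustrative sketch only (not the authors' implementation), the core mechanism the abstract describes, Double Q-learning with experience replay over a state of (CPU utilization, Pod count, latency, current threshold) and actions that nudge the HPA CPU threshold, could be structured as follows; all names, discretization choices, and hyperparameter values here are assumptions:

```python
import random
from collections import defaultdict, deque

# Sketch: state is an assumed discretized tuple
# (cpu_bucket, pod_count, latency_bucket, threshold_bucket);
# actions lower, keep, or raise the HPA CPU activation threshold.
ACTIONS = (-1, 0, +1)

class ERDQAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1, buffer_size=1000):
        self.qa = defaultdict(float)            # first Q-table (Double Q-learning)
        self.qb = defaultdict(float)            # second Q-table
        self.replay = deque(maxlen=buffer_size) # experience replay buffer
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # epsilon-greedy over the sum of the two Q-tables
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.qa[(state, a)] + self.qb[(state, a)])

    def store(self, state, action, reward, next_state):
        # reward would encode latency/QoS and resource-efficiency goals
        self.replay.append((state, action, reward, next_state))

    def learn(self, batch_size=32):
        # sample past transitions uniformly from the replay buffer
        batch = random.sample(self.replay, min(batch_size, len(self.replay)))
        for s, a, r, s2 in batch:
            if random.random() < 0.5:
                # update Q_A: Q_B evaluates the action that is greedy under Q_A
                a_star = max(ACTIONS, key=lambda x: self.qa[(s2, x)])
                target = r + self.gamma * self.qb[(s2, a_star)]
                self.qa[(s, a)] += self.alpha * (target - self.qa[(s, a)])
            else:
                # symmetric update for Q_B
                b_star = max(ACTIONS, key=lambda x: self.qb[(s2, x)])
                target = r + self.gamma * self.qa[(s2, b_star)]
                self.qb[(s, a)] += self.alpha * (target - self.qb[(s, a)])
```

Decoupling action selection from action evaluation across the two tables is what counters Q-learning's overestimation bias, while the replay buffer reuses past scaling transitions to stabilize learning under fluctuating traffic.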

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2025 Array