Dynamic Scaling of Machine Learning Workloads: A Comparative Study of On-Prem and Cloud-Based Containers

Authors

  • Vinay Kumar Deeti, Arrowstreet Capital, Limited Partnership, USA

Keywords

dynamic scaling, machine learning workloads, containers, Kubernetes, resource orchestration

Abstract

This work provides a thorough study of dynamic scaling mechanisms for machine learning (ML) workloads, highlighting the operational trade-offs between on-premises and cloud-based containerized systems. Under varying workloads, the study primarily addresses performance elasticity, resource consumption efficiency, orchestration delay, and cost-effectiveness.

Published

09-08-2023

How to Cite

[1] “Dynamic Scaling of Machine Learning Workloads: A Comparative Study of On-Prem and Cloud-Based Containers”, J. Computational Intel. & Robotics, vol. 3, no. 2, pp. 123–137, Aug. 2023. Accessed: Mar. 07, 2026. [Online]. Available: https://thesciencebrigade.org/jcir/article/view/613