Please use this identifier to cite or link to this item: https://dspace.ncfu.ru/handle/20.500.12258/18614
Title: Dynamic performance-Energy tradeoff consolidation with contention-aware resource provisioning in containerized clouds
Authors: Babenko, M. G.
Бабенко, М. Г.
Keywords: Dynamic performance; Containerized clouds; Job concentration paradigm
Issue Date: 2022
Publisher: Public Library of Science
Citation: Canosa-Reyes R. M., Tchernykh A., Cortés-Mendoza J. M., Pulido-Gaytan B., Rivera-Rodriguez R., Lozano-Rizk J. E., Concepción-Morales E. R., Barrera H. E. C., Barrios-Hernandez C. J., Medrano-Jaimes F., Avetisyan A., Babenko M. G. Dynamic performance-Energy tradeoff consolidation with contention-aware resource provisioning in containerized clouds // PLoS ONE. - 2022. - Vol. 17. - Issue 1 January. - Article number e0261856. - DOI: 10.1371/journal.pone.0261856
Series/Report no.: PLoS ONE
Abstract: Containers have emerged as a more portable and efficient solution than virtual machines for cloud infrastructure, providing a flexible way to build and deploy applications. Quality of service, security, performance, and energy consumption, among others, are essential aspects of their deployment, management, and orchestration. Inappropriate resource allocation can lead to resource contention, entailing reduced performance, poor energy efficiency, and other potentially damaging effects. In this paper, we present a set of online job allocation strategies to optimize quality of service, energy savings, and completion time, considering contention for shared on-chip resources. We model job allocation as a multilevel dynamic bin-packing problem, which provides a lightweight runtime solution that minimizes contention and energy consumption while maximizing utilization. The proposed strategies are based on two and three levels of scheduling policies with container selection, capacity distribution, and contention-aware allocation. The energy model considers joint execution of applications of different types on shared resources, generalized by the job concentration paradigm. We provide an experimental analysis of eighty-six scheduling heuristics with scientific workloads of memory- and CPU-intensive jobs. The proposed techniques outperform classical solutions in terms of quality of service, energy savings, and completion time by 21.73-43.44%, 44.06-92.11%, and 16.38-24.17%, respectively, leading to a cost-efficient resource allocation for cloud infrastructures.
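The contention-aware allocation the abstract describes can be illustrated with a minimal sketch. This is not the authors' algorithm: it is a simple first-fit placement over hosts that, as an assumed heuristic, avoids co-locating two memory-intensive jobs on the same host. All field names (`cpu_free`, `mem_intensive`, etc.) are hypothetical.

```python
# Illustrative sketch only: a first-fit, contention-aware allocator in the
# spirit of the paper's bin-packing formulation; not the published method.

def allocate(jobs, hosts):
    """Place each job on the first host with enough free CPU and memory,
    preferring hosts that do not already run a memory-intensive job."""
    placement = {}
    for job in jobs:
        # Hosts with sufficient remaining capacity for this job.
        candidates = [h for h in hosts
                      if h["cpu_free"] >= job["cpu"] and h["mem_free"] >= job["mem"]]
        # Contention-aware preference: keep memory-intensive jobs apart.
        preferred = [h for h in candidates
                     if not (job["mem_intensive"] and h["has_mem_intensive"])]
        host = (preferred or candidates)[0] if candidates else None
        if host is None:
            placement[job["id"]] = None  # no capacity: job stays queued
            continue
        host["cpu_free"] -= job["cpu"]
        host["mem_free"] -= job["mem"]
        host["has_mem_intensive"] |= job["mem_intensive"]
        placement[job["id"]] = host["id"]
    return placement
```

For example, with two equal hosts and two memory-intensive jobs, the second job lands on the second host even though the first still has capacity, trading packing density for reduced shared-resource contention.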
URI: http://hdl.handle.net/20.500.12258/18614
Appears in Collections: Articles indexed in SCOPUS, WOS

Files in This Item:
scopusresults 2032 .pdf (Restricted Access) - 150.78 kB - Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.