RT Conference Proceedings
T1 MAMUT: Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-User Video Transcoding
A1 Costero Valero, Luis María
A1 Iranfar, Arman
A1 Zapater Sancho, Marina
A1 Igual Peña, Francisco Daniel
A1 Olcoz Herrero, Katzalin
A1 Atienza Alonso, David
AB Real-time video transcoding has recently emerged as a valid alternative to address the ever-increasing demand for video content on server infrastructures in current multi-user environments. High Efficiency Video Coding (HEVC) makes efficient online transcoding feasible: it enhances user experience by providing an adequate video configuration, reduces pressure on the network, and minimizes inefficient and costly video storage. However, the computational complexity of HEVC, together with its myriad of configuration parameters, raises challenges for power management, throughput control, and Quality of Service (QoS) satisfaction. This is particularly challenging in multi-user environments, where multiple users with different resolution demands and bandwidth constraints must be served simultaneously. In this work, we present MAMUT, a multi-agent machine learning approach to tackle these challenges. Our proposal breaks the design space composed of run-time adaptations of transcoder and system parameters into smaller sub-spaces that can be explored in a reasonable time by individual agents. While working cooperatively, each agent is in charge of learning and applying the optimal values for internal HEVC and system-wide parameters. In particular, MAMUT dynamically tunes the Quantization Parameter, selects the number of threads per video, and sets the operating frequency to meet throughput and video quality objectives under compression and power consumption constraints. We implement MAMUT on an enterprise multicore server and compare it against state-of-the-art alternative approaches in equivalent scenarios. The obtained results reveal that MAMUT consistently attains up to an 8x reduction in FPS violations (and thus improved Quality of Service) and 24% lower power consumption, as well as faster and more accurate adaptation to both the video content and the available resources.
YR 2019
FD 2019
LK https://hdl.handle.net/20.500.14352/96486
UL https://hdl.handle.net/20.500.14352/96486
LA eng
DS Docta Complutense
RD 6 Oct 2024