Una nueva arquitectura de simulación distribuida dirigida por eventos (A new event-driven distributed simulation architecture)
Publication date: 2020
Defense date: 2020
Abstract
Today's computers are multi-core systems, in which each processor contains several execution cores. These shared-memory systems can be used individually or combined to form a distributed-memory supercomputer. Exploiting both levels of parallelism in these machines is essential to get the most out of their computing capacity, and doing so is an active area of research.
The cloud computing paradigm, in turn, offers users an almost unlimited pool of resources under a pay-per-use model. For applications built on this paradigm to scale properly, their users need tools capable of exploiting the resources they have contracted, whether a shared-memory processor or a distributed-memory cluster. Containers such as Docker are very useful here, since they let application developers ship the entire execution environment inside the container: users obtain a correctly configured application simply by choosing the right image for their system.
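As a purely illustrative sketch of this "choose the right image" workflow (not tooling described in the thesis), the following uses the Docker SDK for Python; the image name des-simulator:latest and its command-line flags are hypothetical stand-ins for any image that bundles a simulator and its full runtime.

    # Run a containerized simulator via the Docker SDK for Python
    # (pip install docker). The image name and flags are hypothetical.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Pulling the right image is all the configuration the user needs:
    # the container already carries the simulator and its dependencies.
    output = client.containers.run(
        image="des-simulator:latest",                    # hypothetical image
        command=["--model", "queue.json", "--until", "1000"],
        remove=True,                                     # clean up afterwards
    )
    print(output.decode())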
Simulators are a particularly relevant group of such applications, widely used in science to analyze the feasibility of systems. Simulation, and discrete-event simulation in particular, has been exported to the cloud over the last decade under the "simulation as a service" paradigm. However, this format has proven rather limiting and restricted to large systems, leaving aside execution on shared-memory machines. This Master's thesis analyzes new distributed simulation protocols from a more pragmatic standpoint, in line with the technological evolution of both current processors and the cloud itself.
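To fix ideas about the simulation model the thesis builds on, here is a minimal sequential discrete-event simulation kernel in Python: events live in a priority queue ordered by timestamp and are processed chronologically. This is a generic sketch of event-driven simulation, not the distributed architecture the thesis proposes.

    # Minimal sequential discrete-event simulation kernel: a priority
    # queue of (time, tiebreak, action) triples processed in time order.
    import heapq
    import itertools

    class Simulator:
        def __init__(self):
            self._queue = []               # heap of (time, tiebreak, action)
            self._ids = itertools.count()  # tiebreak for simultaneous events
            self.now = 0.0                 # current simulation time

        def schedule(self, delay, action):
            """Schedule a callable to fire `delay` time units from now."""
            heapq.heappush(self._queue,
                           (self.now + delay, next(self._ids), action))

        def run(self, until):
            """Advance the clock event by event up to time `until`."""
            while self._queue and self._queue[0][0] <= until:
                self.now, _, action = heapq.heappop(self._queue)
                action(self)

    # Usage: a source that fires every 2 time units and reschedules itself.
    def emit(sim):
        print(f"t={sim.now:.1f}: event fired")
        sim.schedule(2.0, emit)

    sim = Simulator()
    sim.schedule(0.0, emit)
    sim.run(until=6.0)

A distributed simulator partitions this event queue across processes and adds a synchronization protocol so that each partition still consumes events in timestamp order, which is exactly the design space the thesis explores.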
Description
Master's thesis (Trabajo de Fin de Máster) in the Máster en Ingeniería Informática, Facultad de Informática UCM, Departamento de Arquitectura de Computadores y Automática, academic year 2019/2020.