Person:
Gómez Pérez, José Ignacio

First Name
José Ignacio
Last Name
Gómez Pérez
Affiliation
Universidad Complutense de Madrid
Faculty / Institute
Informática
Department
Arquitectura de Computadores y Automática
Area
Arquitectura y Tecnología de Computadores
Identifiers
UCM identifier, ORCID, Scopus Author ID, Dialnet ID, Google Scholar ID

Search Results

Now showing 1 - 10 of 12
  • Item
    Project number: 126
    Herramientas para el diseño y gestión de Guías Docentes digitales
    (2021) García Payo, M. Carmen; Aranda Iriarte, José Ignacio; Franco Peláez, Francisco Javier; Tenllado Van Der Reijden, Christian Tomás; García Sánchez, Carlos; Gómez Pérez, José Ignacio; Riveira Martín, Mercedes del Carmen; Sanmartino Rodríguez, Julio Antonio; Payo Rubio, Marina; Pino Hernández, Javier; Díaz Núñez, Guillermo Jesús; Villar Serrano, Daniel
    The goal of this project is to build a web tool that lets lecturers update the teaching records of their courses online through a web form, storing the guide information in a database so that the system highlights the changes made, manages user access and permissions, and allows the course records to be exported and generated in various formats while respecting the sections and conditions of the Verification Report (VERIFICA).
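    A minimal illustrative sketch of one behaviour the abstract mentions, highlighting the fields changed between two stored versions of a course guide; the function, field names, and records below are hypothetical, not the project's actual implementation.

    def changed_fields(previous, current):
        """Return {field: (old, new)} for every field edited by the lecturer."""
        return {
            field: (previous.get(field), current.get(field))
            for field in set(previous) | set(current)
            if previous.get(field) != current.get(field)
        }

    # Hypothetical guide records as stored in the database.
    old_guide = {"objectives": "Understand caches", "bibliography": "Hennessy & Patterson"}
    new_guide = {"objectives": "Understand caches and TLBs", "bibliography": "Hennessy & Patterson"}
    print(changed_fields(old_guide, new_guide))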
  • Item
    Project number: 172
    Integración de los servicios para.TI@UCM en una plataforma de e-learning similar al Campus Virtual
    (2014) Sánchez-Elez Martín, Marcos; Risco Martín, José Luis; Pardines Lence, María Inmaculada; Garnica Alcázar, Antonio Óscar; Miñana Ropero, María Guadalupe; Gómez Pérez, José Ignacio; Olcoz Herrero, Katzalin; Chaver Martínez, Daniel Ángel; Castro Rodríguez, Fernando; Sáez Alcaide, Juan Carlos; Igual Peña, Francisco Daniel
    The integration of the para.TI@UCM services at our University prompts us to consider new teaching and assessment methodologies in the teaching-learning process. This project continues project PIMCD UCM 138 (2013), titled “Uso de los servicios para.TI@UCM para integrar tareas docentes y fomentar el aprendizaje activo y colaborativo de los alumnos”, carried out by this same group of lecturers. That project produced a series of tutorials on using Google applications for teaching tasks as useful tools to encourage student learning. Building on the new teaching framework created in PIMCD UCM 138 (2013), in which both the teaching material and the activities proposed to students are developed in the cloud, the aim of this new project is to integrate all the applications needed for a complete development of teaching activity in the cloud (para.TI@UCM), both those provided by Google and those developed by third parties. Our goal is to create an e-learning platform similar to the Campus Virtual. To do so, we will need to study, on the one hand, the functionality offered by the Campus Virtual and, on the other, which of those functions are available in the para.TI@UCM resources. The next step would be to work out how the functionality that is needed but not found in para.TI@UCM can be implemented using the Google applications as a base.
  • Item
    Adaptive mapping and parameter selection scheme to improve automatic code generation for GPUs.
    (2014) Juega, J. C.; Gómez Pérez, José Ignacio; Tenllado Van Der Reijden, Christian Tomás; Catthoor, F.
    Graphics Processing Units (GPUs) are today’s most powerful coprocessors for accelerating massive data-parallel algorithms. However, programmers are forced to adopt new programming paradigms to take full advantage of their computing capabilities; this requires significant programming and maintenance effort. As a result, there is increasing interest in the development of tools for automatically mapping sequential code to GPUs. Current automatic tools require deep knowledge of both the GPU architecture and the algorithm being mapped, which makes the mapping process a labor-intensive task. This paper proposes a technique that improves the code mapping of one of these tools, PPCG, removing the need for any user interaction. It relies on data-reuse estimations to explore the mapping space and compute appropriate values for the number of threads per thread block and the tile sizes. Our results show speedups of 3x on average compared to the default code generated by PPCG.
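    As a rough illustration of the kind of search described above (not PPCG's actual heuristic), the sketch below ranks candidate tile sizes for a tiled matrix multiply by a simple data-reuse estimate under an assumed shared-memory budget; all constants and names are hypothetical.

    from itertools import product

    SHARED_MEM_BYTES = 48 * 1024   # assumed per-SM shared-memory budget
    ELEM_BYTES = 4                 # float32

    def fits_in_shared(tile_m, tile_n, tile_k):
        # Tiles of A (m x k) and B (k x n) must both be staged in shared memory.
        return ((tile_m * tile_k) + (tile_k * tile_n)) * ELEM_BYTES <= SHARED_MEM_BYTES

    def reuse_score(tile_m, tile_n, tile_k):
        # Multiply-accumulates covered per element loaded into shared memory.
        loads = (tile_m * tile_k) + (tile_k * tile_n)
        work = tile_m * tile_n * tile_k
        return work / loads

    candidates = [t for t in product([16, 32, 64, 128], repeat=3) if fits_in_shared(*t)]
    best = max(candidates, key=lambda t: reuse_score(*t))
    threads_per_block = min(best[0] * best[1], 1024)   # cap at the hardware limit
    print("chosen tile sizes:", best, "threads per block:", threads_per_block)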
  • Item
    Project number: 346
    Generación automática de informes del programa Docentia para las memorias de seguimiento de los centros
    (2015) López Orozco, José Antonio; Díaz Agudo, María Belén; Piñuel Moreno, Luis; Chaver Martínez, Daniel Ángel; Gómez Pérez, José Ignacio; Castro Rodríguez, Fernando; García Sánchez, Carlos; Tenllado Van Der Reijden, Christian Tomás
    Final report of Teaching Innovation and Quality Improvement Project (Proyecto de Innovación y Mejora de la Calidad Docente, PIMCD) 346 from the 2014 call.
  • Item
    Improving the representativeness of simulation intervals for the cache memory system
    (IEEE Access, 2024) Bueno Mora, Nicolás; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Gómez Pérez, José Ignacio; Catthoor, Francky
    Accurate simulation techniques are indispensable for efficiently proposing new memory or architectural organizations. As implementing new hardware concepts in real systems is often not feasible, cycle-accurate simulators employed together with certain benchmarks are commonly used. However, detailed simulators may take too much time to execute these programs to completion, so several techniques aimed at reducing this time are usually employed. These schemes select fragments of the source code considered representative of the entire application’s behaviour (mainly in terms of performance, without fully considering the behaviour of the cache memory levels), and only these intervals are simulated. Our hypothesis is that the simulation windows currently employed when evaluating microarchitectural proposals, especially those involving the last-level cache (LLC), do not reproduce the overall cache behaviour of the entire execution, potentially leading to wrong conclusions about the real performance of the proposals assessed. In this work, we first demonstrate this hypothesis by evaluating different cache replacement policies using various typical simulation approaches. We then propose a simulation strategy, based on the applications’ LLC activity, which mimics the overall behaviour of the cache much more closely than conventional simulation intervals. Our proposal allows a fairer comparison between cache-related approaches: on average, it reports more than 30% fewer changes in the relative order of the policies assessed, with respect to the full simulation, than conventional strategies, while leaving the simulation time largely unchanged and without losing accuracy in performance terms, especially for memory-intensive applications.
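    A minimal sketch of the underlying idea (not the paper's actual strategy): choose the simulation window whose LLC activity is closest to that of the full run. The per-interval miss counts below are invented profiler output.

    def pick_window(misses, length):
        """Return the start index of the window whose mean LLC miss count is
        closest to the mean over the entire execution."""
        full_mean = sum(misses) / len(misses)
        best_start, best_err = 0, float("inf")
        for start in range(len(misses) - length + 1):
            window_mean = sum(misses[start:start + length]) / length
            err = abs(window_mean - full_mean)
            if err < best_err:
                best_start, best_err = start, err
        return best_start

    # Example with made-up per-interval LLC miss counts:
    misses = [120, 80, 300, 290, 310, 100, 90, 305, 295, 110]
    print("simulate the window starting at interval", pick_window(misses, length=3))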
  • Item
    Time-Dependent Electromigration Modeling for Workload-Aware Design-Space Exploration in STT-MRAM
    (IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022) Mayahinia, Mahta; Tahoori, Mehdi; Komalan, Manu Perumkunnil; Zahedmanesh, Houman; Croes, Kristof; Marinelli, Tommaso; Gómez Pérez, José Ignacio; Evenblij, Timon; Kar, Gouri Sankar; Catthoor, Francky
    Electromigration (EM) has long been known as a factor threatening the reliability of back-end-of-line interconnects. Spin Transfer Torque Magnetic RAM (STT-MRAM) is an emerging non-volatile memory that has gained a lot of attention in recent years. However, its relatively large operational current is a challenge for this technology, and hence EM can be a potential reliability concern even for the signal lines of this memory. Workload-aware EM modeling needs to capture the time-dependent current density in the memory signal lines and to predict the effect of the EM phenomenon on the interconnect over its entire lifetime. In this work, we present methods to effectively model the workload-dependent EM-induced mean time to failure (MTTF) in typical STT-MRAM arrays under a variety of realistic workloads. This enables design-space exploration that co-optimizes reliability and other design metrics.
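    For illustration only, the sketch below plugs a workload-derived average current density into Black's equation, the classical EM mean-time-to-failure model; the constants, duty cycle, and wire dimensions are placeholders, not values or methods from the paper.

    import math

    K_BOLTZMANN_EV = 8.617e-5      # Boltzmann constant in eV/K

    def mttf_blacks(current_density, temp_kelvin, A=1.0e3, n=2.0, Ea=0.9):
        """Black's equation: MTTF = A * J**-n * exp(Ea / (k*T)). A, n, Ea are placeholders."""
        return A * current_density ** (-n) * math.exp(Ea / (K_BOLTZMANN_EV * temp_kelvin))

    def avg_current_density(write_current_amps, duty_cycle, wire_area_m2):
        """Time-averaged current density in a signal line for a given write duty cycle."""
        return write_current_amps * duty_cycle / wire_area_m2

    # Hypothetical STT-MRAM write line: 100 uA writes, 5% write duty cycle, 40nm x 40nm wire, 85 C.
    J = avg_current_density(100e-6, 0.05, 40e-9 * 40e-9)
    print(f"relative MTTF estimate: {mttf_blacks(J, 358):.3e}")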
  • Item
    Microarchitectural Exploration of STT-MRAM Last-level Cache Parameters for Energy-efficient Devices
    (ACM Transactions on Embedded Computing Systems (TECS), 2022) Komalan, Manu; Gupta, Mohit; Catthoor, Francky; Gómez Pérez, José Ignacio; Marinelli, Tommaso; Tenllado Van Der Reijden, Christian Tomás
    As technology scaling advances, the limitations of traditional memories in terms of density and energy become more evident. Modern caches occupy a large part of a CPU’s physical area, and high static leakage limits the overall efficiency of systems, including IoT/edge devices. Several alternatives to CMOS SRAM memories have been studied during the past few decades, some of which already represent a viable replacement for different levels of the cache hierarchy. One of the most promising technologies is spin-transfer torque magnetic RAM (STT-MRAM), due to its small basic cell design, almost absent static current, and non-volatility as an added value. However, nothing comes for free, and designers have to deal with other limitations, such as higher latencies and dynamic energy consumption for write operations compared to reads. The goal of this work is to explore several microarchitectural parameters that may overcome some of those drawbacks when using STT-MRAM as the last-level cache (LLC) in embedded devices. Such parameters include the number of cache banks, the number of miss status handling registers (MSHRs) and write-buffer entries, and the presence of hardware prefetchers. We show that effective tuning of those parameters may virtually remove any performance loss while saving more than 60% of the LLC energy on average. The analysis is then extended by comparing the energy results from calibrated technology models with data obtained with freely available tools, highlighting the importance of using accurate models for architectural exploration.
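    A hypothetical sketch of the exploration loop described above: sweep the listed LLC parameters and keep the configuration with the best energy-delay product. The simulate() function is a toy stand-in for a cycle-accurate simulator, and its cost model is made up.

    from itertools import product

    def simulate(banks, mshrs, wb_entries, prefetcher):
        """Toy stand-in for a detailed simulation, returning (energy_mJ, time_ms)."""
        # More banks, MSHRs and write-buffer entries hide write latency at a small energy cost.
        time_ms = (100.0 / (1 + 0.1 * banks + 0.05 * mshrs + 0.02 * wb_entries)
                   * (0.9 if prefetcher else 1.0))
        energy_mj = 50.0 + 0.5 * banks + 0.1 * mshrs + 0.05 * wb_entries + (2.0 if prefetcher else 0.0)
        return energy_mj, time_ms

    def explore():
        best_cfg, best_edp = None, float("inf")
        for banks, mshrs, wb, pf in product([1, 2, 4, 8], [4, 8, 16], [4, 8, 16], [False, True]):
            energy, time_ms = simulate(banks, mshrs, wb, pf)
            edp = energy * time_ms                   # energy-delay product
            if edp < best_edp:
                best_cfg, best_edp = (banks, mshrs, wb, pf), edp
        return best_cfg, best_edp

    print("best (banks, MSHRs, WB entries, prefetcher):", explore())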
  • Item
    COMPAD: A heterogeneous cache-scratchpad CPU architecture with data layout compaction for embedded loop-dominated applications
    (Journal of Systems Architecture, 2023) Marinelli, Tommaso; Gómez Pérez, José Ignacio; Tenllado Van Der Reijden, Christian Tomás; Catthoor, Francky
    The growing trend of pervasive computing has consolidated the everlasting need for power-efficient devices. The conventional cache subsystem of general-purpose CPUs, while able to adapt to many use cases, suffers from energy inefficiencies in some scenarios. It is by now well known in the academic literature that using a scratchpad memory (SPM) can help reduce the overall energy consumption of embedded systems. This work proposes a hybrid cache-SPM architecture with support logic for semi-transparent data management and spatial-locality improvement. Selected data are transferred to and stored in the SPM in a compact form using dynamic layout transformation. As a second major contribution, we introduce a methodology to identify memory access sequences that make inefficient use of the cache, marking them as candidates to be moved to an SPM of constrained size. The methodology does not require access to the source code of the target applications, relying instead on binary instrumentation and offline profiling. The resulting mapping policies have been tested on a simulated system, showing a mean reduction in memory dynamic energy of 43% and a mean speedup of 13% on a representative benchmark set.
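    A rough sketch of the selection step described above (not COMPAD itself): from an assumed offline access profile, flag data objects that use the cache inefficiently and greedily pack the hottest ones into the scratchpad budget. The profile format, thresholds, and object names are illustrative assumptions.

    SPM_BYTES = 16 * 1024
    CACHE_LINE = 64

    def spm_candidates(profile):
        """profile: list of dicts with 'name', 'footprint', 'accesses', 'avg_stride'.
        Keep objects whose large stride wastes cache lines, then pack the hottest ones."""
        candidates = []
        for obj in profile:
            wastes_lines = obj["avg_stride"] > CACHE_LINE   # most of each fetched line goes unused
            if wastes_lines and obj["footprint"] <= SPM_BYTES:
                candidates.append(obj)
        candidates.sort(key=lambda o: o["accesses"] / o["footprint"], reverse=True)
        chosen, used = [], 0
        for obj in candidates:
            if used + obj["footprint"] <= SPM_BYTES:
                chosen.append(obj["name"])
                used += obj["footprint"]
        return chosen

    # Made-up profile of three data objects from an instrumented binary.
    profile = [
        {"name": "coeffs", "footprint": 8192, "accesses": 200000, "avg_stride": 256},
        {"name": "frame",  "footprint": 65536, "accesses": 50000, "avg_stride": 4},
        {"name": "lut",    "footprint": 4096, "accesses": 120000, "avg_stride": 128},
    ]
    print("move to SPM:", spm_candidates(profile))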
  • Item
    A Comparative Analysis on the Impact of Bank Contention in STT-MRAM and SRAM Based LLCs
    (2019) Evenblij, Timon; Komalan, Manu Perumkunnil; Catthoor, Francky; Sakhare, Sushil; Debacker, Peter; Kar, Gouri Sankar; Furnemont, Arnaud; Bueno Mora, Nicolás; Gómez Pérez, José Ignacio; Tenllado Van Der Reijden, Christian Tomás; IEEE
    Spin Transfer Torque Magnetic RAM (STT-MRAM) is being extensively considered as a promising replacement for last-level caches (LLCs), due to its high density, low leakage and non-volatility. However, writes to STT-MRAM are energy-intensive and have a high latency. While the high dynamic energy consumption during writes can be compensated by the low static energy consumption, the high latency results in performance degradation. This work shows that, in contrast to SRAM-based LLCs, the performance degradation for STT-MRAM is primarily due to bank contention when trying to satisfy a read request while the bank is being written. We holistically explore the effects of cache banking and cache contention on energy and performance in the LLC of mobile multicore systems, with both in-order and out-of-order cores. The detail of the analysis is enabled by highly accurate cache models, based on a 28nm SRAM industry compiler and an in-house STT-MRAM compiler that generates full STT-MRAM macro designs with a silicon-validated MTJ stack and complete parasitic extraction at the 28nm node. Our results show that there is a clear difference in the energy-performance-optimal banking configuration between STT-MRAM caches and SRAM caches. These low-contention STT-MRAM cache designs with the optimal number of banks save at least 60% of the cache energy while losing at most single-digit percentages in system performance compared to SRAM cache designs. This shows the increased potential of using STT-MRAM as a replacement for SRAM in the LLC.
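    A back-of-the-envelope sketch of the bank-contention effect discussed above (not the paper's model): the chance that a read finds its bank busy grows with write latency and shrinks with the number of banks. The write rate and latencies are placeholders.

    def bank_busy_probability(write_rate_per_cycle, write_latency_cycles, num_banks):
        """Fraction of time a given bank is occupied by a write, assuming writes
        spread uniformly over the banks."""
        per_bank_write_rate = write_rate_per_cycle / num_banks
        return min(1.0, per_bank_write_rate * write_latency_cycles)

    write_rate = 0.05   # writes per cycle arriving at the LLC (hypothetical)
    for tech, write_latency in [("SRAM", 10), ("STT-MRAM", 60)]:
        for banks in (1, 2, 4, 8, 16):
            p = bank_busy_probability(write_rate, write_latency, banks)
            print(f"{tech:9s} banks={banks:2d}  P(read blocked by a write) ~ {p:.2f}")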
  • Item
    Project number: 151
    Virtualización de Laboratorios de la Materia Sistemas Operativos y Redes mediante Contenedores
    (2023) Sánchez-Elez Martín, Marcos; Pardines Lence, María Inmaculada; Gómez Pérez, José Ignacio; Moreno Vozmediano, Rafael Aurelio; Olcoz Herrero, Katzalin; Risco Martín, José Luis; Ruiz Gallego-Largo, Rafael; Soria Jiménez, David; Miñana Ropero, María Guadalupe; Molina Prego, María Del Carmen; Sánchez Muñoz, Eduardo