Person:
Murillo Montero, Raúl

First Name
Raúl
Last Name
Murillo Montero
Affiliation
Universidad Complutense de Madrid
Faculty / Institute
Ciencias Físicas
Department
Arquitectura de Computadores y Automática
Identifiers
UCM identifier, ORCID, Scopus Author ID, Dialnet ID

Search Results

Now showing 1 - 9 of 9
  • Item
    HUB meets posit: arithmetic units implementation
    (IEEE Transactions on Circuits and Systems II: Express Briefs, 2024) Murillo Montero, Raúl; Hormigo, Javier; Del Barrio García, Alberto Antonio; Botella Juan, Guillermo
The posit™ format was introduced in 2017 as an alternative intended to replace the widespread IEEE 754 standard. Posit arithmetic provides reproducible results across platforms and possesses tapered accuracy, among other improvements. Nevertheless, despite the advantages provided by such a format, its functional units are not yet as competitive as the IEEE 754 ones. The HUB approach was presented in 2016 to reduce the hardware cost of floating-point units. In this brief, we present HUB posit, a new format to mitigate the hardware overhead of posit units. Results show that it is possible to obtain improvements of up to 15% and 12% in area-delay product for adders and multipliers, respectively, while maintaining a similar level of accuracy. In addition, synthesis results show that HUB posit units are able to reach higher frequencies than conventional ones.
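The gain from HUB (half-unit-biased) encodings is that the stored value carries an implicit least-significant bit set to one, so converting from a conventional format reduces to truncation while keeping the usual half-ulp error bound, and no rounding adder is needed. The following is a minimal software sketch of that idea under those assumptions; the function names and the 8-bit fraction width are illustrative choices, not the hardware designs evaluated in the brief.

```python
import random

def round_to_nearest(x: float, frac_bits: int) -> float:
    """Conventional round-to-nearest on a fixed number of fraction bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def hub_store(x: float, frac_bits: int) -> float:
    """HUB-style storage: truncate to frac_bits and treat an implicit '1'
    just below the last kept bit as part of the represented value."""
    scale = 1 << frac_bits
    kept = int(x * scale)          # plain truncation, no carry propagation
    return (kept + 0.5) / scale    # the implicit LSB contributes half an ulp

random.seed(0)
worst_rtn = worst_hub = 0.0
for _ in range(10_000):
    x = random.random()
    worst_rtn = max(worst_rtn, abs(x - round_to_nearest(x, 8)))
    worst_hub = max(worst_hub, abs(x - hub_store(x, 8)))

# Both errors stay within half an ulp (2**-9 here), but the HUB path avoids
# the rounding addition, which is where the hardware savings come from.
print(worst_rtn, worst_hub)
```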
  • Item
    Generating posit-based accelerators with high-level synthesis
    (IEEE Transactions on Circuits and Systems I: Regular Papers, 2023) Murillo Montero, Raúl; Del Barrio García, Alberto Antonio; Botella Juan, Guillermo; Pilato, Christian
Recently, the posit number system has demonstrated higher accuracy than standard floating-point arithmetic for many scientific applications. However, when it comes to implementing accelerators for these applications, tool support for this arithmetic format is still missing, especially during the high-level synthesis step. In this paper, we incorporate the posit data type into the high-level synthesis (HLS) design process, so that we can generate the implementation directly from a given behavioral specification, but using posit numbers instead of the classical floating-point notations. Our evaluations show that, even if posit-based circuits require more area than their floating-point counterparts, they offer higher accuracy when using the same bitwidth. For example, using posit arithmetic can reduce computation errors by about two orders of magnitude when compared to using standard floating-point numbers. Our approach also includes an alternative that mitigates the high overhead of posits and broadens the potential use of this format: a hybrid scheme that uses posit numbers only in the private local memory, while the accelerator operates in the classic floating-point notation. This solution is useful when designers want to optimize local memories and data transfers but still use legacy HLS tools that only support traditional floating-point notations.
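The hybrid scheme can be pictured as converting only at the local-memory boundary, while the datapath keeps computing in conventional floating point. The sketch below shows that structure in plain Python; IEEE half precision (struct format 'e') is used as a stand-in for the posit storage format, since the point here is the boundary conversion rather than posit encoding, and the function names are illustrative rather than taken from the paper.

```python
import struct

def to_local_memory(x: float) -> bytes:
    """Store a value in the accelerator's private local memory in a
    narrower format (half precision here, as a stand-in for posits)."""
    return struct.pack("<e", x)

def from_local_memory(buf: bytes) -> float:
    """Convert back to the wider float type the datapath computes in."""
    return struct.unpack("<e", buf)[0]

# The kernel itself stays in conventional floating point; only loads and
# stores at the local-memory boundary pay for the conversion, which is
# what shrinks the memory footprint and the data transfers.
local = [to_local_memory(v) for v in (0.1, 0.2, 0.3)]
acc = 0.0
for word in local:
    acc += from_local_memory(word)
print(acc)
```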
  • Item
Efectos de la Precisión Numérica en las Aplicaciones Científicas
    (2022) Murillo Montero, Raúl; Del Barrio García, Alberto Antonio; Botella Juan, Guillermo
Over the last few decades, multiple formats have appeared as alternatives to the IEEE 754™ standard for floating-point arithmetic. The use of reduced-precision formats, such as bfloat16, or of new arithmetic systems, such as posit™, is of growing interest not only in the field of machine learning: general-purpose numerical algorithms can also benefit from these non-standard formats by reducing computation time, energy consumption or memory requirements. However, dedicated hardware for the new arithmetic formats is not always available, and simulating different numerical precisions is essential to experiment with alternative formats not yet implemented in hardware. In this work, we examine, through software emulation, the effects that different arithmetic formats and numerical precisions have on a wide variety of scientific applications. We show the limits of numerical precision in each application, as well as the advantages of using one arithmetic format or another in each situation. The experiments in this work reveal that, with the same bit width, posit arithmetic yields errors up to two orders of magnitude smaller than the floating-point format.
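Software emulation of a reduced-precision format usually amounts to forcing every stored value back into the narrow format. A minimal sketch of that methodology is shown below, with bfloat16 (obtained by truncating the low 16 bits of an IEEE binary32 word) as the emulated format; it only illustrates the kind of emulation described, not the actual framework used in the paper.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate bfloat16 storage by keeping only the top 16 bits of the
    IEEE binary32 encoding (truncation; real hardware typically rounds)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return y

# Accumulating a small constant in emulated bfloat16 exposes the precision
# limit of the format: the running total stalls far below the exact 1000.0
# once its ulp grows larger than the increment.
total = 0.0
for _ in range(10_000):
    total = to_bfloat16(total + to_bfloat16(0.1))
print(total)
```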
  • Item
    Leveraging Posit Arithmetic in Deep Neural Networks
    (2021) Murillo Montero, Raúl; Barrio García, Alberto Antonio del; Botella Juan, Guillermo
The IEEE 754 Standard for Floating-Point Arithmetic has for decades been implemented in the vast majority of modern computer systems to manipulate and compute real numbers. Recently, John L. Gustafson introduced a new data type called posit™ to represent real numbers on computers. This emerging format was designed with the aim of replacing IEEE 754 floating-point numbers by providing certain advantages over them, such as a larger dynamic range, higher accuracy, bitwise identical results across systems, or simpler hardware, among others. The interesting properties of the posit format seem to be particularly useful in the context of deep neural networks. In this Master's thesis, the properties of posit arithmetic are studied with the aim of leveraging them for the training and inference of deep neural networks. For this purpose, a framework for neural networks based on the posit format is developed. The results show that posits can achieve similar accuracy to floating-point numbers with half of the bit width, without modifications to the training and inference flows of deep neural networks. The hardware cost of the posit arithmetic units needed for operating with neural networks (that is, additions and multiplications) is also studied in this work, obtaining great improvements in area and power savings with respect to state-of-the-art implementations.
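Experiments of this kind are often run with "fake quantization": values are round-tripped through the narrow format while the surrounding arithmetic stays in ordinary floats. The sketch below shows that flow for a single dense layer, round-tripping through IEEE half precision as a stand-in quantizer (the thesis targets posits); the layer sizes, names and values are illustrative only.

```python
import struct

def quantize(x: float) -> float:
    """Round-trip a value through a narrow format (IEEE half precision
    here, standing in for a posit quantizer)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

def dense_relu(inputs, weights, bias):
    """One dense layer with ReLU, evaluated on quantized weights and
    activations while accumulating in ordinary double precision."""
    outputs = []
    for row, b in zip(weights, bias):
        acc = b
        for x, w in zip(inputs, row):
            acc += quantize(x) * quantize(w)
        outputs.append(max(acc, 0.0))
    return outputs

print(dense_relu([0.5, -1.25], [[0.1, 0.2], [0.3, -0.4]], [0.0, 0.05]))
```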
  • Item
    Study of the posit number system: a practical approach
    (2019) Murillo Montero, Raúl; Barrio García, Alberto Antonio del; Botella Juan, Guillermo
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) has been for decades the standard for floating-point arithmetic and is implemented in the vast majority of modern computer systems. Recently, a new number representation format called posit (Type III unum), introduced by John L. Gustafson, who claims this new format can provide higher accuracy using an equal or smaller number of bits and simpler hardware than the current standard, has been proposed as an alternative to the now omnipresent IEEE 754 arithmetic. In this Bachelor's dissertation, the novel posit number format and its characteristics and properties, as presented in the literature, are analyzed and compared with the standard for floating-point numbers (floats). Based on the literature assertions, we focus on determining whether posits would be a good "drop-in replacement" for floats. With the help of Wolfram Mathematica and Python, different environments are created to compare the performance of the IEEE 754 floating-point standard with Type III unums: posits. In order to take a more practical approach, first, we propose different numerical problems to compare the accuracy of both formats, including algebraic problems and numerical methods. Then, we focus on the possible use of posits in Deep Learning problems, such as training artificial Neural Networks or performing low-precision inference on Convolutional Neural Networks. To conclude this work, we propose a low-level design for a posit arithmetic multiplier using the FloPoCo tool to generate synthesizable VHDL code.
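For reference, a posit word decodes directly from its fields: a sign bit, a variable-length regime run that scales the value by powers of 2^(2^es), then up to es exponent bits and a fraction with a hidden leading one. Below is a minimal, unoptimized Python decoder written from that definition (defaults n = 16, es = 2); it is an illustrative sketch rather than code from the dissertation, and it covers decoding only, not encoding or rounding.

```python
def decode_posit(word: int, n: int = 16, es: int = 2) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float."""
    mask = (1 << n) - 1
    word &= mask
    if word == 0:
        return 0.0
    if word == 1 << (n - 1):
        return float("nan")             # the single Not-a-Real pattern

    sign = word >> (n - 1)
    if sign:                            # negative posits use two's complement
        word = (-word) & mask
    body = word & ((1 << (n - 1)) - 1)  # the n-1 bits after the sign

    # Regime: a run of identical bits ended by the opposite bit or the word's end.
    first = (body >> (n - 2)) & 1
    run = 1
    while run < n - 1 and ((body >> (n - 2 - run)) & 1) == first:
        run += 1
    k = run - 1 if first else -run      # regime scales the value by (2**(2**es))**k

    remaining = max(n - 2 - run, 0)     # bits left after sign, regime and terminator
    rest = body & ((1 << remaining) - 1)
    exp_bits = min(es, remaining)       # exponent field, zero-padded if cut short
    exponent = (rest >> (remaining - exp_bits)) << (es - exp_bits)
    frac_bits = remaining - exp_bits
    fraction = rest & ((1 << frac_bits) - 1)

    value = (1 + fraction / (1 << frac_bits)) * 2.0 ** (k * (1 << es) + exponent)
    return -value if sign else value

# Spot checks against hand-decoded posit16 (es = 2) patterns:
print(decode_posit(0x4000))   # 1.0
print(decode_posit(0x5000))   # 4.0
print(decode_posit(0x0001))   # minpos = 2**-56
```

The variable-length regime is what gives posits their tapered accuracy: values near 1 keep many fraction bits, while very large or very small magnitudes trade fraction bits for dynamic range.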
  • Item
    Project number: 201
    SUPERSONIC-V: deSarrollo de entornos virtUales Para dEspliegue de haRdware baSadO eN rIsC-V
    (2023) del Barrio García, Alberto Antonio; Botella Juan, Guillermo; Piñuel Moreno, Luis; Roa Romero, Carlos; Murillo Montero, Raúl; Mallasén Quintana, David
Traditionally, teaching in the area of Computer Architecture and Technology throughout the degree focuses on explaining the concepts involved in building a processor. However, laboratory assignments generally do not deal with the implementation of a real processor. Since 2010, the open-source RISC-V ISA has been available, which allows adding instructions and modifying the cores developed from it. An example of this characteristic is the 89 RISC-V cores available to the scientific community. However, working with the tools that make it possible to modify the ISA and simulate programs generally requires a large time investment, so students do not spend that time applying architectural concepts in a practical way, but rather lose it installing RISC-V toolchains, simulators, and so on. In this project we therefore propose the development of virtual environments containing the tools needed to work with the RISC-V ISA, so that students only have to focus on the lab assignments themselves. As a use case, we present a virtual machine and a Docker container with everything needed to work with the CVA6 core.
  • Item
    Posit Arithmetic Units for Deep Neural Networks
    (2021) Murillo Montero, Raúl; Del Barrio García, Alberto Antonio; Botella Juan, Guillermo
Posit™ arithmetic is a recent alternative format to the IEEE 754 standard for floating-point numbers that claims to provide compelling advantages over floats, including higher accuracy, larger dynamic range or bitwise compatibility across systems. In particular, this format is a suitable candidate to replace floating-point numbers in Deep Neural Networks (DNNs), an area of growing interest with a large computational cost. This work presents parameterized designs for multiple posit functional units, including addition, multiplication and the multiply-accumulate operation, and integrates them as templates in the FloPoCo framework. Synthesis results show that the proposed arithmetic units significantly reduce the hardware requirements when compared with previous implementations. Finally, this work proposes the use of posit arithmetic for performing both DNN inference and training. Experiments on different datasets, including CIFAR-10, reveal that 16-bit posits can safely replace 32-bit floats for training, and that low-precision 8-bit posits can be used for DNN inference with negligible accuracy drop.
  • Item
    Project number: 315
    Enseñanza de coMputación cuántica Práctica pAra esTudiantes de Informática: Arquitectura y programación (EMPATIA)
    (2021) Botella Juan, Guillermo; Del Barrio García, Alberto Antonio; Carrascal De Las Heras, Ginés; García Sánchez, Carlos; Murillo Montero, Raúl; García Moreno, Daniel; Fahmy Amin, Hesham Ahmed; Mas Aguilar, Juan; Roa Romero, Carlos; Sierra López, Angel
A quantum simulation and computing platform based on low-cost hardware and container technology, with the possibility of running in the cloud. The project also provides the teaching methodology for the first practical Quantum Computing course at UCM, "Arquitectura y Programación de Computadores Cuánticos", offered by the Facultad de Informática.
  • Item
    PERCIVAL: Open-source posit RISC-V core with quire capability
(IEEE Transactions on Emerging Topics in Computing, 2022) Mallasén Quintana, David; Murillo Montero, Raúl; Del Barrio García, Alberto Antonio; Botella Juan, Guillermo; Prieto Matías, Manuel
    The posit representation for real numbers is an alternative to the ubiquitous IEEE 754 floating-point standard. In this work, we present PERCIVAL, an application-level posit RISC-V core based on CVA6 that can execute all posit instructions, including the quire fused operations. This solves the obstacle encountered by previous works, which only included partial posit support or which had to emulate posits in software. In addition, Xposit, a RISC-V extension for posit instructions is incorporated into LLVM. Therefore, PERCIVAL is the first work that integrates the complete posit instruction set in hardware. These elements allow for the native execution of posit instructions as well as the standard floating-point ones, further permitting the comparison of these representations. FPGA and ASIC synthesis show the hardware cost of implementing 32-bit posits and highlight the significant overhead of including a quire accumulator. However, results show that the quire enables a more accurate execution of dot products. In general matrix multiplications, the accuracy error is reduced up to 4 orders of magnitude. Furthermore, performance comparisons show that these accuracy improvements do not hinder their execution, as posits run as fast as single-precision floats and exhibit better timing than double-precision floats, thus potentially providing an alternative representation.
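The accuracy benefit of the quire comes from accumulating every product exactly and rounding only once, at the end of the dot product. The sketch below mimics that behaviour in software, with exact rationals standing in for the wide fixed-point quire register; it is a conceptual analogue, not the PERCIVAL hardware, and the vector length and value range are arbitrary.

```python
from fractions import Fraction
import random

def naive_dot(xs, ys):
    """Round after every operation, as a plain floating-point loop does."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def quire_style_dot(xs, ys):
    """Accumulate all products exactly and round once at the end,
    mirroring what a quire register does for posit dot products."""
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)
    return float(acc)

random.seed(0)
xs = [random.uniform(-1e8, 1e8) for _ in range(1000)]
ys = [random.uniform(-1e8, 1e8) for _ in range(1000)]

# The two results typically differ in their trailing digits; with longer or
# more ill-conditioned vectors the gap between them grows.
print(naive_dot(xs, ys))
print(quire_style_dot(xs, ys))
```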