Instalación Interactiva de Música Generativa
Publication date
2024
Abstract
In this project we developed an interactive installation that detects the position and gestures of a user's hands using a camera and a trained neural network. With these gestures, the user interacts with ambient music that evolves live as it plays. We achieved this with a pre-trained inference model that we customized for our purpose.
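The abstract does not name the vision framework. As a purely illustrative sketch of this detection stage, the snippet below assumes MediaPipe Hands (a widely used pre-trained hand-landmark model) with OpenCV for camera capture; the project's actual model and customization may differ.

```python
import cv2
import mediapipe as mp

# Illustrative only: MediaPipe Hands is an assumption, not necessarily
# the model the project customized.
mp_hands = mp.solutions.hands

with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                wrist = hand.landmark[0]  # normalized x, y in [0, 1]
                print(f"wrist at ({wrist.x:.2f}, {wrist.y:.2f})")
    cap.release()
```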
The music follows the philosophy of generative music, and the sound is created through granular synthesis. This is made possible by several programs connected to one another, so that the data obtained from the gestures are translated into variations of the music's parameters in real time.
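The abstract does not specify how these programs are connected. A common pattern for this kind of pipeline is sending OSC messages over UDP from the vision script to an audio environment such as Pure Data, Max/MSP, or SuperCollider; the sketch below (with hypothetical addresses and port) illustrates how gesture data could be mapped to synthesis parameters.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical host, port, and OSC address names; the actual routing used
# in the project is not given in the abstract.
client = SimpleUDPClient("127.0.0.1", 9000)

def send_gesture_parameters(x: float, y: float, pinch: float) -> None:
    """Map normalized hand coordinates to music parameters in real time."""
    client.send_message("/granular/position", x)  # grain read position
    client.send_message("/granular/density", y)   # grains per second (normalized)
    client.send_message("/fx/reverb_mix", pinch)  # wet/dry amount
```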
In this way, sound textures are generated from pre-saved audio files and presets, and, through the messages received from the neural network, the user can interact with this generated sound in multiple ways. Reverb and equalization effects are then applied to the sound, giving it greater richness. The result: we have laid the groundwork for an evolving soundscape in which users are involved, creating their own sound by making gestures and movements in the air.
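The granular engine itself was presumably built in a dedicated audio environment, but the underlying principle (short, windowed grains taken from a pre-saved audio file and scattered over time) can be sketched in a few lines of Python. Everything below is a minimal illustration of the technique, not the project's implementation.

```python
import numpy as np
import soundfile as sf

def granular_texture(path: str, grain_ms: float = 80.0,
                     density: float = 20.0, seconds: float = 5.0,
                     seed: int = 0) -> np.ndarray:
    """Very simplified granular synthesis: overlap random grains from a file."""
    rng = np.random.default_rng(seed)
    source, sr = sf.read(path)
    if source.ndim > 1:             # mix to mono for simplicity
        source = source.mean(axis=1)
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(int(sr * seconds))
    window = np.hanning(grain_len)  # fade each grain in and out
    for _ in range(int(density * seconds)):
        src_start = rng.integers(0, len(source) - grain_len)
        dst_start = rng.integers(0, len(out) - grain_len)
        out[dst_start:dst_start + grain_len] += (
            source[src_start:src_start + grain_len] * window)
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```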
This project aims to bring inexperienced users closer to the world of ambient music and let them create, using Artificial Intelligence not as a substitute but as a creative tool. The work spans the technical side of programming and training a model, as well as the experimental and creative side of computer music. It also explores the potential of combining all these tools and how compact and portable the result can be.
Description
Bachelor's thesis (Trabajo de Fin de Grado) in Computer Science Engineering, Facultad de Informática, UCM, Departamento de Arquitectura de Computadores y Automática and Departamento de Sistemas Informáticos y Computación, academic year 2023/2024.