
Neural policy style transfer

dc.contributor.author: Fernández Fernández, Raúl
dc.contributor.author: Gonzalez Victores, Juan
dc.contributor.author: Gago Muñoz, Jennifer
dc.contributor.author: Estévez Fernández, David
dc.date.accessioned: 2024-01-30T09:37:12Z
dc.date.available: 2024-01-30T09:37:12Z
dc.date.issued: 2022-03-01
dc.description: The preprint version of the article submitted to Elsevier is deposited.
dc.description.abstract: Style Transfer has been proposed in a number of fields: fine arts, natural language processing, and fixed trajectories. We scale this concept up to control policies within a Deep Reinforcement Learning infrastructure. Each network is trained to maximize the expected reward, which typically encodes the goal of an action and can be described as the content. The expressive power of deep neural networks enables encoding a secondary task, which can be described as the style. The Neural Policy Style Transfer (NPST) algorithm is proposed to transfer the style of one policy to another while maintaining the content of the latter. Different policies are defined via Deep Q-Network architectures. These models are trained from demonstrations through Inverse Reinforcement Learning. Two different sets of user demonstrations are performed, one for content and the other for style. Different styles are encoded as defined by user demonstrations. The generated policy is the result of feeding a content policy and a style policy to the NPST algorithm. Experiments are performed in a catch-ball game inspired by the classical Deep Reinforcement Learning Atari games, and in a real-world painting scenario with a full-sized humanoid robot, based on previous works of the authors. The implementation of three different Q-Network architectures (Shallow, Deep, and Deep Recurrent Q-Network) to encode the policies within the NPST framework is proposed, and the results obtained in the experiments with each of these architectures are compared.
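As a rough illustration of the idea in the abstract, the sketch below blends the Q-values of a content policy and a style policy to select an action. The linear "Q-networks", the weighted-sum combination, and every name here are simplifying assumptions for illustration only, not the paper's actual NPST algorithm or its trained Deep Q-Networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained Q-networks: each maps a 4-dim state
# to Q-values over 3 discrete actions. In the paper these would be Deep
# Q-Networks trained from user demonstrations via Inverse RL.
W_content = rng.normal(size=(4, 3))
W_style = rng.normal(size=(4, 3))

def q_values(W, state):
    # Linear "network": Q(s, a) for all actions at once.
    return state @ W

def blended_action(state, alpha=0.7, beta=0.3):
    """Pick the action maximizing a weighted blend of the content
    policy's and the style policy's Q-values (the weighting scheme
    is an assumption, chosen only to make the idea concrete)."""
    blended = alpha * q_values(W_content, state) + beta * q_values(W_style, state)
    return int(np.argmax(blended))

state = rng.normal(size=4)
action = blended_action(state)
```

With alpha dominating, the content objective (the task goal) drives the choice, while the style term perturbs it toward the demonstrated style.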
dc.description.department: Sección Deptal. de Arquitectura de Computadores y Automática (Físicas)
dc.description.faculty: Fac. de Ciencias Físicas
dc.description.refereed: TRUE
dc.description.sponsorship: Comunidad Autónoma de Madrid
dc.description.sponsorship: Unión Europea
dc.description.sponsorship: RoboCity2030-DIH-CM Madrid Robotics Digital Innovation Hub
dc.description.status: pub
dc.identifier.citation: Fernandez-Fernandez, Raul, et al. «Neural Policy Style Transfer». Cognitive Systems Research, vol. 72, March 2022, pp. 23-32. DOI.org (Crossref), https://doi.org/10.1016/j.cogsys.2021.11.003.
dc.identifier.doi: 10.1016/j.cogsys.2021.11.003
dc.identifier.issn: 1389-0417
dc.identifier.officialurl: https://doi.org/10.1016/j.cogsys.2021.11.003
dc.identifier.uri: https://hdl.handle.net/20.500.14352/96361
dc.journal.title: Cognitive Systems Research
dc.language.iso: eng
dc.page.final: 32
dc.page.initial: 23
dc.publisher: Elsevier
dc.relation.projectID: info:eu-repo/grantAgreement/S2018/NMT-4331
dc.rights.accessRights: open access
dc.subject.cdu: 004.8
dc.subject.keyword: Style Transfer
dc.subject.keyword: Deep reinforcement learning
dc.subject.keyword: Robotics
dc.subject.keyword: Deep learning
dc.subject.ucm: Inteligencia artificial (Informática)
dc.subject.unesco: 1203.04 Inteligencia Artificial
dc.title: Neural policy style transfer
dc.type: journal article
dc.type.hasVersion: VoR
dc.volume.number: 72
dspace.entity.type: Publication

Original bundle

Name: fernandezfernandez2022neural-preprint.pdf
Size: 1.06 MB
Format: Adobe Portable Document Format