 

Elements for a general memory structure: properties of recurrent neural networks used to form situation models

dc.contributor.author: Makarov Slizneva, Valeriy
dc.contributor.author: Song, Yongli
dc.contributor.author: Velarde, Manuel G.
dc.contributor.author: Hübner, David
dc.contributor.author: Cruse, Holk
dc.date.accessioned: 2023-06-20T09:39:38Z
dc.date.available: 2023-06-20T09:39:38Z
dc.date.issued: 2008-05
dc.description.abstract: We study how individual memory items are stored, assuming that situations given in the environment can be represented in the form of synaptic-like couplings in recurrent neural networks. Previous numerical investigations have shown that specific architectures based on suppression or max units can successfully learn static or dynamic stimuli (situations). Here we provide a theoretical basis concerning the convergence of the learning process and the network response to a novel stimulus. We show that, besides learning "simple" static situations, an nD network can learn and replicate a sequence of up to n different vectors or frames. We find limits on the learning rate and show the coupling matrices that develop during training in different cases, including the extension of the network to the case of nonlinear interunit coupling. Furthermore, we show that a specific coupling matrix provides low-pass-filter properties to the units, thus connecting networks constructed from static summation units with continuous-time networks. We also show under which conditions such networks can be used to perform arithmetic calculations by means of pattern completion.
dc.description.department: Depto. de Análisis Matemático y Matemática Aplicada
dc.description.faculty: Fac. de Ciencias Matemáticas
dc.description.refereed: TRUE
dc.description.status: pub
dc.eprint.id: https://eprints.ucm.es/id/eprint/16654
dc.identifier.doi: 10.1007/s00422-008-0221-5
dc.identifier.issn: 0340-1200
dc.identifier.officialurl: http://www.springerlink.com/content/h3758550745u3617/fulltext.pdf
dc.identifier.relatedurl: http://www.springerlink.com/
dc.identifier.uri: https://hdl.handle.net/20.500.14352/50132
dc.issue.number: 5
dc.journal.title: Biological Cybernetics
dc.language.iso: eng
dc.page.final: 395
dc.page.initial: 371
dc.publisher: Springer Verlag
dc.rights.accessRights: restricted access
dc.subject.cdu: 612.8
dc.subject.keyword: Recurrent neural network
dc.subject.keyword: Situation model
dc.subject.keyword: Memory-Learning
dc.subject.ucm: Neurociencias (Biológicas)
dc.subject.unesco: 2490 Neurociencias
dc.title: Elements for a general memory structure: properties of recurrent neural networks used to form situation models
dc.type: journal article
dc.volume.number: 98
dcterms.references:
Beer RD (2006) Parameter space structure of continuous-time recurrent neural networks. Neural Comput 18:3009–3051
Cruse H, Hübner D (2008) Selforganizing memory: active learning of landmarks used for navigation. Biol Cybern (submitted)
Cruse H, Sievers K (2008) A general network structure for learning Pavlovian paradigms (in preparation)
Elman JL (1990) Finding structure in time. Cogn Sci 14:179–211
Feynman R (2001) In: Hawking SW (ed) The universe in a nutshell. Bantam Press, New York
Fuster JM (1995) Memory in the cerebral cortex: an empirical approach to neural networks in the human and nonhuman primate. MIT Press, Cambridge
Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci 79:2554–2558
Hopfield JJ (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci 81:3088–3092
Jaeger H, Haas H (2004) Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304:78–80
Kühn S, Beyn WJ, Cruse H (2007) Modelling memory functions with recurrent neural networks consisting of input compensation units: I. Static situations. Biol Cybern 96:455–470
Kühn S, Cruse H (2007) Modelling memory functions with recurrent neural networks consisting of input compensation units: II. Dynamic situations. Biol Cybern 96:471–486
Kindermann T, Cruse H (2002) MMC – a new numerical approach to the kinematics of complex manipulators. Mech Mach Theory 37:375–394
Palm G, Sommer FT (1996) Associative data storage and retrieval in neural networks. In: Domany E, van Hemmen JL, Schulten K (eds) Models of neural networks III. Association, generalization, and representation. Springer, New York, pp 79–118
Pasemann F (2002) Complex dynamics and the structure of small neural networks. Netw Comput Neural Syst 13:195–216
Steinkühler U, Cruse H (1998) A holistic model for an internal representation to control the movement of a manipulator with redundant degrees of freedom. Biol Cybern 79:457–466
Strang G (2003) Introduction to linear algebra. Wellesley-Cambridge Press, Cambridge
Tani J (2003) Learning to generate articulated behavior through the bottom-up and the top-down interaction processes. Neural Netw 16:11–23
Wessnitzer J, Webb B (2006) Multimodal sensory integration in insects—towards insect brain control architectures. Bioinspir Biomim 1:63–75
dspace.entity.type: Publication
relation.isAuthorOfPublication: a5728eb3-1e14-4d59-9d6f-d7aa78f88594
relation.isAuthorOfPublication.latestForDiscovery: a5728eb3-1e14-4d59-9d6f-d7aa78f88594
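
The abstract's claim that an nD network can learn and replay a sequence of up to n vectors admits a compact linear-algebra illustration. The sketch below is a hedged stand-in, not the paper's suppression/max-unit architecture or its learning rule: it directly solves for a coupling matrix W that maps each stored frame to its successor, so that iterating x(t+1) = W x(t) cycles through the sequence. The names frames, shifted, and W are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumption: a plain linear recurrent network, not the
# paper's input-compensation units) of sequence replay in an n-D network.
# We pick n linearly independent frames and solve W S = S', where S holds
# the frames as columns and S' is S cyclically shifted by one column.
import numpy as np

rng = np.random.default_rng(0)
n = 5
frames = rng.standard_normal((n, n))   # n random n-D frames as columns (almost surely independent)
shifted = np.roll(frames, -1, axis=1)  # column k of `shifted` is frame k+1 (mod n)
W = shifted @ np.linalg.inv(frames)    # coupling matrix satisfying W @ frames == shifted

x = frames[:, 0].copy()                # start the network state at frame 0
for t in range(2 * n):                 # iterate the recurrent update x <- W x
    assert np.allclose(x, frames[:, t % n])  # the state cycles through all n frames
    x = W @ x
print(f"replayed the {n}-frame sequence twice")
```

Because the frames are linearly independent, the direct solve W = S'S^-1 exists and the replay is exact; the paper's contribution, per the abstract, concerns how such couplings emerge from a convergent learning process and what limits the learning rate, which this one-shot construction sidesteps.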

Download

Original bundle

Name: Makarov11.pdf
Size: 1.02 MB
Format: Adobe Portable Document Format
