Caballero Roldán, Rafael

Universidad Complutense de Madrid
Faculty / Institute
Sistemas Informáticos y Computación
Lenguajes y Sistemas Informáticos

Publications (showing 1 - 8 of 8)
  • Publication
    Mejora del aprendizaje de SQL con realimentación semántica
    (2018-06-13) Sáenz Pérez, Fernando; Caballero Roldán, Rafael; García Ruiz, Yolanda; Garméndia Salvador, Luis
  • Publication
    Two type extensions for the constraint modelling language MiniZinc
    (Elsevier, 2015-11-01) Caballero Roldán, Rafael; Stuckey, Peter J.; Tenorio Fornés, Antonio
    In this paper we present two type extensions for the modelling language MiniZinc that allow the representation of some problems in a more natural way. The first proposal, called MiniZinc⋆, extends existing types with additional values. The user can specify both the extension of a predefined type with new values and the behavior of the operations with respect to the new values. We illustrate the usage of MiniZinc⋆ to model SQL-like problems with integer variables extended with NULL values. The second extension, MiniZinc+, introduces union types in the language. This allows defining recursive types such as trees, which are very useful for modelling problems that involve complex structures. A new case statement is introduced to select the different components of union-type terms. The paper shows how a model defined using these extensions can be transformed into an equivalent MiniZinc model.
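The first extension's SQL-like behavior can be sketched in ordinary Python (an illustrative analogy only, not the paper's MiniZinc syntax; function names are ours): a predefined type is extended with a NULL element, and each operation is given an explicit behavior for it.

```python
from typing import Optional

def null_add(x: Optional[int], y: Optional[int]) -> Optional[int]:
    """SQL-like addition: any NULL (None) operand makes the result NULL."""
    if x is None or y is None:
        return None
    return x + y

def null_eq(x: Optional[int], y: Optional[int]) -> Optional[bool]:
    """Three-valued equality: comparison with NULL is 'unknown' (None)."""
    if x is None or y is None:
        return None
    return x == y
```

In the paper the analogous behavior is declared for MiniZinc constraints over integer variables extended with NULL, rather than implemented in a host language.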
  • Publication
    Short term cloud nowcasting for a solar power plant based on irradiance historical data
    (Universidad Nacional de La Plata, 2018-12) Caballero Roldán, Rafael; Zarzalejo Tirado, Luis Fernando; Otero Martín, Álvaro; Piñuel Moreno, Luis; Wilbert, Stefan
    This work considers the problem of forecasting the normal solar irradiance with high spatial and temporal resolution (5 minutes). The forecasting is based on a dataset registered during one year by the high-resolution radiometric network at an operational solar power plant at Almería, Spain. In particular, we show a technique for forecasting the irradiance in the next few minutes from the irradiance values obtained during the previous hour. Our proposal employs a type of recurrent neural network known as LSTM, which can learn complex patterns and has proven its usability for forecasting time series. The results show a reasonable improvement with respect to other prediction methods typically employed in time series studies.
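The data preparation such a forecaster needs can be sketched as follows (a hedged sketch; the function name and exact window sizes are our assumptions, not the paper's code). At 5-minute resolution, the previous hour yields 12 input samples per training example:

```python
def make_windows(series, history=12, horizon=1):
    """Turn a 1-D irradiance series (5-minute resolution) into
    (input, target) pairs: the previous hour (12 samples) is the input
    and the value `horizon` steps ahead is the prediction target."""
    X, y = [], []
    for t in range(history, len(series) - horizon + 1):
        X.append(series[t - history:t])
        y.append(series[t + horizon - 1])
    return X, y
```

Pairs produced this way would then be fed to an LSTM (e.g. via a deep-learning framework); the windowing itself is framework-independent.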
  • Publication
    Digital Activism Masked. The Fridays for Future movement and the "Global day of climate action": testing social function and framing typologies of claims on Twitter
    (2022) Fernández-Zubieta, Ana; Guevara Gil, Juan Antonio; Caballero Roldán, Rafael; Robles Morales, José Manuel
    This article analyses the Fridays for Future (FFF) movement and its online mobilization around the Global Day of Climate Action on September 25th, 2020. Due to the Covid-19 pandemic, this event is a unique opportunity to study digital activism, as marches were considered inappropriate. Using the Twitter API with the keywords "#climateStrike" and "#FridaysForFuture", we collected 111,844 unique tweets and retweets from 47,892 unique users. We use two typologies based on the social media activism and framing literature to understand the main function of tweets (information, opinion, mobilization, and blame) and frames (diagnosis, prognosis, motivational). We also analyze their relationship and test their automated-classification potential. To do so, we manually coded a randomly selected sample of 950 tweets that are used as input for the automated-classification process (an SVM algorithm with balancing classification techniques). We find that the Covid-19 pandemic appears not to have increased the mobilization function of tweets, as the frequencies of mobilization tweets were low. We also find a balanced diversity of framing tasks, with an important number of tweets that envisaged solutions based on legislation and policy changes. We find that the two typologies are not independent. The automated data classification model performed well, especially across the social function typology and the "other" category. This indicates that these tools could help researchers working with social media data to process information across categories that are currently processed mainly by hand, enlarging their final sample sizes.
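When classes such as "mobilization" are rare, a common balancing step is to reweight them inversely to their frequency. The sketch below shows the heuristic scikit-learn uses for class_weight='balanced' (an illustration under our assumptions; the paper's exact balancing techniques are not detailed in the abstract):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so rare classes (e.g. 'mobilization') count more in training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

Such weights can be passed to an SVM trainer so that misclassifying a minority-class tweet is penalized more heavily.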
  • Publication
    Implementación de un entorno de aprendizaje colaborativo de lenguajes de programación mediante traducción
    (2016-01) Caballero Roldán, Rafael; Martín Martín, Enrique; Montenegro Montes, Manuel; Riesco Rodríguez, Adrián; Tamarit Muñoz, Salvador
    Report of project PIMCD 32/2015, where we present DuoCode, a collaborative tool for learning programming languages through translation.
  • Publication
    A unified framework for declarative debugging and testing
    (Elsevier, 2020-09-22) Tamarit, Salvador; Caballero Roldán, Rafael; Riesco Rodríguez, Adrián; Martín Martín, Enrique
    Context: Debugging is the most challenging and time-consuming task in software development. However, it is not properly integrated in the software development cycle, because the result of so much effort is not available in further iterations of the cycle, and the debugging process itself does not benefit from the outcome of other phases such as testing. Objective: We propose to integrate debugging and testing within a single unified framework where each phase generates useful information for the other and the outcomes of each phase are reused. Method: We consider a declarative debugging setting that employs tests to automatically entail the validity of some subcomputations, thus decreasing the time and effort needed to find a bug. Additionally, the debugger stores as new tests the information collected from the user during the debugging phase. This information becomes part of the program test suite, and can be used in future debugging sessions and also as regression tests. Results: We define a general framework where declarative debugging establishes a bidirectional collaboration with testing. The new setting preserves the properties of the underlying declarative debugging framework (weak completeness and soundness) while generating test cases that can be used later in other debugging sessions or even in other cycles of the software development. The proposed framework is general enough to be instantiated to very different programming languages: Erlang (functional), Java (imperative, object-oriented), and SQL (data query); and the experimental results obtained for Erlang programs validate the effectiveness of the framework. Conclusion: We propose a general unified framework for debugging and testing that simplifies each phase and maximizes the reusability of the outcomes in the different phases of the software development cycle, therefore reducing the overall effort.
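The core idea can be expressed as a traversal of the computation tree in which stored tests silence questions and the user's answers become new tests (a minimal sketch with data structures and names of our own invention, not the paper's formalism):

```python
class Node:
    """A node of the computation tree: a call, its observed result,
    and the subcomputations it depends on."""
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

def debug(node, oracle, tests):
    """Declarative debugging with test reuse. A node covered by a stored
    passing test is assumed valid without asking; otherwise the oracle
    (the user) is consulted and a 'valid' answer is stored as a new test.
    Returns the buggy node: an invalid node whose children are all valid."""
    if (node.call, node.result) in tests:
        return None                          # validity entailed by a test
    if oracle(node):
        tests.add((node.call, node.result))  # store the answer as a new test
        return None
    for child in node.children:
        buggy = debug(child, oracle, tests)
        if buggy is not None:
            return buggy
    return node                              # invalid, children valid: bug here
```

Each stored pair shrinks future sessions: a subcomputation already certified valid is never asked about again, which is the bidirectional collaboration the abstract describes.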
  • Publication
    A core Erlang semantics for declarative debugging
    (Elsevier, 2019-10-01) Martín Martín, Enrique; Tamarit, Salvador; Riesco Rodríguez, Adrián; Caballero Roldán, Rafael
    One of the main advantages of declarative languages is their clearly established formal semantics, which allows programmers to reason about the properties of programs and to establish the correctness of tools. In particular, declarative debugging is a technique that analyses the proof trees of computations to locate bugs in programs. However, in the case of commercial declarative languages such as the functional language Erlang, sometimes the semantics is only informally defined, and this precludes such reasoning. Moreover, defining a semantics for these languages is far from trivial because they include complex features needed in real applications, such as concurrency. In this paper we define a semantics for Core Erlang, the intermediate language underlying Erlang programs. We focus on the problem of concurrency and show how a medium-sized-step calculus, which avoids the details of small-step semantics but still captures the most common program errors, can be used to define an algorithmic debugger that is sound and complete.
  • Publication
    Predicting students' knowledge after playing a serious game based on learning analytics data: A case study
    (Wiley, 2019-12) Alonso Fernández, Cristina; Martínez Ortiz, Iván; Caballero Roldán, Rafael; Freire Morán, Manuel; Fernández Manjón, Baltasar
    Serious games have proven to be a powerful tool in education to engage, motivate, and help students learn. However, the change in student knowledge after playing games is usually measured with traditional (paper) pre- and post-questionnaires. We propose a combination of game learning analytics and data mining techniques to predict knowledge change based on in-game student interactions. We have tested this approach in a case study for which we conducted pre- and post-experiments with 227 students playing a previously validated serious game on first aid techniques. We collected student interaction data while students played, using a game learning analytics infrastructure and the standard data format Experience API for Serious Games. After data collection, we developed and tested prediction models to determine whether knowledge, given as posttest results, can be accurately predicted. Additionally, we compared models both with and without pretest information to determine the importance of previous knowledge when predicting postgame knowledge. The high accuracy of the obtained prediction models suggests that serious games can be used not only to teach but also to measure knowledge acquisition after playing. This will simplify the application of serious games in educational settings, and especially in the classroom, easing teachers' evaluation tasks.
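The prediction task has the shape sketched below: interaction features gathered during play are matched against students whose posttest outcome is known. This 1-nearest-neighbour fragment is purely illustrative (the feature names, distance, and model are our assumptions; the paper evaluates its own data mining models):

```python
def predict_posttest(train, query):
    """1-NN sketch: `train` is a list of (interaction_features,
    posttest_score) pairs, e.g. features = (errors_made, time_played).
    The query student's features are matched to the closest
    training student and that student's posttest score is returned."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, score = min(train, key=lambda pair: sq_dist(pair[0], query))
    return score
```

Adding the pretest score as one more feature, or dropping it, reproduces the with/without-pretest comparison described in the abstract.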