Saturday, July 13, 2019

Welcome


Welcome to my blog.

My name is Juan Jesús Roldán Gómez. I am a postdoctoral researcher at the Centre for Automation and Robotics (UPM-CSIC). I received my PhD in Automation and Robotics from the Technical University of Madrid (2014-2018). Previously, I completed a BSc+MSc in Industrial Engineering with a specialization in Automation and Electronics (2006-2012) and an MSc in Automation and Robotics (2013-2014) at the same university. I have taken part in robotics projects with SENER and Airbus Defence and Space, and I have completed a research stay at the University of Luxembourg. Over the last five years, I have researched security systems for aerial robots, surveillance with teams of ground robots, and the application of robots to agriculture. Currently, I am interested in adaptive and immersive multi-robot interfaces and in robot swarms for monitoring cities, as well as in data mining and machine learning techniques.

If you are interested in my work, please send me an email:


Until next time!


Friday, July 12, 2019

Bringing Adaptive and Immersive Interfaces to Real-World Multi-Robot Scenarios: Application to Surveillance and Intervention in Infrastructures



Multi-robot missions pose a series of challenges to single human operators, such as managing high workloads and maintaining an adequate level of situational awareness. Conventional interfaces are not prepared to face these challenges, but new concepts have arisen to cover this need, such as adaptive and immersive interfaces. This paper reports the design and development of an adaptive and immersive interface, as well as a complete set of experiments carried out to compare it with a conventional one. The interface under study was developed using virtual reality to bring operators into the scenario and allow intuitive commanding of the robots. Additionally, it is able to recognize the state of the mission and show hints to the operators. The experiments were performed in both outdoor and indoor scenarios, recreating an intervention after an accident in a critical infrastructure. The results show the potential of adaptive and immersive interfaces to improve the workload, situational awareness and performance of operators in multi-robot missions.
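As a rough illustration of how such state-dependent hints might work (the paper's implementation is not reproduced here, and all names below are hypothetical, including the workload-based suppression), consider a minimal Python sketch:

    from enum import Enum, auto
    from typing import Optional

    class MissionState(Enum):
        TRANSIT = auto()        # robots moving towards the infrastructure
        INSPECTION = auto()     # robots gathering data at the scene
        INTERVENTION = auto()   # operator acting on a detected problem

    # One hint per recognized mission state (contents are invented).
    HINTS = {
        MissionState.TRANSIT: "Check battery levels and waypoint progress.",
        MissionState.INSPECTION: "Review the camera feeds for anomalies.",
        MissionState.INTERVENTION: "Focus on the active robot; the rest hold position.",
    }

    def suggest_hint(state: MissionState, operator_overloaded: bool) -> Optional[str]:
        # Suppress hints when the operator is already overloaded, so the
        # interface does not add to the workload (an assumed design choice).
        return None if operator_overloaded else HINTS[state]

    print(suggest_hint(MissionState.INSPECTION, operator_overloaded=False))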


I'm very proud to see the result of many days of hard work during the past year, and I want to thank my friends Elena and Pablo for their help in making this a reality.

J.J. Roldán, E. Peña-Tapia, P. Garcia-Aunon, J. del Cerro and A. Barrientos. Bringing adaptive & immersive interfaces to real-world multi-robot scenarios: application to surveillance and intervention in infrastructures. IEEE Access, 7 (1), 86319-86335, 2019. Impact Factor (JCR, 2018): 4.098, Q1. Article

Tuesday, July 9, 2019

Press Start to Play: Classifying Multi-Robot Operators and Predicting Their Strategies through a Videogame

The paper "Press Start to Play: Classifying Multi-Robot Operators and Predicting Their Strategies through a Videogame" has been published in Robotics, an open-access journal from MDPI that is, obviously, focused on robotics.


One of the open challenges in multi-robot missions is managing operator workload and situational awareness. Currently, operators are trained to use interfaces, but in the near future this can be turned inside out: the interfaces will adapt to operators so as to facilitate their tasks. To this end, the interfaces should manage models of the operators and adapt the information to their states and preferences. This work proposes a videogame-based approach to classifying operator behavior and predicting operator actions in order to improve teleoperated multi-robot missions. First, groups of operators are generated according to their strategies by means of clustering algorithms. Second, the operators' strategies are predicted, taking their models into account. Multiple information sources and modeling methods are used to determine the approach that maximizes the mission goal. The results demonstrate that predictions based on previous data from single operators increase the probability of success in teleoperated multi-robot missions by 19%, whereas predictions based on operator clusters increase this probability by 28%.
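To make the two-stage idea (clustering, then prediction) more concrete, here is a minimal sketch using scikit-learn's KMeans; the strategy features and data are invented for illustration, and the paper may rely on different algorithms and inputs:

    import numpy as np
    from sklearn.cluster import KMeans

    # Each row describes one operator's strategy, e.g.
    # [commands per minute, fraction of time tele-operating, robots commanded].
    operators = np.array([
        [12.0, 0.8, 2.0], [11.5, 0.7, 2.0],   # "hands-on" operators
        [3.0, 0.2, 4.0], [2.5, 0.1, 4.0],     # "supervisory" operators
    ])

    # Stage 1: group operators by strategy.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(operators)
    print("cluster labels:", kmeans.labels_)

    # Stage 2: assign a new operator to the closest cluster and use the
    # cluster centroid as a simple prediction of how they will behave.
    new_operator = np.array([[10.0, 0.75, 2.0]])
    cluster = kmeans.predict(new_operator)[0]
    print("predicted strategy profile:", kmeans.cluster_centers_[cluster])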


J.J. Roldán, V. Díaz-Maroto, J. Real, P.R. Palafox, J. Valente, M. Garzón and A. Barrientos. Press start to play: classifying multi-robot operators and predicting their strategies through a videogame. Robotics, 8 (3), 53-67, 2019. Impact Factor (Scopus, 2018): 1.53, Q2. Article

Saturday, June 29, 2019

Robust Visual-Aided Autonomous Takeoff, Tracking, and Landing of a Small UAV on a Moving Landing Platform for Life-Long Operation



Robot cooperation is key in Search and Rescue (SaR) tasks. These tasks frequently take place in complex scenarios affected by different types of disasters, so an aerial viewpoint is useful for autonomous navigation or human tele-operation. In such cases, an Unmanned Aerial Vehicle (UAV) in cooperation with an Unmanned Ground Vehicle (UGV) can provide valuable insight into the area. To carry out its work successfully, such a multi-robot system requires the autonomous takeoff, tracking, and landing of the UAV on the moving UGV. Furthermore, it needs to be robust and capable of life-long operation. In this paper, we present an autonomous system that enables a UAV to take off autonomously from a moving landing platform, locate the platform using visual cues, follow it, and robustly land on it. The system relies on a finite state machine, which, together with a novel re-localization module, allows the system to operate robustly for extended periods of time and to recover from potential failed landing maneuvers. Two approaches for tracking and landing are developed, implemented, and tested. The first variant is based on a novel height-adaptive PID controller that uses the current position of the landing platform as its target. The second combines this height-adaptive PID controller with a Kalman filter that predicts the future positions of the platform and provides them as input to the PID controller, which facilitates tracking and, above all, landing. Both the system as a whole and the re-localization module in particular have been tested extensively in a simulated environment (Gazebo). We also present a qualitative evaluation on real hardware, demonstrating that the system can be deployed on real robotic platforms. For the benefit of the community, we make our software open source.
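The controllers themselves are not reproduced here, but the two variants can be sketched as follows: a PID whose output is scaled down with altitude, optionally fed a predicted platform position instead of the measured one. The specific gain schedule and the constant-velocity predictor (standing in for the Kalman filter's propagation step) are assumptions made for illustration:

    class HeightAdaptivePID:
        """PID controller whose output shrinks as the UAV descends, for
        gentler corrections near the platform (one plausible reading of
        "height-adaptive")."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target, current, altitude):
            scale = min(1.0, altitude / 5.0)  # assumed 5 m reference altitude
            error = target - current
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return scale * (self.kp * error + self.ki * self.integral + self.kd * derivative)

    def predict_platform(pos, vel, horizon):
        # Constant-velocity prediction of the platform position `horizon`
        # seconds ahead, standing in for the Kalman filter's state propagation.
        return pos + vel * horizon

    pid = HeightAdaptivePID(kp=0.8, ki=0.05, kd=0.2, dt=0.05)
    target = predict_platform(pos=2.0, vel=0.5, horizon=0.5)  # second variant
    print(pid.step(target, current=1.0, altitude=3.0))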


I would like to thank Pablo R. Palafox, Mario Garzón and João Valente for giving me the opportunity to collaborate with them on this publication.

P.R. Palafox, M. Garzón, J. Valente, J.J. Roldán and A. Barrientos. Robust visual-aided autonomous takeoff, tracking and landing of a small UAV on a moving landing platform for life-long operation. Applied Sciences, 9 (13), 2661, 2019. Impact Factor (2018): 2.217, Q2. Article

Wednesday, June 12, 2019

SUREVEG Project: Developing robots for strip-cropping systems

The SUREVEG project proposes the development and application of new organic cropping systems using strip-cropping and fertility strategies to improve resilience, system sustainability, local nutrient recycling and soil carbon storage. The project has three main goals: 1) Designing and testing strip-cropping systems in vegetable producing countries at different geographical locations in Europe, 2) Developing and testing soil-improvers and fertilizers based on pre-treated organic plant residues, and 3) Developing and testing smart technologies for management of strip-cropping systems.

The Technical University of Madrid and the Centre for Automation and Robotics are involved in the third goal: smart machinery for strip-cropping systems. This work aims to develop a robotic tool for automating field operations in strip-cropping systems, including suitable sensors to collect valuable data and actuators for precise fertilization. Specifically, it comprises four goals: 1) Designing a multi-purpose robotic tool, 2) Developing sensing systems and algorithms, 3) Developing an actuation system, and 4) Implementing motion planning strategies.

Here you can see the first prototype (I have been working on the robot manipulator):


Friday, May 10, 2019

A training system for Industry 4.0 operators in complex assemblies based on virtual reality and process mining


Industry 4.0 aims at integrating machines and operators through network connections and information management. It proposes the use of a set of technologies in industry, such as data analysis, Internet of Things, cloud computing, cooperative robots, and immersive technologies. This paper presents a training system for industrial operators in assembly tasks, which takes advantage of tools such as virtual reality and process mining. First, expert workers use an immersive interface to perform assemblies according to their experience. Then, process mining algorithms are applied to obtain assembly models from event logs. Finally, trainee workers use an improved immersive interface with hints to learn the assemblies that the expert workers introduced in the system. A toy example has been developed with building blocks and tests have been performed with a set of volunteers. The results show that the proposed training system, based on process mining and virtual reality, is competitive against conventional alternatives. Furthermore, user evaluations are better in terms of mental demand, perception, learning, results, and performance.
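As a toy illustration of the process-mining step (not the paper's algorithms), the sketch below discovers a directly-follows model from invented assembly event logs; the most frequent successor of each step could then drive the hints shown to trainees:

    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    # Each trace lists the ordered steps one expert performed in the VR interface.
    event_log = [
        ["pick_base", "attach_column", "attach_arm", "inspect"],
        ["pick_base", "attach_arm", "attach_column", "inspect"],
        ["pick_base", "attach_column", "attach_arm", "inspect"],
    ]

    # Count how often each activity directly follows another across all traces.
    dfg = Counter(pair for trace in event_log for pair in pairwise(trace))

    # The most frequent transitions form a simple assembly model; the top
    # successor of each step could be shown to trainees as a hint.
    for (a, b), count in dfg.most_common():
        print(f"{a} -> {b}: {count}")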



J.J. Roldán, E. Crespo, A. Martín-Barrio, E. Peña-Tapia and A. Barrientos. A training system for Industry 4.0 operators in complex assemblies based on virtual reality and process mining. Robotics and Computer-Integrated Manufacturing, 59, 305-316, 2019. Impact Factor (2018): 3.464, Q1. Article

Wednesday, April 10, 2019

Showing our robots at IRM2019

I took part in the stand of the Robotics and Cybernetics Research Group (RobCib) at the Industriales Research Meeting (IRM2019), an event organized by the Industrial Engineering School of the Technical University of Madrid to disseminate its research. At the stand, we showed the group's different robots: a mobile manipulator, three search and rescue robots and a hyper-redundant robot. Additionally, some volunteers tested our virtual reality interface for monitoring the state of a smart city.