Research

This Research Network addresses challenges relevant to the societally acceptable design, engineering, deployment, and maintenance of next-generation autonomous systems in everyday life and industrial applications.

Key research areas in focus are:

* Embodied AI

* Spatial Cognition and Computation

* Multimodal Interaction

* Physical Human-Robot Collaboration

* Explainable and Interpretable AI

* Standardisation and Legal Aspects

All these areas are pursued with an overarching focus on HUMAN-CENTRED COLLABORATIVE AUTONOMY.

APPLICATIONS ADDRESSED

* Industrial and Home-Based Robots

* Autonomous Vehicles and Assisted Driving

* Marine Robotics

* Assistive Technologies in Everyday Life (e.g., for rehabilitation and empowerment)


EMBODIED AI FOR ROBOTICS

In the field of robotics, embodied AI refers to AI systems designed to be embodied in a physical robot. The AI is not just a software program running on a computer; it is integrated into a robot whose sensors and actuators allow it to interact with the world. The robot can therefore use its body and environment to gather information and make decisions, much as a human does. For example, an embodied AI robot might navigate a cluttered environment by using its sensors to detect obstacles and its actuators to move around them. By combining AI algorithms with a physical body, embodied AI robots can exhibit more intelligent and adaptable behaviour than non-embodied AI systems.
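A minimal sketch of this sense-act coupling is shown below, assuming hypothetical sensor and actuator interfaces (SimulatedRangeSensor, SimulatedDriveBase) rather than any specific robot platform:

```python
# Minimal sketch of an embodied sense-act loop (hypothetical interfaces,
# not tied to any particular robot platform).
import random


class SimulatedRangeSensor:
    """Stand-in for a distance sensor; returns range to the nearest obstacle in metres."""

    def read(self) -> float:
        return random.uniform(0.1, 3.0)


class SimulatedDriveBase:
    """Stand-in for a simple drive actuator interface."""

    def forward(self, speed: float) -> None:
        print(f"driving forward at {speed:.1f} m/s")

    def turn(self, angle_deg: float) -> None:
        print(f"turning {angle_deg:.0f} degrees to avoid obstacle")


def sense_act_loop(sensor: SimulatedRangeSensor, base: SimulatedDriveBase, steps: int = 5) -> None:
    """Couple perception and action: each decision depends on the body's own sensor readings."""
    for _ in range(steps):
        distance = sensor.read()          # perceive through the robot's sensors
        if distance < 0.5:                # obstacle close: adapt behaviour
            base.turn(45)
        else:                             # path clear: continue
            base.forward(min(distance, 1.0))


if __name__ == "__main__":
    sense_act_loop(SimulatedRangeSensor(), SimulatedDriveBase())
```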

SPATIAL COGNITION AND COMPUTATION

Spatial cognition and computation is a field of research that explores how humans and computers interact with, use, and understand spatial information. It encompasses a wide range of topics, such as cognitive mapping, navigation, and wayfinding, as well as methods for representing, analyzing, and using spatial data and models. It also explores how digital technologies, machine learning, and artificial intelligence can augment human spatial cognition and computation capabilities.
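As one concrete illustration of representing and querying spatial data, the sketch below uses a simple 2-D occupancy grid; the class, grid dimensions, and resolution are illustrative assumptions, not a reference to any particular system in the network:

```python
# Minimal sketch of a 2-D occupancy grid, one common spatial representation,
# showing how spatial data can be stored and queried for navigation.
from typing import List, Set, Tuple

Cell = Tuple[int, int]


class OccupancyGrid:
    def __init__(self, width: int, height: int, resolution_m: float = 0.1) -> None:
        self.width = width
        self.height = height
        self.resolution_m = resolution_m      # metres per cell (illustrative)
        self.obstacles: Set[Cell] = set()

    def mark_obstacle(self, x: int, y: int) -> None:
        self.obstacles.add((x, y))

    def is_free(self, x: int, y: int) -> bool:
        in_bounds = 0 <= x < self.width and 0 <= y < self.height
        return in_bounds and (x, y) not in self.obstacles

    def free_neighbours(self, x: int, y: int) -> List[Cell]:
        """4-connected free neighbours that a planner could expand during wayfinding."""
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [c for c in candidates if self.is_free(*c)]


grid = OccupancyGrid(width=50, height=50)
grid.mark_obstacle(10, 10)
print(grid.free_neighbours(10, 9))   # neighbouring cells still available for navigation
```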

MULTIMODAL INTERACTION

With recent advances in conversational technologies, there is great potential for devices to engage in interactions that go beyond the utilitarian and transactional, opening opportunities for applications that require the rich social mechanisms natural to human-human communication. These applications include social robots capable of assisting the elderly, tutoring students, or acting as therapeutic tools for children with autism. However, socially embodied robots operating in human-centred environments still lack the social and communicative skills required to engage in natural conversation autonomously.

PHYSICAL HUMAN-ROBOT COLLABORATION (pHRC)

Genuine pHRC requires robots and humans to share spaces and interact within a common environment. From a control perspective, novel approaches are required for combining explicit and implicit multimodal information with reactive actuation and task planning. Typically, however, robot decision-making is programmed as state machines in which reactive adaptation is not possible. The problem becomes even harder when humans physically interact with the robot to achieve a common goal. An adequate synergy between AI and novel control approaches for pHRC will pave the way for cobots in real-world scenarios.
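The contrast between a scripted state machine and reactive adaptation can be illustrated with a small sketch; the states, force threshold, and step function below are hypothetical and only indicate how sensed human contact could pre-empt a nominal task sequence:

```python
# Minimal sketch contrasting a fixed state machine with a reactive override
# driven by sensed interaction forces (all states and thresholds are illustrative).
from enum import Enum, auto


class State(Enum):
    APPROACH = auto()
    HANDOVER = auto()
    RETREAT = auto()
    COMPLIANT = auto()   # reactive state entered when the human physically intervenes


FORCE_LIMIT_N = 15.0     # assumed interaction-force threshold


def step(state: State, measured_force_n: float) -> State:
    """One decision step: the scripted sequence can be pre-empted by sensed human contact."""
    if measured_force_n > FORCE_LIMIT_N:
        return State.COMPLIANT           # reactive adaptation, not part of the nominal script
    if state is State.COMPLIANT:
        return State.APPROACH            # resume the task once the human releases the robot
    nominal = {State.APPROACH: State.HANDOVER, State.HANDOVER: State.RETREAT}
    return nominal.get(state, state)     # otherwise follow the scripted sequence


state = State.APPROACH
for force in [2.0, 30.0, 5.0, 3.0]:      # simulated force readings over four control steps
    state = step(state, force)
    print(force, state.name)
```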

EXPLAINABLE AI

Advances in Artificial Intelligence and Robotics have accelerated with the development of novel data- and knowledge-driven methods. Combining these methods makes it possible, to some extent, for robots to explain their decisions. This research area, known as Explainable AI, is gaining great importance in the robotics community. One advantage of such methods is increased human trust when collaborating with robots, since robots will be able to explain their decisions, especially when errors occur or when facing new situations. Although robotics and automation have shown tremendous success in sectors such as manufacturing, food service, and mining, robotic solutions have shown limited results in other important application areas, such as assembly or household activities, where collaborative tasks are required.
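One way such explanations can be produced is to have each decision carry its own justification trace. The sketch below is a hypothetical illustration of this idea; the grasp-selection rules, thresholds, and names are assumptions, not an actual method developed in the network:

```python
# Minimal sketch of a decision that records its own explanation trace, so the robot
# can report why it acted when queried (rule names and thresholds are illustrative).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    action: str
    reasons: List[str] = field(default_factory=list)


def choose_grasp(object_width_cm: float, object_is_fragile: bool) -> Decision:
    decision = Decision(action="pinch_grasp")
    if object_width_cm > 8.0:
        decision.action = "power_grasp"
        decision.reasons.append(f"object width {object_width_cm} cm exceeds 8 cm pinch limit")
    if object_is_fragile:
        decision.action = "pinch_grasp"
        decision.reasons.append("object labelled fragile, overriding to low-force pinch grasp")
    if not decision.reasons:
        decision.reasons.append("defaults applied: small, non-fragile object")
    return decision


d = choose_grasp(object_width_cm=10.0, object_is_fragile=True)
print(d.action)              # pinch_grasp
print("; ".join(d.reasons))  # human-readable justification of the choice
```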

STANDARDISATION AND LEGAL ASPECTS

The challenge is to generalize the interaction models developed for the safety assessment of automated vehicles to the HRC domain. Safety is highly influenced by the user's behaviour and the current dynamic environment, which is why human factors must be studied in different settings, such as vehicles and robots. These studies will help to define and generate context models, parameters, and constraints that support the development of novel AI methods for HRC, where human factors and safety are usually omitted from the AI models.
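A context model of this kind could, for example, make safety parameters explicit and shared across the vehicle and robot domains. The sketch below is a hypothetical illustration; the fields, thresholds, and domain labels are assumptions:

```python
# Minimal sketch of a shared context model with explicit safety constraints,
# reusable across driving and HRC scenarios (all parameter values are illustrative).
from dataclasses import dataclass


@dataclass
class InteractionContext:
    domain: str                 # e.g. "automated_vehicle" or "collaborative_robot"
    human_distance_m: float     # current separation between human and system
    system_speed_mps: float     # current speed of the vehicle or robot


@dataclass
class SafetyConstraint:
    min_distance_m: float
    max_speed_mps: float

    def satisfied_by(self, ctx: InteractionContext) -> bool:
        return (ctx.human_distance_m >= self.min_distance_m
                and ctx.system_speed_mps <= self.max_speed_mps)


# Different domains can reuse the same model with different parameters.
constraints = {
    "collaborative_robot": SafetyConstraint(min_distance_m=0.5, max_speed_mps=0.25),
    "automated_vehicle": SafetyConstraint(min_distance_m=2.0, max_speed_mps=8.0),
}

ctx = InteractionContext(domain="collaborative_robot", human_distance_m=0.3, system_speed_mps=0.2)
print(constraints[ctx.domain].satisfied_by(ctx))   # False: the separation constraint is violated
```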