Cognitive Robotics

The process of endowing a robot with intelligent behaviours through a processing architecture that allows it to achieve complex goals in the complex human world

Attention System

Progressive improvement of the social capabilities of robots, based on principles of human-human interaction

Motor Control Theory

On top of cognitive models, robots need to display behaviours that humans can understand and to interpret human behaviours in turn

Video: log-polar visual attention system and oculomotor control

The humanoid robot iCub extracts features of the object in the scene, computes the Winner-Take-All salient stimulus, and performs saccade and vergence oculomotor actions to segment the object at zero disparity

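As a rough illustration of the winner-take-all step described above, the sketch below combines a few feature maps into a saliency map and picks the most salient pixel as the next saccade target; the map weights and array shapes are illustrative assumptions, not the actual iCub implementation.

```python
import numpy as np

def winner_take_all(feature_maps, weights=None):
    """Combine normalized feature maps into a saliency map and
    return the (row, col) of the most salient location."""
    if weights is None:
        weights = [1.0] * len(feature_maps)
    saliency = np.zeros_like(feature_maps[0], dtype=float)
    for fmap, w in zip(feature_maps, weights):
        rng = fmap.max() - fmap.min()
        normed = (fmap - fmap.min()) / rng if rng > 0 else np.zeros_like(fmap, dtype=float)
        saliency += w * normed
    winner = np.unravel_index(np.argmax(saliency), saliency.shape)
    return winner, saliency

# Illustrative use: three random arrays standing in for intensity,
# colour and orientation conspicuity maps.
maps = [np.random.rand(64, 64) for _ in range(3)]
target, _ = winner_take_all(maps)
print("Saccade target (pixel coordinates):", target)
```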

Video: biological motion in control of the humanoid robot end-effector

The biological motion programmed on the humanoid robot activates a response in the insula of the human observer, demonstrating that the emotional trait of the action is correctly communicated to the observer

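"Biological motion" of an end-effector is commonly modelled with minimum-jerk trajectories, which reproduce the bell-shaped velocity profile of human reaching. The sketch below illustrates such a profile under that assumption; it is not the controller actually running on the robot.

```python
import numpy as np

def minimum_jerk(x0, x1, duration, steps=100):
    """Minimum-jerk position profile between x0 and x1 over `duration` seconds,
    a standard model of smooth, human-like point-to-point motion."""
    t = np.linspace(0.0, duration, steps)
    tau = t / duration
    # Classic 5th-order polynomial: bell-shaped velocity, zero velocity
    # and acceleration at both endpoints.
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s, t

pos, t = minimum_jerk(x0=0.0, x1=0.3, duration=1.5)  # e.g. a 30 cm reach in 1.5 s
vel = np.gradient(pos, t)
print("Peak velocity (m/s):", vel.max())
```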

Video: AudioVisual cognitive architecture for autonomous learning of face localisation by a humanoid robot

The biologically inspired multisensory cognitive model of perception guides deep learning in order to enable optimal social behaviour in the humanoid robot iCub

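One simplified way to read such an architecture is that an auditory localiser provides weak spatial labels that supervise the visual face localiser. The sketch below is a hypothetical illustration of that idea; the spherical-head ITD model, field of view and pseudo-labelling rule are all assumptions, not the published model.

```python
import numpy as np

def audio_azimuth(interaural_time_diff, max_itd=0.00066):
    """Rough azimuth estimate (radians) from an interaural time difference,
    assuming a simple spherical-head model (max ITD ~0.66 ms)."""
    return np.arcsin(np.clip(interaural_time_diff / max_itd, -1.0, 1.0))

def pseudo_label(image_width, azimuth, fov=np.radians(60)):
    """Map an audio azimuth to an approximate image column, usable as a
    weak label for training a visual face localiser."""
    frac = np.clip(azimuth / (fov / 2) * 0.5 + 0.5, 0.0, 1.0)
    return int(frac * (image_width - 1))

# Illustrative use: a voice slightly to the right yields a pseudo-label
# near the right side of a 320-pixel-wide frame.
col = pseudo_label(320, audio_azimuth(0.0003))
print("Weak label column for the visual learner:", col)
```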

Visual and Auditory Perception

Robots designed to be present in our lives need to share cognitive models with humans to promote reciprocal understanding

Predictive coding of spatial configuration of complex environment

Recursive and continuous refinement cycles based on Bayes' theorem form new predictions of the world for the humanoid robot
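As a concrete illustration of such a refinement cycle, the sketch below applies Bayes' theorem recursively to a discrete belief over candidate object locations; the grid size and likelihood values are purely illustrative.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One refinement cycle: posterior is proportional to likelihood x prior."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Belief over 10 candidate locations, initially uniform.
belief = np.full(10, 0.1)
# Each observation states how compatible every location is with the sensed
# data (hypothetical likelihoods for two consecutive observations).
for likelihood in [np.array([1, 1, 2, 5, 9, 5, 2, 1, 1, 1], float),
                   np.array([1, 1, 1, 4, 9, 6, 2, 1, 1, 1], float)]:
    belief = bayes_update(belief, likelihood)
print("Most probable location after two cycles:", belief.argmax())
```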

Optimal oculomotor control improves visual perception for the humanoid robot

Specific motor strategies guided by attention (proVISION) to solve localisation based on visual perception
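A simple example of how an oculomotor strategy supports localisation: once the cameras verge on a target at zero disparity, the target's depth follows from the vergence angle and the inter-camera baseline. The sketch below uses a symmetric-vergence approximation with illustrative numbers, not iCub calibration values.

```python
import numpy as np

def depth_from_vergence(baseline, vergence_angle):
    """Distance to a fixated target under symmetric vergence:
    depth = (baseline / 2) / tan(vergence / 2)."""
    return (baseline / 2.0) / np.tan(vergence_angle / 2.0)

# Illustrative numbers: 68 mm baseline, 4 degrees of total vergence.
print("Estimated depth (m):", depth_from_vergence(0.068, np.radians(4.0)))
```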

Publication Selection

Main publications in this research field resulting from state-of-the-art research and technological breakthroughs

2020 Audiovisual cognitive architecture for autonomous learning of face localisation by a Humanoid Robot

Gonzalez Billandon J., Sciutti A., Tata M., Sandini G., Rea F., International Conference on Robotics and Automation, ICRA2020

Abstract

An integrated model for the coordination of whole-body movements of a humanoid robot with a compliant ankle, similar to the human case, is described. It includes a synergy formation part, which takes into account the motor redundancy of the body model, and an intermittent controller, which robustly stabilizes postural sway movements, thus combining the hip strategy with the ankle strategy.


Abstract

Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend.


Abstract

Nowadays, robots need to be able to interact with humans and objects in a flexible way and should be able to share the same knowledge (physical and social) as their human counterpart. Therefore, there is a need for a framework for expressing and sharing knowledge in a meaningful way by building a world model. In this paper, we propose a new framework for human–robot interaction that uses ontologies as a powerful way of representing information and promoting the sharing of meaningful knowledge between different agents. Furthermore, ontologies are powerful notions able to conceptualise the world in which an agent such as a robot is situated. In this research, an ontology is considered an improved solution to the grounding problem, enabling interoperability between human and robot. The proposed system has been evaluated on a large number of test cases; results were very promising and support the implementation of the solution.


Abstract

Tantalizing evidence derived from psychophysics and developmental psychology experiments has shown that attention is task-dependent. Two characteristics of human control of attention are very relevant for humanoid robots, namely the ability to predict the context (task dependence) from the observed stimuli, and the ability to learn an appropriate movement strategy, perhaps over developmental time scales. In this paper we aim at implementing these features to control attention in a humanoid robot by including a set of trajectory predictors in the simple but effective form of Kalman filters and, more importantly, a reinforcement-learning-based process that uses the predictors and the complete action repertoire of the robot to generate a suitably optimal action sequence. Preliminary experiments show that the system indeed works correctly.


Abstract

Seeing the world through the eyes of a child is always difficult. Designing a robot that might be liked and accepted by young users is therefore particularly complicated. We investigated children's opinions on which features are most important in an interactive robot during a popular science event where we exhibited the iCub humanoid robot to a mixed public of various ages. From the observation of the participants' reactions to various robot demonstrations and from a dedicated ranking game, we found that children's requirements for a robot companion change considerably with age. Before 9 years of age, children give more relevance to a human-like appearance, while older kids and adults pay more attention to robot action skills. Additionally, the possibility to see and interact with a robot has an impact on children's judgments, especially convincing the youngest to also consider perceptual and motor abilities in a robot, rather than just its shape. These results suggest that robot design needs to take into account the different prior beliefs that children and adults might have when they see a robot with a human-like shape.


Collaboration in the specific research

Collaborations activated thanks to the research carried out in this field

Products and Draft Patents

AuditoryAI

The invention is a new process to register the existence of sound objects, localise them, prioritise them, and direct the behaviour of an agent (e.g. a robot) towards them.
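A hypothetical sketch of the pipeline this description suggests: registered sound objects are ranked by a simple salience score and the top one sets the agent's heading. The field names and scoring weights are assumptions for illustration, not the patented process.

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    azimuth_deg: float   # estimated direction of arrival
    loudness: float      # relative energy of the source
    novelty: float       # how recently/unexpectedly it appeared (0..1)

def prioritize(sounds):
    """Rank registered sound objects by a simple salience score."""
    return sorted(sounds, key=lambda s: 0.6 * s.loudness + 0.4 * s.novelty, reverse=True)

registry = [SoundObject(-30.0, 0.4, 0.9), SoundObject(45.0, 0.8, 0.1)]
target = prioritize(registry)[0]
print(f"Orient agent towards azimuth {target.azimuth_deg} deg")
```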

gazeTracking

The product endows an intelligent system equipped with a single camera with the ability to infer gaze direction from head orientation and iris position.
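A heavily simplified sketch of the idea: gaze direction is approximated by adding an eye-in-head angle, derived from the iris offset within the eye region, to the head orientation. The linear model and parameter values below are illustrative assumptions, not the product's actual algorithm.

```python
import numpy as np

def gaze_yaw(head_yaw_deg, iris_offset, eye_fov_deg=45.0):
    """Approximate horizontal gaze direction from head orientation and iris position.
    iris_offset: horizontal iris displacement within the eye region, in [-1, 1]."""
    eye_in_head = np.clip(iris_offset, -1.0, 1.0) * (eye_fov_deg / 2.0)
    return head_yaw_deg + eye_in_head

# Head turned 10 degrees right, iris shifted 40% towards the left eye corner.
print("Estimated gaze yaw (deg):", gaze_yaw(10.0, -0.4))
```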


Testimonials


Prof. Samia Nefti Meziani / Robotics and Automation, Salford University


Prof. Davide Brugali / Robotics, Università degli Studi di Bergamo

Partners

In the quest to enable functional robots in our lives, trusted partners have shared the road towards impactful solutions.

Phone

t: +39 010 71781420