European Projects

These are European Union-funded projects in which VisLab is involved, ordered by start date (most recent first).

ACTICIPATE — Action understanding in human and robot dyadic interaction

Funding: European Commission. Reference: H2020-EU.1.3.2-752611. Dates: June 2016 – Aug. 2018. Website: under construction

Humans have fascinating skills for grasping and manipulating objects, even in complex, dynamic environments, and execute coordinated movements of the head, eyes, arms, and hands to accomplish everyday tasks. When working in a shared space, during dyadic interaction tasks, humans engage in non-verbal communication, understanding and anticipating the actions of their working partners and coupling their own actions in a meaningful way. The key to this mind-boggling performance is two-fold: (i) the capacity to adapt and plan motion according to unexpected events in the environment, and (ii) the use of a common motor repertoire and action model to understand and anticipate the actions and intentions of others as if they were our own. Despite decades of progress, robots are still far from the level of performance that would enable them to work with humans in routine activities.

ACTICIPATE addresses the challenge of designing robots that can share workspaces and co-work with humans. We rely on human experiments to learn a model/controller that allows a humanoid to generate and adapt its upper-body motion in dynamic environments, during reaching and manipulation tasks, and to understand, predict, and anticipate the actions of a human co-worker, as needed in manufacturing, assistive and service robotics, and domestic applications. These application scenarios call for three main capabilities, which ACTICIPATE will tackle: a motion generation mechanism (primitives) with a built-in capacity for instant reaction to changes in dynamic environments; a framework to combine primitives and execute coordinated movements of head, eyes, arm, and hand in a way similar (and thus predictable) to human movements, and to model the action/movement coupling between co-workers in dyadic interaction tasks; and the ability to understand and anticipate human actions, based on a common motor system/model that is also used to synthesize the robot's goal-directed actions in a natural way.
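
The abstract does not name a specific primitive formulation. One widely used candidate for motion primitives with built-in reactivity is the dynamic movement primitive (DMP); the one-dimensional sketch below (Python/NumPy, hypothetical gains, not the project's controller) shows how such a primitive converges smoothly to a goal that could be changed mid-execution:

    import numpy as np

    def run_dmp(y0, goal, forcing, tau=1.0, alpha=25.0, beta=6.25, dt=0.001):
        """Integrate a 1-D dynamic movement primitive (illustrative sketch):
        y'' = alpha * (beta * (goal - y) - y') + f(x), a spring-damper system
        plus a learned forcing term f driven by a decaying phase variable x."""
        y, yd, x, traj = y0, 0.0, 1.0, []
        while x > 1e-3:
            ydd = alpha * (beta * (goal - y) - yd) + forcing(x)
            yd += ydd * dt / tau
            y += yd * dt / tau
            x += -2.0 * x * dt / tau   # canonical system: x' = -a_x * x
            traj.append(y)
        return np.array(traj)

    # With zero forcing the primitive is a smooth point-to-point reach;
    # updating `goal` during integration reshapes the remaining movement,
    # which is the kind of instant reaction the description refers to.
    path = run_dmp(y0=0.0, goal=0.5, forcing=lambda x: 0.0)
    print(path[-1])   # converges near 0.5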

POETICON++ — Robots Need Language: A Computational Mechanism for Generalisation and Generation of New Behaviours in Robots

Funding: European Commission. Reference: FP7-ICT-288382. Dates: Jan. 2012 – Dec. 2015. Website: http://www.poeticon.eu/

The main objective of POETICON++ is the development of a computational mechanism for the generalisation of motor programs and visual experiences in robots. To this end, it will integrate natural language and visual action/object recognition tools with motor skills and learning abilities on the iCub humanoid. Tools and skills will engage in a cognitive dialogue for novel action generalisation and creativity experiments in two scenarios of “everyday activities”, comprising (a) behaviour generation through verbal instruction and (b) visual scene understanding. POETICON++ views natural language as a necessary tool for endowing artificial agents with generalisation and creativity in real-world environments.
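
As a deliberately toy illustration of the first scenario, behaviour generation through verbal instruction, the Python snippet below grounds a spoken command in a sequence of motor primitives through a hand-written lexicon. The vocabulary and primitive names are invented for illustration and are not the POETICON++ architecture:

    # Hypothetical lexicon: each verb is grounded in a motor program.
    PRIMITIVES = {
        "reach": lambda obj: f"reach({obj})",
        "grasp": lambda obj: f"reach({obj}); grasp({obj})",
        "pour":  lambda obj: f"reach({obj}); grasp({obj}); tilt({obj})",
    }

    def instruction_to_actions(sentence):
        """Map a 'verb object' instruction to primitive calls. An unknown
        verb is where the generalisation mechanism would have to step in."""
        verb, obj = sentence.lower().split(maxsplit=1)
        if verb not in PRIMITIVES:
            raise KeyError(f"no motor program for '{verb}': generalisation needed")
        return PRIMITIVES[verb](obj)

    print(instruction_to_actions("Pour cup"))   # reach(cup); grasp(cup); tilt(cup)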

Dico(re)²s — Discount Coupon Recommendation and Redemption System

Funding: European Commission. Reference: FP7-SME-262451. Dates: July 2011 – June 2013. Website: http://www.dicore2s.com/

Dico(re)²s develops and deploys a coupon-based discount campaign platform to provide consumers and retailers/manufacturers with a personalized environment for maximum customer satisfaction and business profitability.

First-MM — Flexible Skill Acquisition and Intuitive Robot Tasking for Mobile Manipulation

Funding: European Commission. Reference: FP7-ICT-248258. Dates: Feb. 2010 – June 2013. Website: http://www.first-mm.eu/

The goal of First-MM is to build the basis for a new generation of autonomous mobile manipulation robots that can flexibly be instructed to perform complex manipulation and transportation tasks. The project will develop a novel robot programming environment that allows even non-expert users to specify complex manipulation tasks in real-world environments. In addition to a task specification language, the environment includes concepts for probabilistic inference and for learning manipulation skills from demonstration and from experience.
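
Purely as a hypothetical sketch of what specifying a task in such an environment might look like to a non-expert user (the skill names, targets, and parameters below are invented for illustration and are not the First-MM language):

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        skill: str                  # e.g. "navigate", "grasp", "place"
        target: str                 # symbolic target the robot must resolve
        params: dict = field(default_factory=dict)

    # A fetch-and-carry task written as a declarative sequence of skills.
    fetch_bottle = [
        Step("navigate", "kitchen"),
        Step("grasp", "bottle", {"grasp_model": "learned_from_demonstration"}),
        Step("navigate", "living_room"),
        Step("place", "table", {"tolerance_m": 0.05}),
    ]

    for step in fetch_bottle:
        print(step.skill, "->", step.target, step.params)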

RoboSoM — A Robotic Sense of Movement

Funding: European Commission. Reference: FP7-ICT-248366. Dates: Dec. 2009 – May 2013. Website: http://www.robosom.eu/

This project aims at advancing the state of the art in motion perception and control in a humanoid robot. The fundamental principles to explore are rooted in theories of human perception: Expected Perception (EP) and the Vestibular Unified Reference Frame.
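
Expected Perception refers to predicting the sensory consequences of one's own actions and exploiting the mismatch with what is actually sensed. A minimal predict-then-compare loop in that spirit (illustrative only, with hypothetical values; not the project's model):

    def expected_perception_step(state, command, observe, predict, gain=0.5):
        """One cycle of a predict-then-compare loop: a forward model predicts
        the sensory input expected after `command`, and the prediction error
        (the 'surprise') is used to correct the internal state estimate."""
        expected = predict(state, command)   # what we expect to sense
        actual = observe()                   # what the sensors report
        error = actual - expected            # mismatch with expectation
        return state + command + gain * error, error

    # Toy 1-D example: the forward model slightly underestimates the motion.
    predict = lambda s, u: s + u
    observe = lambda: 1.2                    # true sensed position
    state, err = expected_perception_step(0.0, 1.0, observe, predict)
    print(state, err)                        # roughly 1.1 and 0.2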

HANDLE — Developmental Pathway Towards Autonomy and Dexterity in Robot In-Hand Manipulation

Funding: European Commission. Reference: FP7-ICT-231640. Dates: Feb. 2009 – Feb. 2013. Website: http://www.handle-project.eu/

This project aims at providing advanced perception and control capabilities to the Shadow Robot hand, one of the most advanced robotic hands in mechanical terms. We follow two paradigms of human learning, learning by imitation and learning by self-exploration, to enable the system to grasp and manipulate objects with different characteristics. The characteristics and usages of an object (its affordances) determine the way the hand performs grasping and manipulation actions.
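
As a toy illustration of affordance-driven grasp selection (the affordance categories and hand preshapes below are hypothetical, not the project's taxonomy):

    # Hypothetical mapping from recognised affordances to hand preshapes.
    GRASP_FOR_AFFORDANCE = {
        "pour":  "cylindrical_power_grasp",   # e.g. a bottle to be poured
        "write": "tripod_precision_grasp",    # e.g. a pen
        "press": "single_finger_poke",        # e.g. a button
    }

    def choose_grasp(affordances):
        """Pick the preshape of the first recognised affordance, falling
        back to a generic power grasp when nothing matches."""
        for a in affordances:
            if a in GRASP_FOR_AFFORDANCE:
                return GRASP_FOR_AFFORDANCE[a]
        return "generic_power_grasp"

    print(choose_grasp(["write", "press"]))   # tripod_precision_grasp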

URUS — Ubiquitous Networking Robotics in Urban Settings

Funding: European Commission. Reference: FP6-IST-045062. Dates: Dec. 2006 – Nov. 2009. Website: http://urus.upc.edu/

In this project we analyze and test the idea of deploying a network of robots (robots, intelligent sensors, devices, and communications) to improve the quality of life in urban areas. The URUS project focuses on designing a network of robots that cooperatively interact with human beings and with the environment in tasks of assistance, transportation of goods, and surveillance in urban areas. Specifically, our objective is to design and develop a cognitive network robot architecture that integrates cooperating urban robots, intelligent sensors, intelligent devices, and communications.

CONTACT — Learning and Development of Contextual Action

Funding: European Commission. Reference: FP6-NEST-5010. Dates: Sept. 2005 – Feb. 2009. Website: http://wiki.icub.org/contact

The links between different aspects of human learning are being explored by the CONTACT project. Partners with expertise in robotics, neuroscience, and child development are exploring the parallels between learning to speak and learning to make the gestures involved in both communication and manipulation. This is fundamental research, but it may have practical applications in the design of artificial intelligence systems and in the diagnosis and treatment of learning disabilities.

RobotCub — Robotic Open-Architecture Technology for Cognition, Understanding and Behaviour

Funding: European Commission. Reference: FP6-IST-004370. Dates: Sept. 2004 – Jan. 2010. Website: http://www.robotcub.org/

The main goal of RobotCub is to study cognition through the implementation of a humanoid robot the size of a 3.5-year-old child: the iCub. This is an open project in many different ways: we distribute the platform openly, we develop the software as open source, and we are open to including new partners and forming collaborations worldwide.

CAVIAR — Context-Aware Vision Using Image-Based Active Recognition

Funding: European Commission. Reference: FP6-IST-2001-37540. Dates: Oct. 2002 – Sept. 2005. Website: http://homepages.inf.ed.ac.uk/rbf/CAVIAR/

The main objective of CAVIAR is to address the scientific question: can rich local image descriptions from foveal and other image sensors, selected by a hierarchical visual attention process and guided and processed using task, scene, function, and object contextual knowledge, improve image-based recognition processes?

MIRROR — Mirror Neurons for Recognition

Funding: European Commission. Reference: FP5-FET-2000-28159. Dates: 2001 – 2004. Website: n/a. Partners: DIST-University of Genova (I), University of Ferrara (I), Dept. of Psychology, University of Umeå (SE).

The goals of MIRROR are: (1) to realize an artificial system that learns to communicate with humans by means of body gestures, and (2) to study the mechanisms used by the brain to learn and represent gestures. The biological basis is the existence, in the primate premotor cortex, of a motor resonant system, the so-called mirror neurons, activated both during the execution of goal-directed actions and during the observation of similar actions performed by others. This unified representation may subserve the learning of goal-directed actions during development and the recognition of motor acts when visually perceived. In MIRROR we investigate this ontogenetic pathway in two ways: (1) by realizing a system that learns to move AND to understand movements on the basis of visually perceived motion and the associated motor commands, and (2) by correlated electrophysiological experiments.
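
The first investigation line, learning to move and to understand movement from paired visual and motor data, can be caricatured with a very simple sketch: a linear map fitted by least squares between visual motion features and the motor commands that produced them, then reused to interpret an observed gesture in motor terms (Python/NumPy, synthetic data; not the MIRROR system):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(3, 5))                 # hidden motor-to-feature relation
    motor = rng.normal(size=(200, 3))           # commands issued while acting
    visual = motor @ M + 0.01 * rng.normal(size=(200, 5))  # perceived self-motion

    # Learn the inverse map (visual -> motor) by least squares.
    W, *_ = np.linalg.lstsq(visual, motor, rcond=None)

    observed = visual[0]                        # a gesture seen (here: replayed)
    inferred_command = observed @ W             # "understood" in motor terms
    print(np.allclose(inferred_command, motor[0], atol=0.1))   # True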

OMNIVIEWS — Omnidirectional Vision System

Funding: European Commission. Reference: FP5-FET-1999-29017. Dates: Sept. 2000 – Sept. 2001. Website: http://cmp.felk.cvut.cz/projects/omniviews/. Partners: DIST-University of Genova (I), Czech Technical University (CZ).

The goal of the project is to integrate optical, optoelectronic, hardware, and software technology to realize a smart visual sensor, and to demonstrate its utility in key application areas. In particular, our intention is to design and realize a low-cost, miniaturized digital camera that acquires panoramic (360-degree) images and performs useful low-level processing on the incoming image stream in real time. Target applications include surveillance, quality control, and mobile robot and vehicle navigation.
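
For a sensor of this kind, the raw image is a circular view that is typically unwrapped into a rectangular panorama before further processing. The sketch below (Python/NumPy, nearest-neighbour sampling, hypothetical geometry; not the project's pipeline) shows the core polar-to-Cartesian remapping:

    import numpy as np

    def unwrap_panorama(img, center, r_min, r_max, out_w=360, out_h=64):
        """Unwrap a circular omnidirectional image into a panorama strip:
        columns sweep 360 degrees of azimuth, rows sweep the radial range."""
        cx, cy = center
        theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
        radius = np.linspace(r_min, r_max, out_h)
        xs = (cx + np.outer(radius, np.cos(theta))).round().astype(int)
        ys = (cy + np.outer(radius, np.sin(theta))).round().astype(int)
        return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]

    # Toy usage on a synthetic 200x200 image; the mirror geometry is made up.
    mirror_view = np.random.default_rng(1).integers(0, 255, (200, 200), np.uint8)
    pano = unwrap_panorama(mirror_view, center=(100, 100), r_min=20, r_max=95)
    print(pano.shape)   # (64, 360)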

NARVAL — Navigation of Autonomous Robots Via Active Environmental Perception

Funding: European Commission. Reference: FP4-Esprit LTR-30185. Dates: June 1998 – May 2001. Website: http://www.isr.ist.utl.pt/vislab/NARVAL/index.htm. Partners: CNRS-Université de Nice Sophia Antipolis (FR), Thomson Sintra ASM (FR), DIST-University of Genoa (I).

The goal of this project is to develop non-intruding and reliable navigation systems that give a robot the ability to select natural landmarks and to navigate with respect to them, extending in this way the autonomy range of robots operating in unknown and unstructured environments. Reliability is achieved by continuously controlling the uncertainty associated with knowledge of the environment and of the robot's position and orientation.

In the context of the project, non-intrusiveness means that the robot must be able to operate without special-purpose landmarks being added to its environment. Non-intrusive operation in unstructured environments precludes navigation with respect to a set of handcrafted “landmarks” and requires the robot to infer its position from learned natural landmarks of the environment using its perception system.

Instead of passive reconstruction of the working space, perception is approached as a process of selectively extracting from the world the information needed to accomplish a given task, trading generality for specificity and gaining in simplicity and robustness. Perception is no longer a separate off-line module but an integral part of the closed-loop control system. This coupling will be explicitly addressed at the control level by assessing the compatibility of the current state of the robot's knowledge of its environment with its mission and safety requirements.

The availability of such systems would have a considerable impact on many economic, social, and industrial activities, such as control of marine pollution, surveillance of restricted areas, surveillance of equipment, agriculture, underwater cartography, and marine biology studies, to mention but a few.
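
The description does not commit to a particular estimation technique, but continuously controlling positional uncertainty is naturally expressed as stochastic filtering. As an illustrative sketch (Python/NumPy, hypothetical noise values; not the NARVAL implementation), the extended Kalman filter step below updates a 2-D position estimate, and its uncertainty, from the measured range to one learned landmark:

    import numpy as np

    def ekf_range_update(x, P, landmark, z, R=0.05):
        """x: position estimate (2,), P: 2x2 covariance, z: measured range
        to a known landmark. Returns the corrected estimate and covariance."""
        delta = x - landmark
        pred = np.linalg.norm(delta)       # predicted range h(x)
        H = (delta / pred).reshape(1, 2)   # Jacobian of h at the estimate
        S = H @ P @ H.T + R                # innovation covariance (1x1)
        K = P @ H.T / S                    # Kalman gain
        x = x + (K * (z - pred)).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([0.0, 0.0]), np.eye(2)   # very uncertain initial position
    x, P = ekf_range_update(x, P, landmark=np.array([3.0, 4.0]), z=4.6)
    print(x, np.trace(P))                    # estimate moves, uncertainty shrinks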

SMART II — Semi Autonomous Monitoring and Robotics

Funding: European Commission. Reference: FP4-TMR Network-FMRX-CT96-0052. Dates: 1997 – 2000. Website: n/a

The SMART network was founded in 1993. It brings together 13 leading European laboratories and small enterprises working in the areas of computer vision and mobile robotics for surveillance and monitoring, with the objective of developing technologies in these areas. The SMART II workplan comprises activities in the areas of “Video Surveillance and Monitoring”, “Mobility for Surveillance and Monitoring”, and “Applications of Techniques for Surveillance and Monitoring”. The tasks in these workpackages are designed to build on and complement existing research activities through the exchange of researchers and collaborative research.

VIRTUOUS — Autonomous Acquisition of Virtual Reality Models from Real World Scenes

Funding: European Commission. Reference: FP4-IP-960174. Dates: Jan. 1997 – Dec. 1999. Website: n/a. Partners: Centre for Vision, Speech and Signal Processing (University of Surrey, UK, coordinator), Institute of Control Theory and Robotics (Slovak Academy of Sciences, Slovakia), Institute of Information Theory and Automation (Czech Academy of Sciences, Czech Republic).

As virtual reality (VR) systems improve in performance, attention is turning towards the content of virtual worlds and what can be done within them. To make virtual worlds interesting, detailed scene models must be built. The models need to contain 3D shapes as complex as those we are familiar with from our everyday experience. Shape alone is not enough: the real world has an infinite variety of colours and textures, which also need to be included in virtual worlds.

Developing these models using 3D modelling software becomes more time-consuming as scene complexity increases. The objective of this project is to capture virtual reality (VR) models of real-world scenes and then to use these models. Within this project we will examine two sensor systems for acquiring VR models. The acquisition of single-view range images using structured-light projectors is approaching commercial maturity; the University of Surrey will investigate the use of multiple range images combined with colour images to produce VR models.
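
Structured-light range sensing of this kind recovers depth by triangulating between the projector and the camera. As a hedged illustration (rectified projector-camera geometry assumed, all numbers hypothetical), depth follows the familiar stereo relation Z = f·b/d:

    def depth_from_stripe(f_px, baseline_m, disparity_px):
        """Depth of a surface point from the pixel displacement (disparity)
        of a projected stripe, for a rectified projector-camera pair."""
        if disparity_px <= 0:
            raise ValueError("stripe not displaced: point at infinity or mismatch")
        return f_px * baseline_m / disparity_px

    # A stripe displaced by 40 px, with a 600 px focal length and a 10 cm
    # baseline, places the surface point at 1.5 m from the sensor.
    print(depth_from_stripe(600.0, 0.10, 40.0))   # 1.5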

The Instituto Superior Técnico / Instituto de Sistemas e Robótica will use computer vision techniques to acquire scene models from video sequences taken from a mobile platform. Although the objective is the same as the University of Surrey's, this is a more ambitious sensor configuration: it has a lower cost but will need more powerful software.

For a virtual world to have a satisfactory look and feel, it must have lifelike colour and texture. Not only do these provide useful cues to a subject navigating in such an environment, they also aid the accurate, detailed reconstruction of the environment. The Institute of Information Theory and Automation will address texture segmentation and synthesis; their objectives include reducing the space and time overheads of texture in VR systems.

Virtual reality systems can be used for a variety of applications in entertainment, medicine, and manufacturing, so producing detailed models is of generic interest. Within this project we will address one particular application: a Virtual Reality Robot Arm Trainer, to be developed by the Institute of Control Theory and Robotics (ICTR). This will also provide a mechanism to validate the scene models.

VIRSBS — Visual Intelligent Recognition for Secure Banking Services

Funding: European Commission. Reference: FP4-Esprit LTR-21894. Dates: 1996 – 1999. Website: n/a. Partners: ISR-DIST-Università di Genova (coordinator, I), École Polytechnique Fédérale de Lausanne (CH), Maynooth College (IE).

The goal of this Reactive LTR project is to realize a prototype autonomous station for personal identification. This station will include all the features required to be integrated into a new generation of automated security check-points along corridors, passageways, or access doors, and into the next generation of automatic teller machines. The prototype will be used to perform a significant set of statistical tests on personal identification.

Secure access control is a key issue in banking services. Magnetic cards and personal identification numbers, currently adopted for accessing automatic tellers, do not provide a sufficient degree of security and are a likely source of unauthorized operations. As for access to restricted areas, it usually requires direct surveillance by guards or indirect surveillance by a human operator through a monitoring system. Even then it is often difficult, due to fatigue or other distracting factors, to guarantee continuously high performance in this task.

The project is mainly focused on banking services, in particular on secure and safe control of access to key areas in the bank building and on cross-checking the personal identity of people requesting banking transactions. A system based on visual recognition will have a major impact on man-machine interaction, providing a more natural way for the customer to interact with the banking security system. The project will also have a potential impact in various scenarios, ranging from generic surveillance in buildings and parking areas to security control at check-points in airports or railway stations.

Our approach will be to exploit newly developed techniques from computer vision and robotics. Iconic and feature-based techniques will be used in the first instance; in case of ambiguities, performance will be improved by using stereo analysis and the general theory of projective invariants. A breakthrough with respect to current technology will come from the use of a space-variant image representation and from the realization of an active robotic system able to fixate and track the examined subject.
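
“Iconic” techniques in this sense compare images directly against stored templates. A minimal sketch of such a comparison, normalized cross-correlation on toy data (Python/NumPy, illustrative only; not the project's recogniser):

    import numpy as np

    def ncc(patch, template):
        """Normalized cross-correlation between two equal-size grey images:
        +1 means identical up to brightness and contrast, ~0 means unrelated."""
        a = patch - patch.mean()
        b = template - template.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(2)
    enrolled = rng.normal(size=(64, 64))      # stored face template (toy data)
    probe = 0.8 * enrolled + 0.2 * rng.normal(size=(64, 64))   # live capture
    print(ncc(probe, enrolled) > 0.9)         # accept when similarity is high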