Date: 2020–2023
Budget: 467 400 PLN
Project Manager: Dominik Belter
Funder: NCN
The next step in the development of robot perception systems, and the goal of this project, is to infer the properties and meaning of objects in the environment. An example scenario is the use of neural networks to estimate the properties of objects such as doors, drawers, and switches. In this scenario, a robot that uses a two-dimensional representation of the environment (an RGB image) together with a depth image should infer the potential motion of articulated objects, their kinematic constraints, and their current state.

Another example is the reconstruction of 3D objects. Unlike humans, robots are not able to reconstruct 3D objects from a single RGB-D image. A modern robot perception system should reconstruct surfaces that are invisible because they are occluded by other objects or lie on the unseen side of an object. Such capabilities can be obtained by using artificial neural networks and by aggregating knowledge about the robot's environment and object properties through learning mechanisms.

With such a perception system, robots will be able to better plan their motion and interaction with the environment from individual images, without time-consuming scanning and model building. The aim of the project is also to answer the question of how robots should represent the environment and how they can “understand” the meaning of surrounding objects, their shape, and their kinematic relations.
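As background to the single-image RGB-D scenario described above, the sketch below shows how a depth image is typically back-projected into a 3D point cloud using the standard pinhole camera model. This is a generic illustration, not code from the project; the function name and the intrinsic parameters (fx, fy, cx, cy) are placeholder values chosen for the example.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    (Illustrative helper; intrinsics are assumed, not project values.)"""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (H*W, 3) array and drop invalid (zero-depth) pixels
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy example: a flat 4x4 depth image, every pixel 1 m from the camera
depth = np.ones((4, 4))
cloud = backproject_depth(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

A point cloud obtained this way covers only the surfaces visible from the camera; predicting the occluded and unseen parts of objects is exactly the reconstruction problem the project addresses with neural networks.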