The central theme of the Research Group for Active Vision is to acquire sensor data and to act according to its interpretation. This leads to a closed loop of sensing and acting. A typical application of this idea can be found in autonomous robots that interpret their environment, choose among alternative strategies to solve their task, and then move the robot or its grippers. Such devices can be used for autonomous transportation or to perform service tasks.
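The sense-interpret-act loop described above can be sketched in a few lines. This is a minimal illustration only; all names (`sense`, `interpret`, `act`, the `world` dictionary) are hypothetical and do not refer to the group's actual software.

```python
def sense(world):
    """Read the current state of the (simulated) environment."""
    return world["distance_to_goal"]

def interpret(measurement):
    """Choose a strategy based on the interpreted sensor data."""
    return "advance" if measurement > 0 else "stop"

def act(world, action):
    """Modify the environment, which closes the loop: the next
    sensing step observes the effect of this action."""
    if action == "advance":
        world["distance_to_goal"] -= 1

def control_loop(world, max_steps=100):
    """Repeatedly sense, interpret, and act until the task is done."""
    for _ in range(max_steps):
        action = interpret(sense(world))
        if action == "stop":
            break
        act(world, action)
    return world

world = control_loop({"distance_to_goal": 5})
print(world["distance_to_goal"])  # 0
```

The key point is that perception and action are not independent stages: each action changes what is sensed next.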
The underlying technology is a key component for the factory of the future (Industry 4.0) as well as for autonomous cars.
Typical sensors are cameras, range sensors, laser scanners, and microphones. In addition, GPS data, velocity measurements, inertial measurements (e.g. from an IMU), etc. can be fused into the sensor data. We typically combine several sensors to increase the reliability of the result and to be tolerant of inaccuracies and errors.
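One standard way to combine sensors so that more reliable ones dominate the result is inverse-variance weighting, a building block of many fusion schemes (e.g. the Kalman filter update). The sketch below is illustrative and assumes independent sensors measuring the same quantity; the numbers are made up.

```python
def fuse(measurements):
    """Fuse independent measurements of the same quantity by
    inverse-variance weighting: sensors with smaller variance
    (higher reliability) receive larger weights.

    measurements: list of (value, variance) pairs, variance > 0.
    Returns (fused_value, fused_variance).
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    # The fused variance is smaller than any single sensor's variance.
    return value, 1.0 / total

# Hypothetical example: a camera estimates a distance of 2.0 m
# (variance 0.04), a laser scanner measures 2.1 m (variance 0.01).
value, variance = fuse([(2.0, 0.04), (2.1, 0.01)])
print(value, variance)  # 2.08 0.008
```

The fused estimate lies closer to the laser scanner's reading because that sensor is more precise, and the combined variance is lower than either input, which is the sense in which fusion increases reliability.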
The systems need to know their sensors in detail, which is covered by calibration procedures. The systems also need to make assumptions about the world, deduced from some model of the environment. As the environment normally changes and is also modified by the system's own actions, the models need to be adaptable and should be acquired at least partially with computer support to minimize human effort. Machine learning mechanisms are used to solve this task.
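At its simplest, calibration means estimating a mapping from raw sensor readings to physical quantities using known reference data. The sketch below fits a linear model (gain and offset) by ordinary least squares; it is a toy example with invented numbers, not the group's calibration procedure, which for cameras and laser scanners involves far richer models.

```python
def calibrate(raw, reference):
    """Estimate gain a and offset b so that reference ~ a*raw + b,
    via ordinary least squares on paired readings."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    var = sum((x - mean_x) ** 2 for x in raw)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical range sensor returning raw ticks; the true distances
# (in metres) are known from a calibration rig:
a, b = calibrate([10, 20, 30, 40], [1.05, 2.05, 3.05, 4.05])
print(a, b)  # 0.1 0.05
```

Once `a` and `b` are known, every later raw reading can be converted to metres with `a * raw + b`; the same idea generalizes to the intrinsic and extrinsic parameters of cameras.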
The interpretation of sensor data, possibly combined with explicit knowledge of the environment, is the key technology for operating autonomous systems. These interpretation strategies are also required in medical applications, where anatomical and medical knowledge is used to support the segmentation of medical images or to assist with diagnosis.
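A minimal illustration of knowledge-supported segmentation: prior knowledge (e.g. the expected intensity range of a tissue type) constrains which pixels are labeled as foreground. This is a deliberately simplistic sketch with invented values, not a realistic medical segmentation method.

```python
def segment(intensities, lo, hi):
    """Label values whose intensity falls in a range known a priori
    (e.g. from anatomical knowledge) as foreground (1), else background (0)."""
    return [1 if lo <= v <= hi else 0 for v in intensities]

# Hypothetical 1-D intensity profile; prior knowledge says the target
# tissue appears with intensities between 0.7 and 1.0:
mask = segment([0.1, 0.8, 0.9, 0.2, 0.85], 0.7, 1.0)
print(mask)  # [0, 1, 1, 0, 1]
```

Real systems replace the fixed threshold with learned or model-based priors, but the principle is the same: explicit knowledge narrows the space of plausible interpretations.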
Computer vision and image processing techniques, including color image processing, are of general interest beyond the context of autonomous systems.
The following topics provide details of the general ideas above.
Relevant publications of the team members are listed for each item.