
Cameras that can learn & understand what they are seeing


Published: 04 Jan 2021 08:04 PM

Intelligent cameras could be one step closer thanks to a research collaboration between the Universities of Bristol and Manchester, which has developed cameras that can learn and understand what they are seeing.

Roboticists and artificial intelligence (AI) researchers know there is a problem in how current systems sense and process the world. They are still combining sensors, such as digital cameras designed for recording images, with computing devices such as graphics processing units (GPUs) designed to accelerate graphics for video games.

This means AI systems perceive the world only after recording and transmitting visual information between sensors and processors. Yet much of what can be seen is irrelevant to the task at hand, such as the detail of leaves on roadside trees as an autonomous car passes by.

At the moment, however, all of this information is captured by the sensor in meticulous detail and transmitted onwards, clogging the system with irrelevant data, consuming power and taking processing time. A different approach is necessary to enable efficient vision for intelligent machines.
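The bottleneck described above can be illustrated with a toy sketch. This is not the collaboration's actual technique, just a hypothetical comparison: a naive pipeline transmits every pixel of every frame to the processor, while a simple in-sensor filter transmits only pixels that changed significantly between frames, cutting the data volume before it ever leaves the sensor. All frame sizes, thresholds and names here are illustrative assumptions.

```python
import random

# Hypothetical illustration (not the papers' method): compare the data
# volume of naive full-frame transmission with simple in-sensor change
# filtering that sends only pixels that moved by more than a threshold.

WIDTH, HEIGHT = 64, 64   # assumed toy sensor resolution
THRESHOLD = 30           # assumed minimum intensity change worth reporting

random.seed(0)

# Previous frame: random 8-bit intensities.
prev_frame = [[random.randint(0, 255) for _ in range(WIDTH)]
              for _ in range(HEIGHT)]

# Current frame: identical except for a small 10x10 region that changed
# (standing in for a moving object against a static background).
curr_frame = [row[:] for row in prev_frame]
for y in range(10, 20):
    for x in range(10, 20):
        curr_frame[y][x] = random.randint(0, 255)

# Naive pipeline: every pixel value is transmitted to the processor.
full_cost = WIDTH * HEIGHT

# In-sensor filtering: only significantly changed pixels are transmitted.
events = [
    (x, y, curr_frame[y][x])
    for y in range(HEIGHT)
    for x in range(WIDTH)
    if abs(curr_frame[y][x] - prev_frame[y][x]) >= THRESHOLD
]

print(f"full frame: {full_cost} values; filtered: {len(events)} values")
```

In this toy setup the filtered stream can never exceed the 100 pixels of the changed region, versus 4,096 values for the full frame, which is the kind of reduction that motivates doing some processing on the sensor itself.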

Two papers from the Bristol and Manchester collaboration have shown how sensing and learning can be combined to create novel cameras for AI systems.

Walterio Mayol-Cuevas, Professor in Robotics, Computer Vision and Mobile Systems at the University of Bristol and principal investigator (PI), commented: “To create efficient perceptual systems we need to push the boundaries beyond the ways we have been following so far.

“We can borrow inspiration from the way natural systems process the visual world — we do not perceive everything — our eyes and our brains work together to make sense of the world and in some cases, the eyes themselves do processing to help the brain reduce what is not relevant.”