In search and rescue operations, traditional thermal cameras are often unable to detect body heat because obstacles block the line of sight. Imagine, instead, a drone capable of seeing even behind obstacles. This is exactly what a team of researchers from Johannes Kepler University in Linz has built: the first prototype of a fully autonomous drone capable of seeing behind obstacles.
The need for a vision system capable of seeing behind obstacles
Most of the techniques currently used in rescue operations rely on helicopters with personnel on board. In the best of cases, remotely piloted drones equipped with thermal cameras capable of detecting the presence of living beings are used. These operations very often take place in extreme conditions caused by earthquakes or by collapses due to gas leaks (such as that of the Miami condominium). A drone that can see behind obstacles could be a valid solution to these problems. Think of a forest, where it is practically impossible to see through the crowns of the trees. The Johannes Kepler University drone, by contrast, manages to find people and animals even in the most adverse situations.
The Synthetic Aperture technique that allows the drone to see behind obstacles
What gives the drone, developed by the group led by Prof. Oliver Bimber, the ability to “see” behind obstacles is Airborne Optical Sectioning (AOS). This approach is based, in turn, on the Synthetic Aperture (SA) technique. Thanks to this technique it is possible to obtain images with an extremely shallow depth of field, as if they came from a camera with a very large aperture (on the order of meters, which would be physically impractical in applications of this type). With such a large aperture, in fact, only points lying on the focal plane remain sharp, while everything out of focus, including occluding foliage, is strongly blurred. The narrow apertures (on the order of millimeters) of traditional cameras, on the other hand, produce a large depth of field, which keeps the entire occluding volume sharp in the captured images and hides what lies behind it.
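The relationship between aperture and depth of field can be made concrete with the standard thin-lens approximation. The numbers below are purely illustrative (a hypothetical 50 mm lens and a 1 m synthetic aperture), not the actual parameters of the AOS system:

```python
# Approximate total depth of field via the thin-lens formula
# DOF ~ 2 * N * c * s^2 / f^2  (valid when the focus distance s >> f).
# All parameters here are hypothetical example values.
def depth_of_field(focal_length_m, f_number, focus_dist_m, coc_m=30e-6):
    """Total depth of field in meters.

    focal_length_m: lens focal length f
    f_number:       aperture ratio N = f / aperture_diameter
    focus_dist_m:   distance s at which the lens is focused
    coc_m:          acceptable circle of confusion c
    """
    return 2 * f_number * coc_m * focus_dist_m**2 / focal_length_m**2

# Conventional camera: 50 mm lens at f/8, focused at 30 m.
narrow_aperture_dof = depth_of_field(0.05, 8, 30)

# A synthetic aperture ~1 m wide with the same focal length acts like
# f/0.05: the depth of field collapses to a thin slice around the
# focal plane, so anything off that plane (e.g. foliage) blurs away.
synthetic_aperture_dof = depth_of_field(0.05, 0.05, 30)

print(f"f/8: {narrow_aperture_dof:.1f} m")           # very deep focus
print(f"synthetic: {synthetic_aperture_dof:.2f} m")  # thin focal slice
```

With these example numbers the conventional camera keeps well over a hundred meters of scene in focus, while the meter-wide synthetic aperture isolates a slice of roughly one meter.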
The "fake" large aperture diaphragm
Data from multiple small-aperture sensors (or from a single moving small-aperture sensor) are combined, allowing their collective behavior to approximate that of a single wide-aperture sensor and thereby improving resolution, depth of field, frame rate, contrast and signal-to-noise ratio. This approximation is based on unstructured light-field theory, which represents image pixels as 4D light rays in a 3D volume. As seen in the image above, the origins of these rays correspond to the positions of the respective cameras, while their directions are determined both by the poses of the drone along its flight path and by the intrinsic parameters of the cameras themselves.
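The core idea of combining many small-aperture views can be sketched as a shift-and-average operation: each image is shifted by the parallax expected at the chosen focal plane, so that targets on that plane re-align and stay sharp while occluders at other depths are smeared out. This toy 1D example is illustrative only; the real AOS pipeline projects rays using calibrated drone poses and camera intrinsics:

```python
import numpy as np

def integrate_focal_plane(images, cam_offsets_px, depth, ref_depth=1.0):
    """Average images after shifting by the parallax expected at `depth`.

    cam_offsets_px: camera offsets expressed as pixels of parallax at
    ref_depth; parallax scales as 1/depth for a fronto-parallel plane.
    """
    stack = []
    for img, off in zip(images, cam_offsets_px):
        shift = int(round(off * ref_depth / depth))
        stack.append(np.roll(img, -shift, axis=1))
    return np.mean(stack, axis=0)

# Toy scene: a hot target on the focal plane (depth 1) shows parallax
# equal to the camera offset; a closer occluder (depth 0.5) shows
# twice the parallax.
w = 64
offsets = [-4, -2, 0, 2, 4]
images = []
for off in offsets:
    img = np.zeros((1, w))
    img[0, 32 + off] = 1.0        # target on the focal plane
    img[0, 16 + 2 * off] = 1.0    # occluder at half the depth
    images.append(img)

focused = integrate_focal_plane(images, offsets, depth=1.0)
print(focused[0, 32])  # target re-aligns: full intensity 1.0
print(focused[0, 16])  # occluder is spread out: intensity 0.2
```

Focusing at depth 1.0 re-aligns the target in all five views (average intensity 1.0), while the occluder lands at a different pixel in each shifted view and is diluted to a fifth of its intensity, which is exactly how foliage fades away in the integrated thermal image.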
An Artificial Intelligence recognizes people
Once the acquired image has been correctly reconstructed and cleared of occlusions, a Computer Vision (CV) algorithm processes the thermal images to detect the presence of humans, animals or other heat sources. If the algorithm returns a possible (even weak) match, the drone automatically approaches to re-sample the region with better accuracy. At that point, it combines its own position sensors with data from the AOS and thermal cameras to pinpoint the exact spot where the potential victim to be rescued is located, and sends the coordinates to the rescue team.
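The detect-then-resample loop described above can be sketched as a simple decision rule. Everything here is a hypothetical stand-in: the thresholds, the toy "confidence" measure and the thermal frame are invented for illustration, whereas the actual system uses a trained classifier on the integrated thermal images:

```python
import numpy as np

WEAK, STRONG = 0.3, 0.7  # assumed confidence thresholds (hypothetical)

def classify_hotspots(thermal, temp_threshold=35.0):
    """Return a (confidence, pixel location) pair for the warmest region.

    Confidence here is a toy proxy: the fraction of the 5x5
    neighborhood around the hottest pixel that exceeds the threshold.
    """
    hot = thermal > temp_threshold
    if not hot.any():
        return 0.0, None
    y, x = np.unravel_index(np.argmax(thermal), thermal.shape)
    patch = hot[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
    return float(patch.mean()), (y, x)

def plan_action(confidence):
    """Mimic the article's behavior: weak matches trigger a closer pass."""
    if confidence >= STRONG:
        return "report coordinates to rescue team"
    if confidence >= WEAK:
        return "approach and re-sample region"
    return "continue search pattern"

# Toy frame: 20 degC background with a small warm blob.
frame = np.full((32, 32), 20.0)
frame[10:13, 10:13] = 36.5

conf, loc = classify_hotspots(frame)
print(plan_action(conf), loc)  # a weak match triggers re-sampling
```

The point of the sketch is the control flow, not the detector: a weak match does not get discarded, it changes the flight plan so the region is imaged again from closer range before coordinates are committed to the rescue team.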
The “magical” drone, able to see behind obstacles, promises to be a valid solution in search and rescue situations. In 17 experiments conducted in three different types of forest, across several months of the year and at different times of day, the drone identified 38 of 42 people, with an average accuracy of 86%.
Our drone allows search and rescue operations (SAR – Search And Rescue, ed) in remote areas without stable network coverage, because it transmits to the rescue team only the classification results that indicate detections, and can therefore operate over intermittent narrow-band connections (for example, via satellite). Once received, these results can also be visually enhanced for better interpretation on remote mobile devices.
Prof. Oliver Bimber
This is how Prof. Bimber commented on the results obtained in the test phase, paving the way for a truly innovative technology that, in the not-too-distant future, could make search and rescue operations far more reliable, faster and safer in emergency situations.