Algorithms can do some things better and, above all, much faster than humans – for example, recognise patterns in unstructured data in next to no time. In other things, however, humans are still superior to machines – for example, when flying drones through rough terrain. But the known flying objects of Swiss professor Davide Scaramuzza have what it takes to top even these capabilities – and thus make an important contribution in disaster situations, for example.
Davide Scaramuzza works intensively and successfully on drones – more precisely: on “autonomous, visually controlled flying micro robots”. Davide is a Professor of Robotics and Perception at the University of Zurich, where his research sits at the intersection of robotics, computer vision, and machine learning. Using standard cameras and event cameras, he aims to enable autonomous, agile navigation of micro drones in search-and-rescue applications.
For his research contributions, he has won prestigious awards, including a European Research Council (ERC) Consolidator Grant, the IEEE Robotics and Automation Society Early Career Award, a Google Research Award, and two Qualcomm Innovation Fellowships.
Known flying objects with previously unknown skills
Davide will be a guest at CIOmove in Switzerland on Sunday, when we visit Zurich Airport. There he will give us an introduction to the topic “Autonomous Drones in Disaster Zones” and explain how deep learning combined with visual control enables his drones to act autonomously.
The drones that Scaramuzza has been researching for years with his teams at the University of Zurich fly completely autonomously, without human assistance. They use neither GPS nor radar to orient themselves, relying instead on a camera-based visual system that calculates a collision-free course through unknown terrain in fractions of a second – at speeds of up to 40 kilometres per hour.
Scaramuzza's team describes the approach as follows: “State-of-the-art methods generally separate the navigation problem into subtasks: sensing, mapping, and planning. While this approach has proven successful at low speeds, the separation it builds upon can be problematic for high-speed navigation in cluttered environments. Indeed, the subtasks are executed sequentially, leading to increased processing latency and compounding of errors through the pipeline. Here we propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds, with purely onboard sensing and computation. The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion. This direct mapping drastically reduces processing latency and increases robustness to noisy and incomplete perception.”
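The receding-horizon principle described above can be sketched in a few lines. The following is a toy one-dimensional illustration (hypothetical function names and numbers, not the Zurich group's actual system): at each control tick the drone maps its current observation directly to a short trajectory, executes only the first segment, and then re-plans – with no separate mapping or global-planning stage in between.

```python
# Toy receding-horizon flight loop in a 1-D world (illustrative sketch only).

def sense(position, obstacles):
    """Distance from the drone to the nearest obstacle ahead of it."""
    return min((o - position for o in obstacles if o > position),
               default=float("inf"))

def plan(distance_ahead, horizon=3, max_step=1.0, margin=0.2):
    """Map one raw observation straight to a short, collision-free trajectory:
    full speed in open space, ever shorter steps as an obstacle nears."""
    safe = max(0.0, distance_ahead - margin)
    return [min(max_step, safe / horizon)] * horizon

def fly(start, goal, obstacles, ticks=200):
    position = start
    for _ in range(ticks):
        if position >= goal:
            break
        # Receding horizon: execute the first step of the plan, then re-plan.
        position += plan(sense(position, obstacles))[0]
    return position
```

In open space the drone covers ground at full step length; with an obstacle in the way it brakes smoothly and stops just short of it, because every tick re-plans from the freshest observation rather than from a stale map.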
Drone deployment in civil protection
With these skills, the autonomous mini-drones can explore forests, buildings, ruins or trains, for example, without colliding with the numerous obstacles in their way. In disaster situations, they can be used to find and rescue injured people. They can help with environmental monitoring in inaccessible areas, collecting and documenting environmental data and conditions. And, more mundanely: remote inspections in inaccessible places and dangerous environments can be carried out safely with the mini-drones.
Autonomous drones are suitable wherever fast reactions are necessary
Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. High speed is particularly important: since drone battery life is usually limited to 20–30 minutes, drones need to fly faster to cover longer distances. To do so, however, they need faster sensors and algorithms. Human pilots take years to learn the skills to navigate drones. What does it take to make drones navigate as well as, or even better than, human pilots? Autonomous, agile navigation through unknown, GPS-denied environments poses several challenges for robotics research in terms of perception, planning, learning, and control.
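The link between speed and coverage is simple arithmetic: for a fixed battery life, the distance a drone can cover scales linearly with its speed. A quick back-of-the-envelope check (the 20–30 minute battery figure is from the text above; the speeds are the ones quoted earlier, and the calculation ignores wind, hover time, and manoeuvring):

```python
# Distance covered on one battery charge, as a function of cruise speed.

def coverage_km(speed_kmh, battery_minutes):
    """Straight-line distance flown in one charge (no wind, no manoeuvres)."""
    return speed_kmh * battery_minutes / 60.0

print(coverage_km(40, 30))   # 40 km/h for 30 min -> 20.0 km
print(coverage_km(80, 30))   # doubling the speed doubles the range: 40.0 km
```

This is why faster flight directly translates into a larger search area per charge in a rescue mission.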
In his talk at CIOmove, Davide will show us how combining model-based and machine-learning methods with the power of new low-latency sensors, such as event cameras, can allow drones to achieve unprecedented speed and robustness while relying solely on onboard computing. We are very much looking forward to it!
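To give a flavour of why event cameras are so low-latency: instead of delivering full frames at a fixed rate, they asynchronously emit per-pixel events (typically an x/y position, a microsecond timestamp, and a brightness-change polarity) the moment something changes. The toy sketch below uses a hypothetical, simplified event layout to show how such a stream can be accumulated into a 2-D change map:

```python
# Toy accumulation of an event-camera stream into a per-pixel change map
# (illustrative data layout, not any real sensor's driver API).

def accumulate(events, width, height):
    """Sum event polarities per pixel into a simple 2-D change map."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t_us, polarity in events:   # polarity: +1 brighter, -1 darker
        frame[y][x] += polarity
    return frame

# Three events: pixel (1, 0) brightens twice, pixel (0, 1) darkens once.
events = [(1, 0, 10, +1), (1, 0, 25, +1), (0, 1, 40, -1)]
print(accumulate(events, 2, 2))   # [[0, 2], [-1, 0]]
```

Because each event can be processed the microsecond it arrives, rather than waiting tens of milliseconds for the next full frame, a drone's control loop can react far sooner – the property Davide's talk will build on.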