Efficient robot navigation inspired by honeybee learning flights | Nature
Abstract
Navigation is a crucial capability for both animals and robots. Although tiny flying insects can robustly navigate over long distances1, state-of-the-art robot navigation methods are computationally expensive and therefore restricted to large robots2,3. Here we propose ‘Bee-Nav’, a highly efficient navigation strategy inspired by the visual learning flights of honeybees4,5,6. In equivalent robotic learning flights, a tiny neural network is trained to map omnidirectional images to a home vector based on path integration. After learning, the robot can fly far away from home, come straight back using path integration and cancel integration drift using the visual homing network. Simulations showed that, for realistic path integration accuracies, the neural network requires training on only approximately 0.25–10.00% of the total flight area. In real-world indoor and outdoor experiments, a small drone successfully returned to within 0.5 m of home for 100% of 30–110-m flights and 70% of 200–600-m flights in windy conditions, using 3.4-kB and 42-kB neural networks, respectively. The proposed navigation strategy will be vital for resource-constrained robots that perform tasks while travelling from and to a home location. Furthermore, it provides new perspectives on the neuroethology of insect navigation, from how visual learning shapes homing trajectories to the nature of cognitive maps.
Main
Small robots are at present deprived of the autonomous navigation capabilities necessary for real-world applications. Resource-restricted robots, such as lightweight flying drones7,8, simply cannot carry or power the computational systems required for high-precision, map-based autonomous navigation2,3. Despite efforts towards improved computational efficiency, navigation based on detailed metric maps still requires a high-end laptop9 or a GPU-enabled embedded computer10. Efficiency can be improved by sacrificing map accuracy and storing the map as a topological graph, with nodes representing places and edges representing paths11,12. However, the robot still needs to recognize where it is and adjust the map accordingly, leading to increased computational requirements for larger trajectories11,13. This limits the navigation range of even the most efficient map-based robot navigation methods. The state of the art is a tiny flying robot that uses 500 kB of memory on a low-power AI chip for navigating in a 4 × 5-m area14.
Nature shows that extremely resource-efficient, long-range navigation is possible. Small insects such as honeybees robustly navigate up to several kilometres from their hive1. Their impressive navigation capabilities rely on two components15. The first is path integration16, which allows insects to estimate their position with respect to a starting point by integrating the directions and distances travelled. Because path integration is subject to increasing drift, insects also rely on a second component called view memory, which is the recall of visual landmarks and their relation to places of interest17. Path integration is by now well understood, down to the neuronal level18. By contrast, the precise workings of view memory and its interplay with path integration are less clear.
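The path-integration idea described above can be illustrated with a minimal sketch: each movement segment's direction and distance are accumulated into a net displacement, and the negated sum gives a vector pointing back to the start. The class and names below are illustrative, not from the paper, and this toy version ignores the sensing noise that causes the drift discussed above.

```python
import math

class PathIntegrator:
    """Toy dead-reckoning sketch: accumulate displacements so that
    the negated sum always points back to the starting point
    (the 'home vector'). Names are hypothetical, for illustration."""

    def __init__(self):
        self.x = 0.0  # net displacement east (m)
        self.y = 0.0  # net displacement north (m)

    def step(self, heading_rad, distance_m):
        # Integrate one movement segment (direction + distance travelled).
        self.x += distance_m * math.cos(heading_rad)
        self.y += distance_m * math.sin(heading_rad)

    def home_vector(self):
        # Vector from the current position back to the start.
        return (-self.x, -self.y)

# Example outbound journey: 100 m east, then 50 m north.
integ = PathIntegrator()
integ.step(0.0, 100.0)         # heading 0 rad = east
integ.step(math.pi / 2, 50.0)  # heading pi/2 rad = north
hx, hy = integ.home_vector()
# home vector: 100 m west and 50 m south, i.e. (-100.0, -50.0)
```

In a real agent each integration step carries angular and odometric error, so the estimated home vector drifts with journey length; this is exactly the error that view memory is used to cancel near home.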
Inspired by the navigational feats of insects, roboticists have proposed various insect-inspired navigation strategies. The predominant strategy is route-following, which typically relies on view memory to retrace the outbound trajectory during the return journey19,20,21,22,23,24. Route-following is a suitable strategy for navigating highly cluttered environments, but in open areas it can make the return journey unnecessarily long. Indeed, insects such as honeybees and desert ants tend to return home along a novel, straight path, even after long, tortuous outbound journeys25,26 (Fig. 1a). During the return journey, insects rely initially on path integration and then increasingly on view memory as they near home26,27,28.
Fig. 1: Illustration of the proposed robot navigation strategy, Bee-Nav, inspired by honeybee learning and foraging flights.
a, Before foraging, honeybees first perform ‘learning’ flights (dark-grey line) close to home (star). Subsequently, they can fly out far away from home (teal line) and come back in an almost straight line (orange and red lines). Scale bar, 100 m. b, In Bee-Nav, the robot also first performs a learning flight, capturing omnidirectional images while using path integration for maintaining a vector (orange arrows) pointing to the home location. A neural network is trained to map the images to the home vectors. The trained network encodes an implicit view memory within the learned homing area (LHA) enclosing the learning flight trajectory (dashed circle). c, After learning, the robot can execute a long outbound flight to perform a task of interest (teal line), whil