The Waggle Dance-assisted computer vision decoding tool

This phase involves the detection and localisation of Waggle Dances using computer vision algorithms. A 4K camera will point directly at the hive and record bee activity throughout the day. The video stream will be analysed by an algorithm that automatically detects the waggling bees and outputs their spatio-temporal locations as metadata, which will then be stored in a database. Each metadata record will consist of four entries: an identity, the horizontal and vertical coordinates with respect to the image plane, and the frame number. The identity will uniquely identify the Waggle Dance of a bee. Because a Waggle Dance lasts a few seconds, the metadata encoding the dance of a single bee will typically be a series of spatial coordinates over multiple video frames, all sharing the same unique identity.
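
To make the metadata format concrete, a record could be represented as in the following sketch; the field names and the in-memory Python representation are illustrative assumptions, not the final database schema:

    from dataclasses import dataclass

    @dataclass
    class WaggleDetection:
        dance_id: int   # unique identity of the detected Waggle Dance
        x: float        # horizontal coordinate on the image plane (pixels)
        y: float        # vertical coordinate on the image plane (pixels)
        frame: int      # frame number in the video stream

    # A dance lasting a few seconds becomes a series of records sharing the
    # same identity, one per frame in which the bee is waggling.
    dance_7 = [
        WaggleDetection(dance_id=7, x=1024.5, y=612.0, frame=1500),
        WaggleDetection(dance_id=7, x=1027.2, y=615.3, frame=1501),
        WaggleDetection(dance_id=7, x=1030.0, y=618.9, frame=1502),
    ]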

In the literature, Waggle Dance behaviours have been analysed using video detection and tracking algorithms as a pre-processing step, with ad-hoc algorithms for Waggle Dance detection built on top of them to analyse the dance behaviours [1-3]. The authors of [1] proposed a method that tracks a single bee at a time via manual initialisation and encodes the bee's behaviour with Markov models. Unfortunately, manual initialisation is cumbersome when a large amount of data has to be analysed. In [2], multiple bees were detected and tracked, and the duration and direction of waggling bees were estimated. In this case accurate and reliable detection and tracking were possible because the bees carried markers. Marking bees is a challenging operation, as it requires trained personnel to handle each bee, remove the hair from its thorax and attach the marker. In [3], a fully automated pipeline for markerless detection and tracking of multiple bees was proposed. This algorithm works on low-resolution videos with about 20 bees present simultaneously in each frame. However, the state of each bee was characterised only by her position and the frame number; the bee's orientation, an important feature for determining the direction of the Waggle Dance, was not estimated.

In this project we aim to generalise the algorithm for the localisation of the Waggle Dance by reducing the reliance on pre-processing, such as detection and tracking of individual bees, and by focusing on the key motion features that describe the dance itself. We aim to develop an algorithm that minimises the false negative rate of dance detection while maintaining an acceptable false positive rate. Human supervision will then be used to clean up spurious detections (false alarms) in order to create a database of de-noised and accurate Waggle Dance samples. On the one hand, these samples will be used to manually measure the dance features (e.g. duration and direction). On the other hand, the same samples will serve as training data for the subsequent version of this analysis tool, which will include a machine learning-based algorithm that decodes the dance features automatically.
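
The human supervision step can be sketched as follows; the grouping function and the review callback are hypothetical placeholders for whatever interface the annotators will actually use (e.g. replaying the video crop of each candidate dance):

    from collections import defaultdict

    def group_by_identity(detections):
        """Group per-frame detection records into candidate dances by identity."""
        dances = defaultdict(list)
        for d in detections:
            dances[d.dance_id].append(d)
        return dances

    def clean_detections(detections, review):
        """Keep only the candidate dances confirmed by a human reviewer.

        `review` is any callable that inspects the records of one candidate
        dance and returns True for a genuine Waggle Dance, False for a
        false alarm (spurious detection).
        """
        confirmed = {}
        for dance_id, records in group_by_identity(detections).items():
            if review(records):
                confirmed[dance_id] = sorted(records, key=lambda r: r.frame)
        return confirmed

The confirmed dances form the de-noised database from which duration and direction are measured manually and on which the learning-based decoder is later trained.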

This latter phase will form the core of our vision-based automated Waggle Dance detection and analysis system. Based on the annotations created during data collection, we aim to develop a regression model that outputs the dance features of interest (i.e. direction and duration) in a fully automated manner. Because of their effectiveness and flexibility on visual-recognition problems, we will explore DeepNet-based models for learning and inferring these features. In a DeepNet-based approach, we will devise an architecture that, given the video stream as input, provides the locations of the most likely dancing bees, along with their dance features, for each frame. These detections will then be temporally denoised via a tracking method in order to provide consistent measurements.
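
As an indication of the kind of DeepNet-based architecture we have in mind (one possible design under assumed input sizes, not the final model), a small fully convolutional network can output, for every frame and every spatial location, a dancing likelihood together with the sine and cosine of the waggle direction; the dance duration then follows from the temporal extent of the linked detections:

    import torch
    import torch.nn as nn

    class WaggleNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Three output channels: dance likelihood, sin(direction), cos(direction).
            self.head = nn.Conv2d(32, 3, kernel_size=1)

        def forward(self, frame):
            # frame: (batch, 3, H, W) -> dense per-location predictions
            out = self.head(self.backbone(frame))
            score = torch.sigmoid(out[:, 0])      # likelihood that a dance occurs here
            direction = torch.tanh(out[:, 1:3])   # (sin, cos) of the waggle direction
            return score, direction

    # Per-frame inference; the per-frame outputs would then be linked and
    # temporally denoised by the tracking stage to yield consistent measurements.
    model = WaggleNet()
    frame = torch.rand(1, 3, 216, 384)  # one downscaled video frame (assumed size)
    score_map, direction_map = model(frame)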

References

[1] A. Veeraraghavan, R. Chellappa and M. Srinivasan, “Shape-and-Behavior-Encoded Tracking of Bee Dances,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 463-476, Mar. 2008.

[2] F. Wario, B. Wild, M.J. Couvillon, R. Rojas and T. Landgraf, “Automatic methods for long-term tracking and the detection and decoding of communication dances in honeybees,” Frontiers in Ecology and Evolution, Sep. 2015.

[3] F. Poiesi and A. Cavallaro, “Tracking multiple high-density homogeneous targets,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 25, no. 4, pp. 623-637, Apr. 2015.