Towards Active Robotic Vision in Agriculture: A Deep Learning Approach to Visual Servoing in Occluded and Unstructured Protected Cropping Environments



View results in:
https://tapipedia.org/sites/default/files/towards_active_robotic_vision_in_agriculture.pdf
DOI: 
https://doi.org/10.1016/j.ifacol.2019.12.508
Provider: 
Licensing of resource: 
Rights subject to owner's permission
Type: 
conference paper
Author(s): 
Zapotezny-Anderson P.
Lehnert C.
Publisher(s): 
Description: 

3D Move To See (3DMTS) is a multi-perspective visual servoing method for unstructured and occluded environments, such as those encountered in robotic crop harvesting. This paper presents Deep-3DMTS, a deep learning method that uses a convolutional neural network (CNN) to achieve 3DMTS behaviour from a single camera perspective. The method is developed and validated in simulation against the standard 3DMTS approach. Deep-3DMTS is shown to perform equivalently to the standard 3DMTS baseline in guiding the end effector of a robotic arm to improve the view of occluded fruit (sweet peppers): the final end-effector position lies within 11.4 mm of the baseline, and fruit size in the image increases by an average factor of 17.8, versus 16.8 for the baseline.
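As a rough illustration of the idea summarized above, the sketch below shows a CNN that maps a single end-effector camera image to a unit 3D direction along which the arm could be stepped toward a less occluded view of the fruit. This is not the authors' implementation: the layer sizes, normalization, step size, and all names here are assumptions for illustration only.

    # Minimal sketch (assumed architecture, not the paper's code): a CNN that
    # predicts a 3D servoing direction from one RGB end-effector image.
    import torch
    import torch.nn as nn

    class DirectionCNN(nn.Module):
        """Maps an RGB image to a unit 3D direction for the end effector."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # -> (N, 64, 1, 1)
            )
            self.head = nn.Linear(64, 3)   # raw (dx, dy, dz) components

        def forward(self, img):
            v = self.head(self.features(img).flatten(1))
            # Normalize so the network outputs only a direction; the
            # controller chooses the step magnitude.
            return v / (v.norm(dim=1, keepdim=True) + 1e-8)

    # One servoing step: predict a direction from the current camera frame
    # and command a small end-effector displacement along it.
    model = DirectionCNN().eval()
    frame = torch.rand(1, 3, 224, 224)     # stand-in for a camera image
    with torch.no_grad():
        direction = model(frame)           # unit vector in the camera frame
    step_mm = 5.0                          # illustrative step size
    delta = step_mm * direction            # displacement sent to the arm

In the paper's setting, such a network would be trained on data generated by the multi-camera 3DMTS baseline, so that a single camera can reproduce the direction the camera array would have chosen; the training procedure is not sketched here.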

Publication year: 
2019
Keywords: 
Agriculture
Robotics
Visual servoing
Computer vision
Robot control
Deep learning
Convolutional neural networks