Towards Active Robotic Vision in Agriculture: A Deep Learning Approach to Visual Servoing in Occluded and Unstructured Protected Cropping Environments



View the results at:
https://tapipedia.org/sites/default/files/towards_active_robotic_vision_in_agriculture.pdf
DOI: 
https://doi.org/10.1016/j.ifacol.2019.12.508
Provider: 
Resource license: 
Rights subject to owner's permission
Type: 
Conference paper
Author(s): 
Zapotezny-Anderson P.
Lehnert C.
Editor(s): 
Description: 

3D Move To See (3DMTS) is a multi-perspective visual servoing method for unstructured and occluded environments, such as those encountered in robotic crop harvesting. This paper presents Deep-3DMTS, a deep learning method that uses a Convolutional Neural Network (CNN) to turn 3DMTS into a single-perspective approach. The novel method is developed and validated in simulation against the standard 3DMTS approach. Deep-3DMTS is shown to perform equivalently to the standard 3DMTS baseline in guiding the end effector of a robotic arm to improve the view of occluded fruit (sweet peppers): the end effector's final position lies within 11.4 mm of the baseline's, and the fruit size in the image increases by an average factor of 17.8, compared to 16.8 for the baseline.
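To make the described approach concrete, the following is a minimal sketch of the single-camera servoing idea: a small CNN regresses, from one RGB image, a direction for the end effector to move, standing in for the objective gradient that the multi-camera 3DMTS array would otherwise provide. PyTorch, the layer sizes, the 3-DoF output, and all names here are illustrative assumptions, not the authors' published architecture.

# Sketch of CNN-based single-perspective visual servoing (assumptions noted above).
import torch
import torch.nn as nn

class DirectionCNN(nn.Module):
    """Regresses a unit 3D direction from a single RGB image (hypothetical design)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # -> (N, 64, 1, 1)
        )
        self.head = nn.Linear(64, 3)           # predicted 3D move direction

    def forward(self, x):
        g = self.head(self.features(x).flatten(1))
        return g / (g.norm(dim=1, keepdim=True) + 1e-8)  # normalize to a unit vector

def servo_step(model, image, step_size=0.01):
    """One servoing step: a small Cartesian delta along the predicted direction.
    `image` is a (3, H, W) tensor from the end-effector camera (hypothetical helper)."""
    with torch.no_grad():
        direction = model(image.unsqueeze(0))[0]
    return step_size * direction

# Example: one step with a random frame standing in for a camera image.
model = DirectionCNN().eval()
frame = torch.rand(3, 224, 224)
delta = servo_step(model, frame)  # pass to the arm controller as a position increment

In a servo loop, this step would be repeated until the view of the target fruit stops improving, mirroring how the multi-camera baseline iteratively climbs its objective.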

Year of publication: 
2019
Keywords: 
Agriculture
Robotics
Visual servoing
Computer vision
Robot control
Deep learning
Convolutional neural networks