
Vision based robot-to-robot object handover

Bloisi D. D.;Pierri F.
2021-01-01

Abstract

This paper presents an autonomous robot-to-robot object handover in the presence of uncertainties and in the absence of explicit communication. Both the giver and receiver robots are equipped with an eye-in-hand depth camera. The object to be handed over is roughly positioned in the field of view of the giver robot's camera, and a deep-learning-based approach is adopted for detecting the object. The physical exchange relies on an estimate of the contact forces and an impedance control, which allows the receiver robot to perceive the presence of the object and the giver robot to recognize that the handover is complete. Experimental results, conducted on two collaborative 7-DoF manipulators in a partially structured environment, demonstrate the effectiveness of the proposed approach.
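The abstract describes the exchange logic as force-driven: the receiver infers the object's presence from estimated contact forces, and the giver infers handover completion from the drop in its sensed load. A minimal sketch of that logic is below; the threshold values and function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the force-threshold handover logic described in the
# abstract. The receiver closes its gripper once the estimated contact force
# exceeds a contact threshold; the giver releases once its sensed load drops,
# indicating the receiver is supporting the object. All thresholds are
# assumed values for illustration only.

CONTACT_THRESHOLD_N = 2.0   # assumed force (N) for the receiver to infer contact
RELEASE_THRESHOLD_N = 1.0   # assumed residual load (N) for the giver to release


def receiver_should_grasp(estimated_contact_force_n: float) -> bool:
    """Receiver perceives the object when the estimated contact force is high enough."""
    return estimated_contact_force_n > CONTACT_THRESHOLD_N


def giver_should_release(estimated_load_n: float) -> bool:
    """Giver recognizes handover completion when its sensed load falls off."""
    return estimated_load_n < RELEASE_THRESHOLD_N
```

In this reading, no explicit message passes between the robots: each arm acts only on its own force estimate, which matches the paper's claim of handover without explicit communication.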
2021
978-1-6654-3684-7
Files in this item:

File: Vision_based_robot-to-robot_object_handover.pdf
Access: open access
Type: Publisher's PDF
License: Publisher's version
Size: 657.42 kB
Format: Adobe PDF (View/Open)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11563/169199
Citations
  • PMC: ND
  • Scopus: 3
  • Web of Science: 0