Robust hand-eye coordination
Abstract: Industrial cameras are integrated into robotic systems to increase the flexibility of the robots. Such hand-eye coordination usually requires calibration of the cameras to compute the three-dimensional (3D) coordinates of positions in the robot space. This paper describes a new hand-eye coordination approach that does not require camera calibration. Instead, we propose the use of relative stereo disparity to compute the relative depth between the perceived objects. Incorporating the relative depth measure into the image space converts the hand-eye coordination problem into a linear transformation between the pseudo-3D image space and the 3D robot space. Moreover, the transformation matrix involved is square and can be easily estimated and updated using visual feedback. The proposed method is fast and simple, making it feasible for real-time visual feedback implementation. Furthermore, since no calibration is required, the proposed method is robust to substantial changes in the hand-eye system configuration. Experiments are conducted to verify the accuracy and robustness of the proposed method. The main contributions of this paper are: (i) introducing a stereo attribute that measures relative depth, (ii) formulating a pseudo-3D image space, (iii) relating the pseudo-3D image space to the robot space for computing the hand-eye transformation and (iv) realizing a robust hand-eye coordination system which incorporates visual feedback control.
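The abstract describes a linear transformation between a pseudo-3D image space (image coordinates augmented with a relative depth measure) and the 3D robot space, with a square transformation matrix estimated from observed correspondences. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm: it assumes homogeneous pseudo-3D image points (u, v, relative depth, 1) and homogeneous robot points, and recovers a 4x4 square transform by linear least squares; the function name and the batch least-squares formulation are assumptions for illustration.

```python
import numpy as np

def estimate_hand_eye_transform(image_pts, robot_pts):
    """Estimate a square 4x4 transform T such that T @ p_image ≈ p_robot
    in homogeneous coordinates.

    image_pts : (N, 3) array of pseudo-3D image points (u, v, relative depth)
    robot_pts : (N, 3) array of corresponding 3D robot-space points
    """
    n = len(image_pts)
    # Augment both point sets with a homogeneous coordinate.
    P = np.hstack([image_pts, np.ones((n, 1))])  # (N, 4)
    Q = np.hstack([robot_pts, np.ones((n, 1))])  # (N, 4)
    # Solve P @ X ≈ Q in the least-squares sense; T = X.T maps columns.
    X, *_ = np.linalg.lstsq(P, Q, rcond=None)
    return X.T  # square 4x4 matrix

# Synthetic check: generate correspondences from a known affine map
# and verify that the transform is recovered.
rng = np.random.default_rng(0)
T_true = np.eye(4)
T_true[:3, :] = rng.normal(size=(3, 4))  # arbitrary affine part
pts = rng.normal(size=(20, 3))
P_h = np.hstack([pts, np.ones((20, 1))])
robot = (P_h @ T_true.T)[:, :3]
T_est = estimate_hand_eye_transform(pts, robot)
```

In the paper's setting the matrix is updated online from visual feedback rather than solved in one batch; a recursive least-squares update over the same linear model would serve that role.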
Document Type: Research Article
Affiliations: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Publication date: 1996-01-01