A vision-based framework for the discovery-driven manipulation of non-rigid objects

Because the analytical expressions for representing non-rigid object structure and motion are severely underconstrained, current techniques for non-rigid object manipulation rely on physical object models known prior to sensing. Recently, however, psychophysical studies have revealed that humans are able to discover proper motor coordination skills from sensory input alone, without previously known physical models. In this paper, a robust, discovery-driven, vision-based framework for the robotic manipulation of non-rigid objects is developed and experimentally verified using various flexible linear objects. Employing the novel concept of relative elasticity, the algorithms derived within this framework are completely sensor-based, requiring no explicit, a priori physics-based models.
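
The abstract does not define relative elasticity explicitly. As a purely hypothetical illustration, the sketch below assumes it can be approximated from vision alone as each tracked segment's deformation per unit of applied gripper displacement, normalized across segments; the function name, inputs, and the formulation itself are assumptions for illustration, not the paper's method.

    # Hypothetical sketch: score segments of a flexible linear object by a
    # "relative elasticity" estimated purely from tracked image points,
    # with no physics-based model. The formulation (deformation per unit
    # of applied gripper displacement, normalized across segments) is an
    # assumption; the paper's actual definition is not given here.
    import numpy as np

    def relative_elasticity(points_before, points_after, gripper_displacement):
        """Estimate a unitless, relative elasticity score per tracked point.

        points_before, points_after: (N, 2) arrays of image coordinates of
            N feature points along the object, before/after a small push.
        gripper_displacement: magnitude (pixels) of the applied motion.
        """
        deformation = np.linalg.norm(points_after - points_before, axis=1)
        # Normalize by the applied displacement so the score reflects the
        # commanded motion rather than absolute image scale.
        score = deformation / max(gripper_displacement, 1e-9)
        # Express each point's compliance relative to the most compliant one.
        return score / max(score.max(), 1e-9)

    if __name__ == "__main__":
        before = np.array([[10.0, 50.0], [60.0, 52.0], [110.0, 55.0]])
        after = np.array([[10.5, 50.2], [63.0, 60.0], [118.0, 70.0]])
        print(relative_elasticity(before, after, gripper_displacement=12.0))

Such a score is relative rather than absolute, which matches the sensor-based spirit of the abstract: it ranks regions by observed compliance without requiring calibrated material parameters.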
Document Type: Research Article

Affiliations: 1: Department of Electrical Engineering, University of Virginia, Charlottesville, VA 22903-2442, USA 2: Electroglas, Inc., 2901 Coronado Drive, Santa Clara, CA 95054, USA

Publication date: 1996-01-01
