
A study of the adaptive gesture interface for the severely physically handicapped (Open Access)

Technology has already been developed that does not require a person to physically touch a mouse or a screen. This technology, such as the Kinect, relies on motion sensors to pick up movements and gestures and use them as inputs. However, these gestures are pre-programmed and require the same degree of coordination as input through any physical device, which means they are not immediately useful for those with motor dysfunction. Dr Ikushi Yoda of the National Institute of Advanced Industrial Science and Technology (AIST), Japan, is working to resolve this problem and aims to provide motion sensors that allow disabled people to smoothly access computers, among other devices. Yoda and his interdisciplinary team are analysing and categorising a wide variety of gestures from disabled people with motor dysfunction. Their aim is to programme motion-sensing devices to recognise these gestures, and to introduce flexibility into the devices to allow for a wider range of detection.

In order to devise new systems of input for users with various motor disabilities, Yoda and his team first had to gather data on the sorts of gestures those users could produce. To document these accurately, Yoda's team monitored a pool of disabled volunteers to collect movement data for classification and analysis. Yoda chose participants who represented a wide range of potential disabilities, so that many gestures of different types and from different parts of the body could be acquired and analysed. These data were classified according to body part and used to develop a modulised gesture recognition engine.
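The article does not describe the engine's internals, but a "modulised" design classified by body part suggests per-body-part recognisers behind a common dispatcher. A minimal sketch of that idea, with an entirely hypothetical input format (landmark tuples from a depth camera) and a made-up head-nod threshold:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

# One frame of tracked positions for a single body part, e.g. (x, y, z)
# landmarks from a depth camera. This input format is an assumption,
# not taken from the study.
Frame = List[Tuple[float, float, float]]

@dataclass
class GestureModule:
    """Recognises gestures for one body part (head, hand, foot, ...)."""
    body_part: str
    classify: Callable[[Frame], Optional[str]]  # returns a gesture label or None

class GestureEngine:
    """Dispatches each body part's frames to the matching recognition module."""
    def __init__(self) -> None:
        self.modules: Dict[str, GestureModule] = {}

    def register(self, module: GestureModule) -> None:
        self.modules[module.body_part] = module

    def recognise(self, body_part: str, frame: Frame) -> Optional[str]:
        module = self.modules.get(body_part)
        return module.classify(frame) if module else None

# Illustrative head module: label a "nod" when vertical travel within the
# frame exceeds a threshold (the 0.05 value is invented for illustration).
def head_classifier(frame: Frame) -> Optional[str]:
    ys = [p[1] for p in frame]
    return "nod" if max(ys) - min(ys) > 0.05 else None

engine = GestureEngine()
engine.register(GestureModule("head", head_classifier))
```

The appeal of this structure for the study's setting is that a new gesture class, captured from a different body part, can be added by registering one module without touching the rest of the engine.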

In addition to gathering important data, Yoda is trying to keep the system affordable by using relatively inexpensive motion-detection cameras: any system that is eventually deployed must be accessible to the majority of its target disabled users. Yoda and his team have previously developed an interface based on head gestures for individuals with severe cerebral palsy who are unable to operate the standard controls of their motorised wheelchairs.
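The article does not specify how the head-gesture interface maps movements to commands. One plausible sketch, assuming the camera reports a head-tilt angle per frame, is to smooth the samples and emit a discrete event when a sustained tilt crosses a threshold; the smoothing window, threshold, and event names below are all invented for illustration:

```python
from collections import deque
from typing import Optional

class HeadTiltSwitch:
    """Maps smoothed head-tilt angles to discrete input events.

    A hypothetical sketch: smoothing guards against tremor or jitter,
    which matters for users with involuntary movements. The window size
    and threshold are illustrative, not values from the study.
    """
    def __init__(self, threshold_deg: float = 15.0, window: int = 5) -> None:
        self.threshold = threshold_deg
        self.samples: deque = deque(maxlen=window)

    def update(self, tilt_deg: float) -> Optional[str]:
        """Feed one tilt sample (degrees, positive = right); return an event or None."""
        self.samples.append(tilt_deg)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.threshold:
            return "select_right"
        if avg < -self.threshold:
            return "select_left"
        return None
```

Because the threshold and window are plain parameters, such a switch could in principle be tuned per user, in the spirit of the adaptive, flexible detection the project aims for.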

Keywords: CEREBRAL PALSY; GESTURES; HEAD GESTURES; KINECT; MODULISED GESTURE RECOGNITION ENGINE; MOTION DETECTION CAMERAS; MOTION SENSORS; MOTION-SENSING DEVICES; MOTOR DISABILITIES; MOTOR DYSFUNCTION; MOTORISED WHEELCHAIRS; MOVEMENT DATA; PHYSICAL DEVICE; WIDER RANGE OF DETECTION

Document Type: Research Article

Publication date: June 1, 2018

About this publication: Impact is a series of high-quality, open-access and free-to-access science reports designed to enable the dissemination of research impact to key stakeholders, communicating the impact and relevance of research projects across a wide range of subjects in a format that is easily accessible to academic and stakeholder audiences. The publication features content from the world's leading research councils, policy groups, universities and research projects. Impact is published under a CC-BY Creative Commons licence.
