Wednesday, January 26, 2011

Untethered Gesture-based Robotic Control

Untethered gesture-based robotic control systems fall into two general categories: 1) autonomous recognition of human gesture leading to predefined robotic behavior, and 2) wireless, sensor-based robotic control. In the first case, the robot itself recognizes gestures using optical sensors incorporated in its exoskeleton to detect motion. A detected motion, such as a waving hand, is compared against a predefined set of recognizable motions "programmed" into the robot, and a match triggers the corresponding robot behavior. In the second case, motion sensors worn by a human "controller" detect motion and wirelessly relay data to the robot to control its actions directly. In short, a human controller waving their hand results in the robot waving its hand. The two main distinctions between these cases are the location of the sensors and the nature of the robot control. I would use the classic Delphi method to identify the technologies required to develop an untethered gesture-based robotic control system.
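The first case can be sketched as a simple lookup: a recognized gesture name is matched against the robot's predefined set, and a match triggers the corresponding behavior. The gesture names, behavior names, and dispatch mechanism below are illustrative assumptions, not a real robot API.

```python
# Hypothetical sketch of case 1: autonomous gesture recognition.
# A detected gesture is compared against a predefined, "programmed-in"
# table of recognizable motions; a match initiates the matching behavior.

GESTURE_BEHAVIORS = {
    "wave": "wave_back",
    "point_left": "turn_left",
    "point_right": "turn_right",
    "halt": "stop",
}

def respond_to_gesture(detected_gesture):
    """Return the robot behavior for a recognized gesture, or None if
    the motion is not in the predefined set."""
    return GESTURE_BEHAVIORS.get(detected_gesture)

print(respond_to_gesture("wave"))   # a recognized gesture
print(respond_to_gesture("shrug"))  # an unrecognized gesture
```

Case 2 differs only in where this mapping runs: the wearable sensors stream raw motion data over the wireless link, and the robot mirrors the motion rather than selecting from a fixed behavior table.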
Two impediments to the development of an untethered gesture-based control system are financial and technical limitations. Development would require a large financial commitment, typically beyond the means of most research organizations. Moreover, it would require collaboration among a vast array of research and development organizations specializing in engineering, robotics, networks, and program management. Technical limitations might include battery life, robot weight and size, wireless network bandwidth, and manufacturing and production methods. Political support for an untethered gesture-based control system might come from entities that routinely perform dangerous tasks, such as the military, search and rescue teams, mining operations, and firefighters. Likewise, economic forces will spur research and development of supporting technologies and drive down costs.
