Wednesday, January 26, 2011

Untethered Gesture-based Robotic Control

Untethered gesture-based robotic control systems fall into two general categories: (1) autonomous recognition of human gestures leading to predefined robotic behavior, and (2) wireless, sensor-based robotic control. In the first case, the robot itself recognizes gestures: optical sensors incorporated in the robot exoskeleton detect motion, and a detected motion, such as a waving hand, is compared with a predefined set of recognizable motions “programmed” into the robot, which then initiates the corresponding behavior. In the second case, motion sensors worn by a human “controller” detect motion and wirelessly relay the data to the robot to control its actions; in short, the human controller waving a hand results in the robot waving its hand. The two main distinctions between these cases are the location of the sensors and the nature of the robot control. I would use the classic Delphi method to identify the technologies required to develop an untethered gesture-based robotic control system.
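As a rough illustration of the second category, the Python sketch below maps readings from a wearable motion sensor to commands relayed wirelessly to the robot. The sensor-reading function, the robot’s network address, and the threshold-based gesture classifier are all hypothetical placeholders rather than details of any particular system.

    # Sketch: wearable-sensor gesture control relayed wirelessly to a robot.
    # read_accelerometer() and ROBOT_ADDR are hypothetical stand-ins for a
    # real wearable-sensor driver and the robot's actual network endpoint.
    import socket
    import time

    ROBOT_ADDR = ("192.168.1.50", 9000)   # hypothetical robot endpoint
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def read_accelerometer():
        """Placeholder for a worn 3-axis accelerometer sample (x, y, z in g)."""
        return (0.0, 0.0, 1.0)

    def classify(sample):
        """Crude threshold classifier: large lateral swings are treated as a wave."""
        x, y, z = sample
        return "WAVE_HAND" if abs(x) > 1.5 else "IDLE"

    while True:
        command = classify(read_accelerometer())
        if command != "IDLE":
            # Relay the recognized gesture so the robot can perform the
            # matching predefined behavior (e.g., waving back).
            sock.sendto(command.encode(), ROBOT_ADDR)
        time.sleep(0.05)   # poll the sensor at roughly 20 Hz
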
Two impediments to the development of an untethered gesture-based control system are financial and technical limitations. Development of such a system would require a large financial commitment, typically beyond the means of most research organizations. Moreover, it would require collaboration among a wide array of research and development organizations specializing in engineering, robotics, networking, and program management. Technical limitations might include battery life, robot weight and size, wireless network bandwidth, and manufacturing and production methods. Political support for an untethered gesture-based control system might come from groups that routinely perform dangerous tasks, such as the military, search and rescue teams, mining operations, and firefighters. Likewise, economic forces will spur research and development of the supporting technologies and drive down their cost.

Saturday, January 22, 2011

Henry Markram's Blue Brain Project

Henry Markram presents his research team’s effort to create a computer model of the human brain using a supercomputer (Blue Brain). By employing mathematical and graphical simulation, his team seeks to model the function of 10,000 brain neurons and to build a three-dimensional model of their structure and interaction. Three main motivations drive this research. The first is to provide a facility for studying brain function that does not involve animal testing. The second is to embody enough knowledge of brain function in a model that an understanding of human social dynamics can be gained from it. The last is to further our understanding of the mental disorders that affect two billion people and of how the drugs used to treat those disorders interact with the brain. Currently, the drugs used to treat mental disorders are largely developed from empirical evidence; that is, the brain/drug interaction is not understood. We know that a drug alleviates the symptoms of a particular disorder, but not how it does so. Moreover, not all disorders have an identified drug treatment. By modeling brain function and brain/drug interaction, more treatments may be identified.
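As a toy illustration of what “modeling the function of a neuron” means in software, the Python sketch below steps a single leaky integrate-and-fire neuron through time. This is not the Blue Brain Project’s method, which relies on far more detailed, morphologically accurate models running on a supercomputer; the equation and parameter values here are simplified, textbook-style placeholders.

    # Toy leaky integrate-and-fire neuron: the membrane voltage drifts toward
    # rest, is pushed up by an input current, and "spikes" when it crosses a
    # threshold. All values are illustrative, not taken from Blue Brain.

    def simulate_lif(current=2.0, r_m=10.0, dt=0.1, steps=1000,
                     tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        """Integrate one neuron for steps*dt ms; return the spike times in ms."""
        v = v_rest
        spikes = []
        for step in range(steps):
            # Voltage decays toward rest while the input current drives it up.
            dv = (-(v - v_rest) + r_m * current) / tau
            v += dv * dt
            if v >= v_thresh:          # threshold crossing = spike
                spikes.append(step * dt)
                v = v_reset            # reset after firing
        return spikes

    print(simulate_lif())   # prints a handful of spike times over 100 ms
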
The implications of treating the mental disorders that affect two billion people are far reaching. While it is difficult to precisely measure a person’s mood, feelings, and emotions, the quality of life of those suffering from a mental disorder would be greatly improved by eliminating the debilitating effects of such disorders. In short, their “happiness quotient” could be expected to increase. Research has shown that increased mental and emotional comfort brings increased productivity, which leads to an improved standard of living for a society. Societies with a higher standard of living tend to engage in fewer conflicts and to adopt political means to resolve difficult issues.
http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html

Tuesday, January 11, 2011

Bayesian Networks and Gesture-Based Human-Machine Interfaces

The 2010 Horizon Report highlights gesture-based computing as an emerging technology with a five-year horizon. The main goal of gesture-based computing is to provide an untethered human-machine interface (HMI) capable of capturing and interpreting the wide range of human motion and emotion. Current gesture-recognition systems typically employ a single “camera” or sensor built around a CMOS (complementary metal-oxide-semiconductor) integrated circuit to detect a motion, such as a pointing finger, or a facial gesture, such as a smile. The resolution (in megapixels) of readily available CMOS sensors provides a data stream sufficient for numerous commercial uses, and such sensors can be found in electronic games, automobiles, and industrial applications.
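
As a simple illustration of how a CMOS sensor’s raw pixel stream becomes a motion signal, the Python sketch below flags motion by differencing consecutive frames. The frames are simulated arrays rather than output from a real sensor driver, and production gesture-recognition systems layer tracking and classification on top of a step like this.

    # Sketch: frame differencing over a (simulated) CMOS pixel stream.
    import numpy as np

    def motion_detected(prev_frame, curr_frame, pixel_thresh=25, area_thresh=0.02):
        """Flag motion when enough pixels change between consecutive frames."""
        diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
        changed = (diff > pixel_thresh).mean()   # fraction of pixels that changed
        return changed > area_thresh

    # Two fake 8-bit grayscale frames: the bright block shifts sideways.
    frame_a = np.zeros((120, 160), dtype=np.uint8)
    frame_b = np.zeros((120, 160), dtype=np.uint8)
    frame_a[40:80, 40:80] = 200
    frame_b[40:80, 60:100] = 200

    print(motion_detected(frame_a, frame_b))   # True: the block moved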

While CMOS manufacturing technology has advanced enough to make widespread sensor use economically feasible, interpreting the motions or gestures those sensors capture requires advanced cognitive abilities: the system must infer the meaning of a motion and anticipate what follows it. Just as we have learned the meaning of a goodbye wave and to anticipate the departure of the gesturer, Bayesian probabilistic modeling, or Bayesian networks (BNs), provides a range of techniques well suited to learning the states of a gesture, inferring its meaning, and predicting its outcome. Unlike classical probabilistic methods, BNs provide a predictive capability that is revised as new information arrives. As a motion is initiated and proceeds to completion, BN techniques incorporate each new observation to exclude unlikely interpretations and converge on the meaning of the motion or gesture.
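
The Python sketch below illustrates this updating idea in the simplest possible form: a belief over a few candidate gestures is revised as each new motion cue is observed, and unlikely interpretations quickly lose probability mass. The gestures, observation symbols, and likelihood values are invented for illustration and are not drawn from any particular system; a full Bayesian network would also model the dependencies among the cues themselves.

    # Sketch: sequential Bayesian updating of a belief over candidate gestures.

    gestures = ["goodbye_wave", "pointing", "beckoning"]
    prior = {g: 1.0 / len(gestures) for g in gestures}

    # P(observation | gesture): how likely each coarse motion cue is per gesture.
    likelihood = {
        "goodbye_wave": {"hand_raised": 0.9, "side_to_side": 0.8, "finger_extended": 0.1},
        "pointing":     {"hand_raised": 0.6, "side_to_side": 0.1, "finger_extended": 0.9},
        "beckoning":    {"hand_raised": 0.7, "side_to_side": 0.3, "finger_extended": 0.2},
    }

    def update(belief, observation):
        """One Bayesian update step: multiply by the likelihood, then renormalize."""
        posterior = {g: belief[g] * likelihood[g][observation] for g in belief}
        total = sum(posterior.values())
        return {g: p / total for g, p in posterior.items()}

    belief = dict(prior)
    for obs in ["hand_raised", "side_to_side", "side_to_side"]:
        belief = update(belief, obs)
        print(obs, {g: round(p, 2) for g, p in belief.items()})
    # Repeated side-to-side motion drives the belief toward "goodbye_wave",
    # mirroring how new information excludes unlikely interpretations.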