Struck-by threats are a leading cause of injury in unstructured, hazardous environments. To escape an oncoming threat, a human operator must perform an agile, dynamic response (threat-evasion), selecting a direction of travel from a full 360-degree range. Threat-evasion therefore introduces greater dimensionality in human motion than other forms of locomotion. Smart robotics, specifically wearable robots, show great promise in augmenting performance by leveraging the agility of human operators while maintaining their safety in the environment.
Existing lower limb wearable robots focus on either sagittal or frontal plane assistance for steady-state locomotion or specific tasks. There is a lack of understanding of how to apply assistance through a wearable device for more dynamic, nonlinear motions such as threat-evasion. This thesis explores a variety of techniques for modeling this high-dimensional motion from human intent in order to inform the development of potential assistance models. A human-subjects experiment was conducted to collect an array of sensor data from a subject performing threat-evasion in eight directions of travel.
While analytical methods are common for applying control through a lower limb wearable robot in the sagittal and frontal planes, machine learning techniques, specifically intent recognition systems, offer great potential for quantifying the intent of human motion during threat-evasion. The first aim of this work is to design, optimize, and validate an intent recognition system that predicts the onset of threat-evasive movement and estimates an operator's direction of travel. This aim builds an understanding of how to customize assistance for threat-evasion. Utilizing center-of-mass kinematics and joint-level biomechanics, the second aim reduces the dimensionality of the threat-evasion motion characterized in the first aim by decomposing it into human motion primitives.