SUMMARY
Lower-limb exoskeletons, rigid or soft devices that provide physical assistance to the user, show promise for restoring and augmenting human movement. However, current state-of-the-art exoskeleton control primarily addresses consistent, time-repeatable tasks and device-specific, state-machine-based transitions, standing in stark contrast to the fluidity and variability of natural human movement. As I demonstrate in this work, even at its theoretical best, this control paradigm cannot handle the uncertain, ever-changing environments of daily life.

To address this gap, I extend controllers based on deep learning estimates of physiological state to operate across the full range of human activities and to generalize to novel activities. I show that, when deployed on a hip and knee exoskeleton, these controllers can augment human performance across tasks and time-varying conditions, pointing toward task-agnostic, user-independent control. Training these models, however, is device-specific and highly costly in terms of resources and personnel, which threatens their real-world viability. I therefore also present a novel framework that uses deep domain adaptation to reduce or eliminate the need for costly device-specific data. When deployed on an exoskeleton in real time, these data-limited models still achieved performance comparable to that of models trained with full access to device-specific data.

Together, these advances are a promising step toward exoskeletons that overcome the critical task- and device-specific barriers to everyday use outside the laboratory, and thereby realize their transformative potential to aid ordinary people.