SUMMARY
Multi-robot systems are finding their way into an increasing number of applications, such as precision agriculture, environmental monitoring, and search and rescue. These applications typically involve the long-term deployment of heterogeneous robot teams in unstructured, dynamic environments where the robots are required to collaborate in executing a variety of tasks. Consequently, from a control-theoretic point of view, many modelling assumptions that were well-suited for static, laboratory-like settings are now bound to be violated as the robots encounter unforeseen circumstances. Moreover, by definition, achieving long-duration autonomy requires eliminating human interventions, which typically occur in response to failures. These insights necessitate the development of new frameworks for the coordination of robot teams that emphasize both adaptiveness and robustness with respect to environmental disturbances, and that are constraint-driven so as to prevent failures and safety violations. Toward this end, we demonstrate how learning methods can be intertwined with constraint-driven, control-theoretic approaches in the development of frameworks geared towards the long-duration autonomy of heterogeneous robot teams, i.e., teams in which each robot inherently exhibits different capabilities. Specifically, we begin by introducing a framework for robust control synthesis designed to leverage learning approaches for disturbance estimation. Moreover, since coordinating a robot team inherently requires assigning tasks to robots, we then introduce a heterogeneity model for robot teams and a task allocation and execution method that uses data to adaptively update, on the fly, the suitability of each robot for the tasks at hand. Finally, we discuss how reinforcement learning can be combined with the developed control-synthesis methods to safely learn new tasks.