In the past two decades, the robotics community has made tremendous progress on physically collaborative robots. The main motivation is to combine the capacity of robots to perform repetitive and burdensome tasks accurately with the human ability to make high-level decisions. Furthermore, human-robot interaction becomes more seamless when the robot is endowed with cognitive capacities such as human-intention recognition and decision making under uncertainty; the robot's higher autonomy reduces the human's burden of supervisory control. For instance, robots that recognize human intentions through interaction (using haptics, kinematics, gestures, etc.) do not need complicated interfaces in which the human must constantly provide control inputs.

Despite the success of this new paradigm in manufacturing, tele-operation, and navigation, little work has exploited such shared autonomy in prosthetic devices. Rather than being treated as an active agent with potential cognitive capabilities, the prosthetic device is considered a passive tool. Most control methods rely on decoding electromyography (EMG) signals from the muscles to provide the user with an interface, without consideration for the underlying human intention. In this fashion, the user must make muscle contractions whenever a movement is desired, which results in a high control burden. Moreover, the performance of these methods is limited because the controllers are tuned for specific arm configurations and do not generalize well, mainly due to the fact that EMG signals are sensitive to the arm configuration. Therefore, most of the work on prosthetic devices has concentrated on hands for grasping and manipulation, and research on the control of prosthetic arms remains quite limited.
Finally, physical interaction with the environment using prosthetic arms has remained an untackled challenge.

The main motivation of this proposal is to consider the prosthetic arm as a physically collaborative robot with cognitive capacities: a robotic arm that recognizes the human intention (e.g., the target of a reaching movement) from partial human kinematics, e.g., the three degrees of freedom of the shoulder. This approach allows us to extend the effective solutions obtained in collaborative robotics to the field of prosthetics. We will also leverage the fact that amputees can learn new skills using joints that are kinematically redundant in able-bodied individuals; it is therefore hypothesized that these joints may carry information about the intended target of a reaching motion. Such devices can benefit from a co-adaptation mechanism in which the human learns how to express his/her intention through kinematic behavior and the robot learns how to infer the intention from that behavior. To investigate this, we propose novel experimental setups and plan to collect human demonstrations that will be used to develop computational models of human goal-oriented behavior exploiting bodily redundancies, for integration in a prosthesis. Finally, we will explore compliant control and haptic communication (e.g., the internal forces between the device and the human user) in robotic prosthetic arms. Besides kinematics, haptic information can be used to infer human intentions with higher precision and to perform the task with a higher level of assistance.
Location: ISIR, Sorbonne Université, Paris, France [May 2020 – Nov. 2021]
Supervisors: Prof. Nathanael Jarrassé and Prof. Guillaume Morel
Funding: Individual fellowship grant awarded by the Swiss National Science Foundation
My role in this project: Scientific researcher