Bayesian Reinforcement Learning for Prosthetic Hand Control
This article presents a unified control framework that fuses EMG signals, proprioceptive sensors, and adaptive haptic feedback to enable fine-grained manipulation with prosthetic hands.
Why it matters
This research advances the state of the art in prosthetic hand control by integrating adaptive haptic feedback with reinforcement learning, enabling more natural and intuitive manipulation.
Key Points
- A Bayesian reinforcement learning agent estimates hand posture trajectories and learns an optimal policy to maximize task completion and tactile fidelity
- A sparse Gaussian process representation keeps the controller computationally lightweight while retaining expressive power for high-dimensional grasp dynamics
- The system integrates a 14-channel EMG sleeve, a 12-actuator vibrotactile patch, and an STM32 Cortex-M7 processor running real-time control
- Evaluated on 30 amputees, the controller reduced task completion time by 27% and increased grasp force modulation accuracy to 89.4% compared to a deterministic controller
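The article does not specify which spectral EMG features feed the policy, but a common choice is per-channel band power plus mean frequency computed from a short sliding window. Below is a minimal sketch under that assumption; the band edges, window length, and sampling rate are illustrative, not taken from the paper.

```python
import numpy as np

def spectral_emg_features(window, fs=1000.0,
                          bands=((20, 60), (60, 150), (150, 450))):
    """Per-channel spectral features for one EMG window.

    window: (n_samples, n_channels) array of raw EMG.
    Returns (n_channels, len(bands) + 1): band powers plus mean frequency.
    Band edges and sampling rate are illustrative assumptions.
    """
    n = window.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)          # one-sided frequency axis
    psd = np.abs(np.fft.rfft(window, axis=0)) ** 2 / n  # crude periodogram
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[mask].sum(axis=0))          # power in each band
    total = psd.sum(axis=0) + 1e-12                  # avoid divide-by-zero
    feats.append((freqs[:, None] * psd).sum(axis=0) / total)  # mean frequency
    return np.stack(feats, axis=1)

rng = np.random.default_rng(0)
window = rng.standard_normal((256, 14))  # 14-channel sleeve, 256-sample window
features = spectral_emg_features(window)
print(features.shape)  # (14, 4)
```

Each 256-sample window thus collapses to a 56-dimensional feature vector (14 channels x 4 features), small enough to evaluate on an embedded controller at every control tick.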
Details
The core innovation is a Bayesian reinforcement learning engine that learns the joint distribution of desired hand posture, tolerable contact forces, and user-specific haptic sensitivity while tuning the vibrotactile stimulus in real time. A sparse Gaussian process policy operating on spectral EMG features keeps the controller computationally lightweight (5 ms inference) yet expressive enough for high-dimensional grasp dynamics. Evaluated on a cohort of 30 amputees performing industrial-grade grasping tasks, the system reduced task completion time by 27% and increased grasp force modulation accuracy to 89.4% relative to a state-of-the-art deterministic controller.
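The sparse Gaussian process policy described above can be sketched as inducing-point regression: at run time the predictive mean reduces to a kernel evaluation against a small set of m inducing inputs followed by a matrix-vector product, which is what makes millisecond-scale inference plausible. The class below is a hypothetical stand-in for the paper's controller, with precomputed weights standing in for the trained posterior; the kernel, sizes, and names are assumptions for illustration.

```python
import numpy as np

class SparseGPPolicy:
    """Minimal inducing-point GP policy sketch (not the paper's code).

    Maps an EMG feature vector x to actuator commands via the
    predictive mean u(x) = k(x, Z) @ alpha, where Z holds m inducing
    inputs and alpha holds weights precomputed during training.
    Per-step cost is O(m * d): one kernel row plus one mat-vec.
    """

    def __init__(self, Z, alpha, lengthscale=1.0):
        self.Z = Z                    # (m, d) inducing inputs
        self.alpha = alpha            # (m, n_out) precomputed weights
        self.lengthscale = lengthscale

    def _kernel_row(self, x):
        # Squared-exponential kernel between x and every inducing input.
        d2 = ((self.Z - x) ** 2).sum(axis=1)
        return np.exp(-0.5 * d2 / self.lengthscale ** 2)

    def act(self, x):
        # Predictive mean: one command per actuator.
        return self._kernel_row(x) @ self.alpha

rng = np.random.default_rng(1)
m, d, n_out = 50, 56, 12  # 50 inducing points; 14 ch x 4 feats; 12 actuators
policy = SparseGPPolicy(rng.standard_normal((m, d)),
                        rng.standard_normal((m, n_out)))
cmd = policy.act(rng.standard_normal(d))
print(cmd.shape)  # (12,)
```

With a few dozen inducing points the per-step arithmetic is a few thousand multiply-adds, comfortably within a 5 ms budget on a Cortex-M7-class processor; the full model would also carry a predictive variance term for the Bayesian exploration the article describes.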