Four Ways AI Is Making Prosthetic Tech Smarter

Artificial intelligence is all over the news lately, and it’ll soon be all over the journal Prosthesis. The publication is soliciting papers for an upcoming special edition focused on “the latest research, technologies, and innovations in the field of AI-assisted prosthetics and rehabilitation.” That issue won’t be published for quite a while, though, and we’re impatient. So we did a quick survey of the scientific literature and scoured our own archives to sketch, in very broad strokes, a picture of the current frontiers in AI-enhanced prosthetic tech.

When we say “artificial intelligence,” by the way, we’re not merely referring to standard data gathering and analysis. We’re referring to systems that have the ability to get smarter and change their behavior over time—systems that can “learn,” in essence. Prosthetic devices that are equipped with this capability hold the potential to integrate more seamlessly with our own selves, adapting along with us as our habits and bodies change.

We sorted the research into four main buckets, corresponding to varying applications of AI within prosthetic devices. We begin with:

1. AI That Helps Prosthetics Read and React to the External Environment

One type of AI operates in a manner somewhat akin to a self-driving car: It gathers information from the outside world, learns to identify potential hazards, and makes real-time adjustments to promote safe, comfortable interactions. Exhibit A in this class might be the prosthetic leg that’s under development at North Carolina State University. This experimental device uses computer vision and other sensors to understand the surface that’s being walked on; the AI evaluates that input and uses it to maintain a more natural gait. An early prototype was calibrated to distinguish among six walking environments: tile, brick, concrete, grass, and stairs going up or down. This is different from the type of AI in a microprocessor knee, because the NC State leg responds to external stimuli, whereas an MPK acts upon the spatial coordinates of the prosthesis itself. Both prostheses are making similar decisions, but they’re crunching different data sets to get there.
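
For the curious, here’s a toy sketch of what a terrain classifier of this sort might look like in code. To be clear, everything below (the architecture, the image size, the class names beyond the six listed above) is our own assumption for illustration, not the NC State team’s actual implementation.

```python
# A minimal sketch of a six-class terrain classifier, loosely inspired by the
# NC State approach described above. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

class TerrainClassifier(nn.Module):
    def __init__(self, num_classes: int = len(TERRAINS)):
        super().__init__()
        # Small CNN over camera frames; a real system would fuse other
        # sensor data (e.g., IMU readings) as well.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return self.head(x)  # raw logits; softmax applied at inference

# Inference on one dummy 64x64 RGB frame from a leg-mounted camera.
model = TerrainClassifier()
frame = torch.rand(1, 3, 64, 64)
probs = torch.softmax(model(frame), dim=1)
print(TERRAINS[int(probs.argmax())], float(probs.max()))
```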

“Smart skin” is another example of this type of AI, with potential applications for upper-limb prosthetic devices. One leading prototype combines touch sensitivity with an onboard learning system that helps the skin react appropriately to stimuli. Another uses tiny, multilayered sensors in artificial fingertips to read force, temperature, and moisture and convert that data into electrical signals. And a third experimental smart skin integrates five layers of sensory fabric with a neural network that supports real-time perception of the surface that’s being touched.
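
To make the idea concrete, here’s a hypothetical sketch of the perception step: simulated fingertip readings of force, temperature, and moisture are stacked into a feature vector and fed to a small neural network that names the surface being touched. The sensor grid, surface labels, and network size are all assumptions on our part.

```python
# Hypothetical sketch of the sensing side of a "smart skin." All details
# (taxel grid, channels, labels, network) are illustrative assumptions.
import torch
import torch.nn as nn

SURFACES = ["smooth", "rough", "soft", "wet"]

def read_fingertip() -> torch.Tensor:
    """Stand-in for multilayered fingertip sensors: one
    (force, temperature, moisture) triple per taxel in a 4x4 grid."""
    return torch.rand(16, 3)

perception = nn.Sequential(
    nn.Flatten(),               # 16 taxels x 3 channels -> 48 features
    nn.Linear(48, 32), nn.ReLU(),
    nn.Linear(32, len(SURFACES)),
)

logits = perception(read_fingertip().unsqueeze(0))
print(SURFACES[int(logits.argmax())])
```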

2. AI That Helps Prosthetics Read and React to the Wearer’s Body State

The most pertinent examples in this AI category are smart sockets. These devices are equipped with sensors that detect volume changes in the residual limb over time, then automatically adjust the socket to maintain a secure, comfortable fit. All use some form of AI that makes them responsive to the user’s tendencies, enabling the socket to “learn” whether you prefer a tighter or looser fit, or to anticipate individual patterns in your behavior. These devices will respond one way for an active person who walks four or five miles a day, and another way for a more sedentary person who averages only two or three thousand steps.
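
Here’s a deliberately simple sketch of the kind of “learning” a smart socket might do. The pressure values, the update rule, and the activity threshold below are illustrative guesses, not any manufacturer’s algorithm.

```python
# Illustrative sketch: every manual override nudges a learned pressure
# setpoint, and high-activity days get a snugger target fit. All numbers
# and the update rule are assumptions for illustration only.
class SocketFitController:
    def __init__(self, setpoint_kpa: float = 30.0, learning_rate: float = 0.1):
        self.setpoint_kpa = setpoint_kpa      # learned preferred pressure
        self.learning_rate = learning_rate

    def record_manual_adjustment(self, chosen_kpa: float) -> None:
        # Move the learned setpoint toward what the user actually chose.
        self.setpoint_kpa += self.learning_rate * (chosen_kpa - self.setpoint_kpa)

    def target_pressure(self, steps_today: int) -> float:
        # An active day (roughly 4-5 miles, or 10,000+ steps) gets a
        # slightly tighter fit than a sedentary one.
        activity_bonus = 2.0 if steps_today > 10_000 else 0.0
        return self.setpoint_kpa + activity_bonus

socket = SocketFitController()
socket.record_manual_adjustment(33.0)   # wearer tightened the socket today
print(round(socket.target_pressure(steps_today=12_000), 1))
```

The point of the moving setpoint is that the socket never needs to be told your preference outright; it infers it from what you keep choosing.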

3. AI That Helps Prosthetics Read and React to the Wearer’s Intent

This seems to be the most robust frontier for AI-enhanced prosthetics. One high-profile example is the Esper Hand, which made the cover of Time Magazine a few months ago. It uses standard electromyographic sensors that monitor muscle impulses and translate them into gestures, grips, and other motions. However, the Esper includes AI modules that “learn,” over time, to translate those signals with ever-increasing precision. As its predictions become faster and more accurate, the hand becomes easier to use. Research at the University of Newfoundland, University of Erlangen-Nuremberg, and elsewhere is rooted in the same approach, combining standard electromyographic interfaces with AI enhancements.
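
To give a flavor of the general approach (this is our own rough sketch, not the Esper Hand’s actual software), here’s a toy decoder: each short window of EMG is reduced to per-channel amplitudes, a linear classifier guesses the grip, and an online update lets the mapping sharpen with use. The channel count, grip labels, and model are all assumptions.

```python
# A hedged sketch of EMG intent decoding with online learning. The
# partial_fit() calls are what let the decoder keep adapting to the user.
import numpy as np
from sklearn.linear_model import SGDClassifier

GRIPS = ["rest", "pinch", "power_grip", "point"]

def rms_features(emg_window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per channel over a ~200 ms window."""
    return np.sqrt((emg_window ** 2).mean(axis=0))

decoder = SGDClassifier(loss="log_loss")
rng = np.random.default_rng(0)

# Initial calibration: a handful of labeled windows per grip (synthetic here).
X = rng.random((80, 8))                       # 80 windows x 8 EMG channels
y = rng.integers(0, len(GRIPS), size=80)      # placeholder labels
decoder.partial_fit(X, y, classes=np.arange(len(GRIPS)))

# Later, each confirmed prediction becomes new training data, so the
# mapping from muscle signal to gesture keeps improving with use.
window = rng.random((400, 8))                 # raw samples x channels
features = rms_features(window).reshape(1, -1)
predicted = int(decoder.predict(features)[0])
decoder.partial_fit(features, [predicted])    # online update
print(GRIPS[predicted])
```

The online update is the key trick: every grip the user confirms becomes fresh training data, which is what “learning over time” means in practice.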

The Utah Bionic Leg builds intent detection into a lower-limb prosthesis. It goes beyond the type of AI that exists in a microprocessor knee, which monitors the spatial coordinates of the prosthesis, crunches the data, and makes ongoing adjustments to maintain a natural gait, optimize energy efficiency, and avoid unstable positions that might lead to a fall. In the Utah Leg’s more powerful AI system, additional sensors gather input from the muscles in the residual limb and correlate those signals with the user’s intent. That extra layer of data supports even more natural, intuitive motion than a standard MPK.
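
That “extra layer of data” idea can be expressed very compactly. In the hypothetical sketch below, the kinematic quantities an MPK already tracks are simply concatenated with EMG amplitudes from the residual limb, so a single intent model reasons over both streams at once. The specific features are our guesses, not the Utah team’s.

```python
# Sketch of sensor fusion for intent detection, under our own assumptions.
import numpy as np

def fuse(kinematics: np.ndarray, emg: np.ndarray) -> np.ndarray:
    """Joint feature vector: knee state plus muscle-signal amplitudes."""
    return np.concatenate([kinematics, emg])

kinematics = np.array([12.0, -0.4, 310.0])   # angle (deg), rad/s, load (N)
emg = np.array([0.12, 0.55, 0.08, 0.31])     # per-channel RMS amplitudes
features = fuse(kinematics, emg)             # input to an intent classifier
print(features.shape)                        # (7,) combined feature vector
```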

A different model bypasses the muscles altogether and taps directly into the peripheral nerves. Researchers at the University of Minnesota have achieved good results with a prototype limb that uses a nerve implant to read and interpret signals coming directly from the brain. Over time, the onboard AI learns to correlate specific nerve signals with particular gestures, enabling users to exert fine motor control over individual fingers. The nerve-interface approach is also at the heart of new research involving patients who use osseointegrated prosthetic limbs.
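
Here’s a toy version of the decoding problem (the Minnesota team’s actual pipeline is far more sophisticated, and everything below, from channel counts to the regression model, is our own stand-in): a model learns a mapping from nerve-signal features to per-finger flexion, which is what makes individual-finger control possible.

```python
# Hypothetical sketch of nerve-to-finger decoding as multi-output regression.
import numpy as np
from sklearn.linear_model import Ridge

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
rng = np.random.default_rng(1)

# Calibration pairs: nerve-signal features -> finger poses (synthetic here).
nerve_features = rng.random((200, 16))        # 16 implant channels
finger_flexion = rng.random((200, 5))         # 0 = open, 1 = fully flexed

model = Ridge().fit(nerve_features, finger_flexion)

live = rng.random((1, 16))                    # one new nerve-signal sample
pose = model.predict(live)[0]
for name, value in zip(FINGERS, pose):
    print(f"{name}: {value:.2f}")
```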

4. AI That Helps Prosthetics Re-Calibrate Themselves

One stumbling block for neuroprosthetic devices that involve implants is that they require periodic tuning. Muscle and nerve tissues change over time, which inevitably affects the performance of the prosthesis. Moreover, the implants or other sensing componentry may undergo changes of their own, necessitating readjustment. These and other factors can erode the device’s performance, requiring repeated re-calibration, which in turn costs time, money, and aggravation.

A team of researchers at ETH Zürich is working on an AI system that can automatically reset calibration parameters, reducing the downtime and effort involved in human-mediated adjustments. We won’t pretend to understand everything they’re talking about here; when one of a paper’s core ideas is described as “Gaussian process-based Bayesian optimization,” we’ve far exceeded the limits of our understanding. But we include the example here as an illustration of the type of AI research that hasn’t made many headlines yet—but which might turn up in the special edition of Prosthesis a year or two from now.
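
That said, the core loop is possible to sketch at toy scale. In the hypothetical example below, a Gaussian process models how decoding quality varies with a single calibration parameter, and a Bayesian-optimization loop keeps testing the value the model is most optimistic about. The scoring function and every number here are invented for illustration; the point is simply that good settings can be found in a handful of automatic trials rather than a long manual tuning session.

```python
# Toy Gaussian process-based Bayesian optimization for auto-recalibration.
# All quantities are invented; decoder_score() stands in for a real
# measurement of decoding accuracy at a given calibration setting.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def decoder_score(gain: float) -> float:
    """Stand-in for measured decoding accuracy at a given sensor gain."""
    return -(gain - 0.63) ** 2 + 0.9          # unknown to the optimizer

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
tried_x = [[0.1], [0.9]]                      # two seed measurements
tried_y = [decoder_score(x[0]) for x in tried_x]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
for _ in range(8):                            # eight quick auto-tuning trials
    gp.fit(np.array(tried_x), np.array(tried_y))
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 1.5 * std                    # upper-confidence-bound rule
    nxt = float(candidates[int(ucb.argmax())][0])
    tried_x.append([nxt])
    tried_y.append(decoder_score(nxt))

best = tried_x[int(np.argmax(tried_y))][0]
print(f"auto-calibrated gain = {best:.2f}")   # should land near 0.63
```

The upper-confidence-bound rule used here is just one common acquisition strategy; the appeal of the Gaussian process is that it quantifies its own uncertainty, which lets the optimizer balance trying new settings against refining ones that already look good.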
