We’re sharing this article from Bionics for Everyone, a super-informative Canadian site that we recently discovered. Founder Wayne Williams (who authored this post) created Bionics for Everyone to educate people about the potential of bionic technologies, with the goal of making them universally available.
This particular article focuses on the current state of the art in brain-controlled bionic hands. However, similar interfaces are also being developed for lower-limb prosthetic devices, so the technology may eventually apply to both upper- and lower-limb amputees.
You can read the original article at Bionics for Everyone. You’ll also find loads of other information there, including pricing comparisons and product demos. Our thanks to Wayne Williams for sharing this content with us.
Our brains are intimately connected to our bodies by a sophisticated network of nerves and touch receptors. Even the simple act of grasping an object involves a fully integrated system of remarkable complexity and nuance. It was only a matter of time before we sought to incorporate our bionic hands into this system. That time has arrived, and our key to success is an advanced neural interface.
The Limitations of Myoelectric Control Systems
Myoelectric arms/hands use sensors placed on the skin’s surface to detect muscle movements in the residual limb. One type of movement causes the bionic fingers to open; another causes them to close. This type of control system is unquestionably useful. For example, consider this young lady, who lost both her hands to meningitis when she was only 15 months old. Here she is pouring a glass of water using her myoelectric Hero Arms from Open Bionics:
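To make the control scheme concrete, here is a minimal sketch of how such a two-channel controller might map muscle activity to commands. It is a toy model with an assumed threshold, not any device's actual firmware:

```python
# A minimal, entirely hypothetical sketch of surface-sensor control: two
# EMG channels, one mapped to "open" and one to "close", each triggering
# when its normalized amplitude crosses a calibrated threshold.

def classify_emg(open_channel, close_channel, threshold=0.5):
    """Map normalized EMG amplitudes (0 to 1) to a hand command."""
    if open_channel > threshold and open_channel > close_channel:
        return "open"
    if close_channel > threshold and close_channel > open_channel:
        return "close"
    return "hold"  # no confident signal: keep the current position

print(classify_emg(0.8, 0.1))  # prints "open"
print(classify_emg(0.2, 0.7))  # prints "close"
print(classify_emg(0.3, 0.3))  # prints "hold"
```

Real systems add filtering, rectification, and per-user calibration on top of raw amplitudes, but the basic idea is the same: distinct muscle movements become distinct commands.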
This and the many tasks she demonstrates in other videos would be extremely difficult, if not impossible, without this technology. But here are a couple of GIFs, captured at the 2016 Cybathlon event, demonstrating some myoelectric shortcomings. In the first, the user has difficulty grasping a cone-shaped object. In the second, the user is unable to pick up a clothespin in a position that lets him pin an article of clothing to the clothesline.
You may think these are merely examples of poor electromechanical dexterity, but there is more to it than that. The bionic hand being used here has a feature called "automatic finger stalling": when a finger encounters a certain amount of resistance from an object, it stalls, while the hand's other fingers continue to close until they also meet resistance. This is very similar to how our natural hands behave when grasping an irregularly shaped object.
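The stalling behavior can be sketched in a few lines. This is a toy model with an invented force threshold and a simplified one-number position per finger, not a real controller:

```python
# Illustrative sketch of "automatic finger stalling": on each control
# tick, every finger keeps closing until the force sensed at its
# fingertip exceeds a stall threshold. Stalled fingers hold position
# while the others continue, so the hand conforms to irregular shapes.

STALL_FORCE = 2.0  # newtons; an assumed threshold, varies by device

def close_step(positions, forces, step=0.1):
    """Advance each finger one increment unless it has met resistance."""
    return [
        pos if force >= STALL_FORCE else min(pos + step, 1.0)
        for pos, force in zip(positions, forces)
    ]

# Thumb and index have met the object; the remaining fingers keep closing.
positions = [0.4, 0.4, 0.4, 0.4, 0.4]
forces = [2.5, 2.1, 0.0, 0.0, 0.0]
print(close_step(positions, forces))
```

Run repeatedly, this loop lets each finger wrap around whatever part of the object it happens to touch first.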
Now, picking up a cone is not easy. But the inability of the user to make a better attempt at this task is in large part a myoelectric control problem. Either she lacks the training to attempt a grip where finger stalling will help her, or she knows what to do but can’t make her bionic hand do it.
The failed attempt to grasp the clothespin is even more telling. When we grab something like a clothespin with our natural hand, we don't attempt to pick it up so precisely. We don't need to. As long as we grab it in the right general area, we can use our sense of touch combined with multiple small adjustments to achieve the perfect grip. We don't even need to look at it as we do this; our sense of touch alone allows us to complete the task.
The motivation for using an advanced neural interface is to connect a bionic hand to the brain in a way that restores this natural intuitive control and sensory feedback.
Scientists are also trying to make bionic control systems more reliable. The fact is, using skin-surface sensors to detect the muscles' electrical signals is incredibly challenging. Inadvertent muscle movements can trigger the wrong commands. Sensors can shift away from the muscles they are trying to monitor or lose contact with the skin. Changes in temperature, humidity, and the state of the residual limb can all degrade the quality of myoelectric signals.

Repeated failures like these shake the user's confidence in a prosthetic, making it less useful than it should be.
How Neural Interfaces Are Used in Bionic Hands
We would explain this to you in text, but there happens to be a PBS video that does a great job of this already. It involves an early prototype built by Case Western Reserve University:
As the video shows, the sensors on the bionic hand transmit signals to electrodes that have been surgically embedded in the user’s arm. The electrodes stimulate the nerves that they are attached to, which then transmit the requisite information to the brain.
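The feedback path the video describes can be sketched roughly as follows. Everything here (the function name, the amplitude range) is an assumption for illustration, not a detail of the actual Case Western prototype:

```python
# A hypothetical illustration of the sensory-feedback path: a fingertip
# pressure reading from the bionic hand is mapped to a nerve-stimulation
# amplitude within a calibrated per-patient range, so that harder contact
# produces a stronger touch sensation.

def pressure_to_stimulation(pressure, min_amp=0.2, max_amp=1.0):
    """Linearly scale a normalized pressure (0 to 1) into an amplitude
    between the patient's sensory threshold and their comfort limit."""
    pressure = max(0.0, min(1.0, pressure))  # clamp out sensor noise
    return min_amp + pressure * (max_amp - min_amp)

print(pressure_to_stimulation(0.0))  # no contact: threshold amplitude
print(pressure_to_stimulation(1.0))  # firm grip: maximum amplitude
```

In practice the mapping is calibrated per patient and per electrode, since the same stimulation can feel very different depending on exactly which nerve fibers it reaches.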
That’s how the sensory feedback portion of the system works. The methods used to exert control over the bionic hand vary. In some cases, a myoelectric control system is still used. The advantage of this hybrid model over a traditional myoelectric system is that the user’s attempted actions are better informed by the sensory feedback.
More recent prototypes use embedded electrodes for two-way communication. Commands travel from the brain along the nerves to the embedded electrodes, and from the electrodes to the bionic hand's control system. This should eliminate many of the problems caused by relying on myoelectric sensors.
Remaining Challenges

1) Eliminate Invasive Surgery
Surgery is surgery, with all its attendant risks of scarring and infection. It is also expensive. Fortunately, researchers at the University of Pittsburgh recently discovered that existing spinal cord stimulators can be used to produce a sense of touch in missing limbs.
With 50,000 people already receiving implants of these stimulators every year in the U.S. alone, that means there are doctors all over the world who are already trained in this procedure. Even more important, it is a simple outpatient procedure, thereby avoiding the potential complications of more invasive surgery.
There is still a fair bit of work to be done before this can be used as a neural interface for a bionic hand, but one patient said about her experience with the new technique: “It’s amazing to feel things move. I know there’s no arm there, but I can feel it. It’s pretty exciting.”
2) Improve Signal Processing and Interpretation
Signals sent from bionic sensors to electrodes must be processed and interpreted to determine the correct stimulation of the attached nerves. Similarly, commands sent from the brain to the nerves must be processed and interpreted to determine the signals that need to be sent to the bionic hand’s control system.
All of this is not only processing-intensive; it also requires sophisticated artificial intelligence. Currently, the AI routines are still too specific. That is, they require too much task-specific training. For example, a baseball and an orange are quite similar. The baseball has seams, whereas the orange is heavier with pitted skin. But both objects are a similar size and shape. When neural interface AI routines have been trained to handle a baseball, they should know roughly how to handle an orange without having to be retrained all over again.
Right now, that is not the case with many similar tasks, which imposes too high a training cost on all involved. There is no easy solution to this problem. We just have to keep pushing.

3) Reduce Costs

In what is a near-universal complaint about all bionic technologies, we need to get costs down to make the technologies more accessible. If we need a little hope on this goal, look no further than traditional myoelectric systems. A few years ago, even the most basic myoelectric arm cost tens of thousands of dollars. Now there are 3D-printed myoelectric devices available for a fraction of that price.