AI is slowly finding its way into prosthetic limbs
Artificial Intelligence (AI) is transforming our lives, and not in the mad-scientist or apocalyptic science-fiction kind of way. Well, at least not yet. From home automation to driverless cars, AI is at the forefront of many pioneering technologies, all aimed at making life easier.
Now, AI is slowly finding its way into prosthetic limbs. Again, it’s not at the level you may have seen in movies (like Dr. Octopus and his metallic tentacles in Spider-Man), but prosthetic limbs and their functioning have improved markedly.
Researchers at Newcastle University, United Kingdom, have engineered a limb—a hand, specifically—that can “see” objects for itself.
This is enabled by a camera affixed to the hand’s knuckles: it photographs the object in front of the hand, and the hand reacts by grasping it, all in a matter of milliseconds. According to the abstract of the study submitted to the Journal of Neural Engineering, the engineers used a deep-learning-based artificial vision system to improve the hand’s functionality.
The engineers trained a neural network (a system loosely modelled on the human nervous system) on pictures of about 500 graspable objects. The objects were sorted into four classes of grasps, and each object was photographed from 72 angles to help the network identify it in detail.
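The study itself used a deep convolutional network, but the underlying idea (mapping an image to one of four grasp classes) can be illustrated with a much simpler sketch. The toy below trains a one-layer softmax classifier on synthetic 8x8 "images"; the image size, noise level, and training settings are invented for illustration, and only the four classes and the 72 views per object come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 4           # four grasp classes, as in the study
VIEWS_PER_CLASS = 72      # mirrors the 72 images per object mentioned above
IMG_SIZE = 8 * 8          # toy 8x8 "images" instead of real camera frames

# Synthetic data: each grasp class gets a distinct mean pattern plus noise.
prototypes = rng.normal(size=(NUM_CLASSES, IMG_SIZE))
X = np.vstack([
    prototypes[c] + 0.5 * rng.normal(size=(VIEWS_PER_CLASS, IMG_SIZE))
    for c in range(NUM_CLASSES)
])
y = np.repeat(np.arange(NUM_CLASSES), VIEWS_PER_CLASS)

def softmax(z):
    # Numerically stable softmax over class scores.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One-layer softmax classifier trained by gradient descent.
W = np.zeros((IMG_SIZE, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)
onehot = np.eye(NUM_CLASSES)[y]
for _ in range(200):
    probs = softmax(X @ W + b)
    grad = probs - onehot                  # cross-entropy gradient
    W -= 0.01 * X.T @ grad / len(X)
    b -= 0.01 * grad.mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A real prosthetic system would replace the synthetic arrays with camera frames and the single linear layer with a deep network, but the training loop has the same shape: score each image against the grasp classes, compare with the labels, and nudge the weights.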
When the hand was tested on two amputee volunteers, they were able to grasp and move the targeted objects with an 88% success rate, the abstract noted. The user can also override the bionic hand and control grasping actions themselves.
This study could pave the way for more advanced breakthroughs in prosthetic limbs, such as connecting them to nerve endings to enable direct control over the limb. Now, that sounds like something out of Spider-Man after all.
> Camera affixed to the knuckles takes a picture of the object in front
> Neural networks help the limb identify the object and grasp it
> Reaction time is just a few milliseconds
> Programmed to perform four different “grasps”: picking up a cup, holding a TV remote, gripping objects with the thumb and two fingers, or pinching with the thumb and first finger