TinyML-Based Hand Gesture Recognition for Prosthetic Control

Session Number

2

Advisor(s)

Phadmakar Patankar, IMSA

Location

A129

Discipline

Engineering

Start Date

April 15, 2026, 11:10 AM

End Date

April 15, 2026, 11:55 AM

Abstract

This project investigates how machine learning and computer vision can be used to recognize human hand gestures and translate them into control signals for a prosthetic or robotic hand. A program was developed that uses a webcam and computer vision tools such as MediaPipe to track hand landmarks and collect gesture data in real time. These landmark positions are used to train a TinyML model capable of distinguishing among multiple hand gestures, including open hand, fist, pointing, thumbs-up, thumbs-down, and other common hand positions. The goal of this project is to demonstrate that gesture recognition can provide a more natural method of controlling prosthetic devices. Because the system interprets hand movement in software, it can eventually be connected to a robotic hand driven by servo motors, allowing gestures to be translated into physical motion. This research explores the potential of combining artificial intelligence, computer vision, and engineering to create more intuitive prosthetic control systems and improve accessibility for individuals who rely on assistive technologies.
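To make the data-collection stage concrete, the sketch below shows one way landmark capture might be implemented with MediaPipe's Hands solution: each webcam frame yields 21 three-dimensional hand landmarks, which are flattened into a 63-value feature row and appended to a CSV file along with a gesture label. The label name and output path are illustrative assumptions, not details taken from the project.

```python
# Illustrative sketch of the data-collection stage: capture webcam frames,
# extract 21 hand landmarks with MediaPipe, and log them with a gesture label.
# GESTURE_LABEL and OUTPUT_CSV are assumed names for demonstration only.
import csv
import cv2
import mediapipe as mp

GESTURE_LABEL = "fist"           # assumed label for this recording session
OUTPUT_CSV = "gesture_data.csv"  # assumed output file

mp_hands = mp.solutions.hands

with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.5) as hands, \
     open(OUTPUT_CSV, "a", newline="") as f:
    writer = csv.writer(f)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # Flatten 21 landmarks (x, y, z) into a 63-value feature row.
            row = [coord for p in lm for coord in (p.x, p.y, p.z)]
            writer.writerow(row + [GESTURE_LABEL])
        cv2.imshow("collecting", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop recording
            break
    cap.release()
    cv2.destroyAllWindows()
```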
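The classification stage could then look like the following sketch: a small dense network over the 63 landmark coordinates, trained with Keras and converted to a quantized TensorFlow Lite model of the kind typically deployed in TinyML settings. The layer sizes, class count, and file names are assumptions chosen for illustration, not the project's actual architecture, and the random arrays stand in for data loaded from the collected CSV.

```python
# Illustrative sketch of the TinyML stage: train a small gesture classifier
# on 63 landmark features, then convert it to TensorFlow Lite.
# Layer sizes, class count, and file names are assumptions, not project facts.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 5  # e.g. open hand, fist, pointing, thumbs-up, thumbs-down

# Placeholder data; in practice this would be loaded from the CSV
# produced during landmark collection.
X_train = np.random.rand(500, 63).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=500)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(63,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)

# Convert to a compact TFLite model; the default optimization applies
# post-training quantization, shrinking the model for small devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("gesture_model.tflite", "wb") as f:
    f.write(converter.convert())
```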
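Finally, since the abstract describes the servo-driven robotic hand as a future connection, the short sketch below shows only one hypothetical way such a bridge might look: a predicted gesture class is mapped to a table of servo angles and sent to a microcontroller over a serial link. The port name, baud rate, and angle values are all assumptions.

```python
# Hypothetical bridge to the robotic hand: map a predicted gesture class to
# servo angles and send them over serial. Port, baud rate, and the angle
# table are illustrative assumptions, not the project's implementation.
import serial  # pyserial

GESTURE_TO_ANGLES = {                       # assumed finger-servo angles
    "open_hand": [0, 0, 0, 0, 0],
    "fist":      [180, 180, 180, 180, 180],
    "pointing":  [180, 0, 180, 180, 180],
}

def send_gesture(port: serial.Serial, gesture: str) -> None:
    """Encode one gesture's servo angles as a comma-separated line."""
    angles = GESTURE_TO_ANGLES[gesture]
    port.write((",".join(map(str, angles)) + "\n").encode("ascii"))

if __name__ == "__main__":
    # Assumed serial device; a microcontroller sketch would parse the line
    # and drive the servos accordingly.
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
        send_gesture(port, "fist")
```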
