
User Specific Speech Recognition For Controlling a 3D Printed Prosthetic Hand

Project ID: 5875-1-21
Year: 2021
Student/s: Noa Tykochinsky, Itay Wengrowicz
Supervisor/s: Shunit Polinsky

In this work we present a voice-control algorithm for a 3D printed prosthetic hand.
The goal was to create an inexpensive and accessible solution: an algorithm that recognizes and verifies the voice of the prosthetic hand user. The system recognizes the words the user says in real time and detects the activation words and keywords that represent the hand movements.
The entire processing chain, from the moment the audio input is received until the resulting hand movement, takes about 1.5 seconds, and the false-positive rate is below 2%.
In this project we met our goals and created a free-to-use algorithm that recognizes the prosthetic hand user, detects voice activity to start processing the input audio, and identifies the activation words and keywords, all in real time. A rough sketch of this pipeline is given below.
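
As an illustration of the pipeline described above, the following minimal Python sketch chains voice activity detection, speaker verification, and keyword spotting into a hand command. Only the order of the stages is taken from the abstract; the energy-based gate, the spectral embedding, the thresholds, the helper functions, and the keyword-to-gesture mapping are hypothetical placeholders, not the project's actual models.

    # Minimal sketch of the voice-control pipeline (stage order from the abstract;
    # all models, thresholds, and names here are hypothetical placeholders).
    import numpy as np

    SAMPLE_RATE = 16_000
    # Hypothetical mapping from spotted keywords to hand gestures.
    KEYWORD_TO_GESTURE = {"open": "OPEN_HAND", "close": "CLOSE_HAND", "point": "POINT"}

    def voice_activity(frame: np.ndarray, energy_threshold: float = 1e-3) -> bool:
        """Simple energy gate: only speech-like frames are passed on for processing."""
        return float(np.mean(frame ** 2)) > energy_threshold

    def embed(frame: np.ndarray) -> np.ndarray:
        """Placeholder speaker embedding: a normalized magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(frame, n=512))
        return spectrum / (np.linalg.norm(spectrum) + 1e-9)

    def is_registered_user(frame: np.ndarray, user_embedding: np.ndarray,
                           threshold: float = 0.8) -> bool:
        """Verify the speaker via cosine similarity to the enrolled user's embedding."""
        emb = embed(frame)
        sim = float(np.dot(emb, user_embedding) /
                    (np.linalg.norm(emb) * np.linalg.norm(user_embedding) + 1e-9))
        return sim > threshold

    def spot_keyword(frame: np.ndarray) -> str | None:
        """Placeholder keyword spotter; a trained model would return e.g. 'open'."""
        return None

    def process(frame: np.ndarray, user_embedding: np.ndarray) -> str | None:
        """Pipeline: voice activity -> speaker verification -> keyword -> hand command."""
        if not voice_activity(frame):
            return None
        if not is_registered_user(frame, user_embedding):
            return None
        keyword = spot_keyword(frame)
        return KEYWORD_TO_GESTURE.get(keyword) if keyword else None

    if __name__ == "__main__":
        enrolled = embed(np.random.randn(SAMPLE_RATE))   # enrollment from the user's voice
        command = process(np.random.randn(SAMPLE_RATE), enrolled)
        print("hand command:", command)

In the real system the placeholder embedding and keyword spotter would be replaced by the trained, user-specific models, and the audio would arrive as a live stream rather than a single one-second buffer.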

Poster: User Specific Speech Recognition For Controlling a 3D Printed Prosthetic Hand
Collaborators:
Haifa-3D