Researchers developed an augmented tongue for speech therapy

Augmented tongue

Researchers have developed an augmented tongue that can display the movements of our own tongues in real time. The movements are captured by an ultrasound probe placed under the jaw and processed by a machine learning algorithm that controls an articulatory "talking head."
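The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the linear map and all names below (`frame_to_articulators`, `W`, `b`) are hypothetical stand-ins for the actual machine learning algorithm, which the article does not detail.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 64 * 64   # flattened ultrasound frame
N_PARAMS = 12        # hypothetical tongue/lip/jaw control parameters

# Pretend these weights were learned offline from the expert speaker's data.
W = rng.normal(scale=0.01, size=(N_PARAMS, N_PIXELS))
b = np.zeros(N_PARAMS)

def frame_to_articulators(frame: np.ndarray) -> np.ndarray:
    """Map one ultrasound frame to articulatory control parameters."""
    return W @ frame.ravel() + b

def animate(frames):
    """Process a stream of frames, as a real-time loop would."""
    return [frame_to_articulators(f) for f in frames]

frames = rng.random((30, 64, 64))   # ~1 s of ultrasound video at 30 fps
params = animate(frames)
print(len(params), params[0].shape)  # one parameter vector per frame
```

Each parameter vector would then drive the avatar's tongue, palate and jaw for that frame.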

The system was developed by researchers at GIPSA-Lab (CNRS/Université Grenoble Alpes/Grenoble INP) and at INRIA Grenoble Rhône-Alpes.

In addition to the face and lips, this avatar shows the tongue, palate and teeth, which are usually hidden inside the vocal tract. This "visual biofeedback" system could be used for speech therapy and for learning foreign languages.

For a person with an articulation disorder, speech therapy relies partly on repetition exercises. The practitioner qualitatively analyzes the patient's pronunciations and explains orally, often with drawings, how to place the articulators, particularly the tongue, of which patients are generally unaware. How effective the therapy is depends on how well the patient can integrate what they are told.

This is where "visual biofeedback" systems can help. They let patients see their articulatory movements, in particular how their tongues move, in real time, so that they become aware of these movements and can correct pronunciation problems faster.

Ultrasound

For several years, researchers have been using ultrasound to design biofeedback systems. In this new work, they propose to improve the visual feedback by automatically animating an articulatory talking head in real time from the ultrasound images.

The strength of this new system lies in a machine learning algorithm that the researchers have been working on for several years. This algorithm can process articulatory movements that users cannot yet achieve when they start using the system. This property is indispensable for the targeted therapeutic applications.

The algorithm exploits a probabilistic model based on a large articulatory database acquired from an "expert" speaker capable of pronouncing all the sounds in one or more languages. This model is automatically adapted to the morphology of each new user during a short system calibration phase, in which the patient pronounces a few phrases.
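The calibration step can be illustrated with a simplified stand-in: fitting an affine transform (by least squares) that maps a few frames of the new user's data into the expert speaker's feature space, so the pretrained model can be reused. The article mentions a probabilistic model; this linear adaptation, and every name in the sketch, is an assumption for illustration only, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16  # hypothetical dimensionality of the ultrasound feature vectors

# Calibration phrases yield paired frames:
# the user's features and the corresponding expert-space targets.
user_frames = rng.normal(size=(200, D))
true_A = np.eye(D) + 0.1 * rng.normal(size=(D, D))
expert_targets = user_frames @ true_A.T + 0.5  # synthetic ground truth

# Fit the user -> expert mapping, with an appended bias column.
X = np.hstack([user_frames, np.ones((len(user_frames), 1))])
coef, *_ = np.linalg.lstsq(X, expert_targets, rcond=None)

def adapt(frame: np.ndarray) -> np.ndarray:
    """Project a new user's frame into the expert model's space."""
    return np.append(frame, 1.0) @ coef

# On this noiseless synthetic data the fit recovers the mapping exactly.
residual = np.abs(adapt(user_frames[0]) - expert_targets[0]).max()
print(residual < 1e-6)
```

After this mapping is fitted, every incoming frame from the new user can be projected into the expert space before being fed to the pretrained model.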

The system has now been tested, in a simplified version, in a clinical trial for patients who have had tongue surgery. The researchers are also developing another version of the system, in which the articulatory talking head is animated automatically, but directly from the user's voice.

More information: [Speech Communication]