
A winner at the Meta-sponsored XR hackathon

Inspiration

In a world where technology is rapidly evolving, with devices becoming smaller, smarter, and more capable, we saw a significant opportunity to enhance communication for the deaf community. The Meta Quest 3, a lightweight and highly capable virtual reality (VR) headset, presents a unique chance to develop a solution that enables real-time translation between any spoken language and American Sign Language (ASL), and vice versa.

What it does

At the core of this solution is the ability to capture the speech of the hearing individual and seamlessly convert it into fluid sign language, displayed within the MR environment. Conversely, the headset also translates the user's hand gestures into spoken language, enabling a natural, bidirectional flow of communication.

How we built it

We set up the project in Unity (2023.3) with the Built-in Render Pipeline and integrated the latest available tools, including the Meta SDK with Building Blocks, and in particular the Voice SDK.
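
Roughly, the speech-capture side can be wired up like the sketch below. This is a minimal illustration and assumes the Voice SDK's AppVoiceExperience component and its OnFullTranscription event; exact namespaces and event names vary between SDK versions, and the SpeechCapture class name is ours.

    using Oculus.Voice;          // Meta Voice SDK (namespace may differ between versions)
    using UnityEngine;
    using UnityEngine.Events;

    // Listens for a full transcription from the Voice SDK and forwards the text
    // to whatever component turns words into sign-language animations.
    public class SpeechCapture : MonoBehaviour
    {
        [SerializeField] private AppVoiceExperience voiceExperience; // assigned in the Inspector
        [SerializeField] private UnityEvent<string> onTranscription; // e.g. a gesture player

        private void OnEnable()
        {
            voiceExperience.VoiceEvents.OnFullTranscription.AddListener(HandleTranscription);
            voiceExperience.Activate(); // start listening to the microphone
        }

        private void OnDisable()
        {
            voiceExperience.VoiceEvents.OnFullTranscription.RemoveListener(HandleTranscription);
        }

        private void HandleTranscription(string text)
        {
            onTranscription.Invoke(text);
            voiceExperience.Activate(); // re-arm for the next utterance
        }
    }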

For hand movements, we created an animation library that includes all American Sign Language (ASL) letters and some of the most commonly used words.
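
One way to organize such a library in Unity is a serializable word/letter-to-clip lookup like the sketch below; the names (GestureLibrary, Entry, Find) are illustrative rather than the project's actual asset structure.

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    // A simple ASL gesture library: maps a lower-case word or single letter
    // to the AnimationClip that plays the corresponding sign.
    [CreateAssetMenu(menuName = "SignSync/Gesture Library")]
    public class GestureLibrary : ScriptableObject
    {
        [Serializable]
        public struct Entry
        {
            public string token;        // e.g. "hello", "thank you", or a single letter like "a"
            public AnimationClip clip;  // the recorded hand animation for that sign
        }

        [SerializeField] private List<Entry> entries = new List<Entry>();

        private Dictionary<string, AnimationClip> lookup;

        // Returns the clip for a word or letter, or null if it is not in the library.
        public AnimationClip Find(string token)
        {
            if (lookup == null)
            {
                lookup = new Dictionary<string, AnimationClip>();
                foreach (var e in entries)
                    lookup[e.token.ToLowerInvariant()] = e.clip;
            }
            lookup.TryGetValue(token.ToLowerInvariant(), out var clip);
            return clip;
        }
    }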

We convert incoming speech to text, look up the corresponding hand gesture for each word, and display it to the user.
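
A minimal sketch of that step, reusing the illustrative GestureLibrary above: it splits the transcription into words, plays the matching clip for each one, and falls back to letter-by-letter fingerspelling for words that are not in the library. The fingerspelling fallback and the legacy Animation playback are assumptions made for the example, not necessarily how the project handles them.

    using System.Collections;
    using UnityEngine;

    // Splits a transcription into words, finds a clip for each one, and plays
    // the clips in sequence on the signing avatar.
    public class GesturePlayer : MonoBehaviour
    {
        [SerializeField] private GestureLibrary library;     // word/letter -> clip lookup
        [SerializeField] private Animation avatarAnimation;  // clips must be imported as Legacy

        public void Play(string transcription)
        {
            StopAllCoroutines();
            StartCoroutine(PlaySequence(transcription));
        }

        private IEnumerator PlaySequence(string transcription)
        {
            foreach (var word in transcription.ToLowerInvariant().Split(' '))
            {
                var clip = library.Find(word);
                if (clip != null)
                {
                    yield return PlayClip(clip);
                }
                else
                {
                    // Unknown word: fingerspell it letter by letter.
                    foreach (var letter in word)
                    {
                        var letterClip = library.Find(letter.ToString());
                        if (letterClip != null)
                            yield return PlayClip(letterClip);
                    }
                }
            }
        }

        private IEnumerator PlayClip(AnimationClip clip)
        {
            avatarAnimation.AddClip(clip, clip.name);
            avatarAnimation.Play(clip.name);
            yield return new WaitForSeconds(clip.length);
        }
    }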

Challenges we ran into

  • We had serious problems with the Voice SDK: it did not work on the Meta Quest 3 when we built the project. We asked the mentors, who tried to help us, and we got it working after losing a whole 24 hours.
  • We tried the GPT-4o plugin to generate the hand movements, but we couldn't get it to work correctly, so we realized we needed to create and train our own model.

Accomplishments that we're proud of

  • Making the world easier and more accessible for some people
  • Overcoming the problems we ran into and finishing the project

What we learned

  • A lot about VR, since this was our first VR hackathon
  • Designing for education requires a more structured approach and knowledge of the field and existing conventions.

What's next for SignSync

  • Adding a lightweight device to the headset that acts as a stereo microphone and a speaker
  • Implementing sign-to-speech functionality to and from any language
  • Implementing multi-speaker recognition by dynamically displaying individual dialogs and corresponding gestures for each speaker, using the input from a stereo microphone to distinguish between them.
  • Ambient sound awareness: capturing sounds that come from outside the user's view and directing the user towards them
