Meta wants to build a universal language translator

During Wednesday's Inside the Lab: Building for the Metaverse with AI event, Meta CEO Mark Zuckerberg didn't just lay out his company's no-holds-barred vision for the future, dubbed the Metaverse. He also revealed that Meta's research division is working on a universal speech translation system that could streamline user interactions with AI within the company's digital universe.

“The big goal here is to build a universal model that can integrate knowledge across all modalities…all of the information that’s captured by rich sensors,” Zuckerberg said. “This will enable a vast scale of predictions, decisions, and generation as well as entirely new methods of training architectures and algorithms that can learn from a wide and diverse range of different inputs.”

Zuckerberg noted that Facebook has continually worked to develop technologies that allow more people around the world to access the Internet and is confident that these efforts will translate to the metaverse as well.

“That’s going to be especially important when people start teleporting through virtual worlds and having experiences with people from different backgrounds,” he continued. “Now we have the opportunity to improve the Internet and set a new normal where we can all communicate with each other, no matter what language we speak or where we come from. And if we succeed, this is just one example of how AI can help bring people together on a global scale.”

Meta's plan is twofold. First, Meta is developing No Language Left Behind, a translation system capable of learning "all languages, even if there's not a lot of text available to learn," according to Zuckerberg. "We're creating a single model that can translate hundreds of languages with industry-leading results for most language pairs – everything from Asturian to Luganda to Urdu."

Second, Meta wants to create an AI Babelfish. “The goal here is instant speech-to-speech translation in any language, even those that are primarily spoken; the ability to communicate with anyone in any language,” Zuckerberg promised. “It’s a superpower that people have always dreamed of and AI is going to deliver it in our lifetime.”

These are big claims from a company whose machine-generated realm doesn't yet extend below the waist. Still, Facebook-cum-Meta has a long and broad track record in developing AI. In the past year alone, the company has announced advances in self-supervised learning techniques, natural language processing, multimodal learning, text-based generation and AI understanding of social norms, and it has even built a supercomputer to support its machine learning research.

The company still faces the major hurdle of data scarcity. "Machine translation (MT) systems for text translations typically rely on learning from millions of sentences of annotated data," Facebook AI Research wrote in a blog post on Wednesday. "For this reason, machine translation systems capable of high-quality translations have only been developed for the handful of languages that dominate the web."

Translating between two languages other than English is even more difficult, according to the FAIR team. Most machine translation systems first convert the source speech to text, translate that text into the second language, and then convert the translated text back to speech. This delays the translation process and creates an inordinate reliance on the written word, limiting the effectiveness of these systems for predominantly oral languages. Direct speech-to-speech systems, like the ones Meta is working on, would not be hampered in this way, resulting in a faster and more efficient translation process.
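
To make the contrast concrete, here is a minimal Python sketch of the two pipeline shapes the FAIR team is comparing. Everything in it (the Audio container and the transcribe, translate_text, synthesize and direct_translation functions) is an illustrative stand-in rather than Meta's actual models or API; it only shows why the cascaded route adds extra hops and leans on written text.

```python
# Sketch of a cascaded speech translation pipeline vs. a direct
# speech-to-speech model. All types and functions are toy stand-ins.

from dataclasses import dataclass, field


@dataclass
class Audio:
    """Toy container for a waveform plus the language being spoken."""
    language: str                                  # e.g. "es" for Spanish
    samples: list = field(default_factory=list)    # raw waveform samples


def transcribe(audio: Audio) -> str:
    """Speech-to-text stage (ASR). Stand-in for a real recognizer."""
    return f"<transcript of {audio.language} speech>"


def translate_text(text: str, source: str, target: str) -> str:
    """Text-to-text MT stage. Stand-in for a real translation model;
    this is the stage that needs large amounts of written training data."""
    return f"<{text} rendered from {source} into {target}>"


def synthesize(text: str, language: str) -> Audio:
    """Text-to-speech stage (TTS). Stand-in for a real synthesizer."""
    return Audio(language=language)


def cascaded_translation(audio: Audio, target_lang: str) -> Audio:
    """Conventional pipeline: ASR -> MT -> TTS. Each hop adds latency,
    and the text in the middle requires a usable written form."""
    text = transcribe(audio)
    translated = translate_text(text, audio.language, target_lang)
    return synthesize(translated, target_lang)


def direct_translation(audio: Audio, target_lang: str) -> Audio:
    """The direct speech-to-speech idea Meta describes: one model maps
    source-language audio straight to target-language audio, skipping
    the text bottleneck. Left unimplemented in this sketch."""
    raise NotImplementedError


if __name__ == "__main__":
    spanish_clip = Audio(language="es")
    english_clip = cascaded_translation(spanish_clip, target_lang="en")
    print(english_clip.language)  # "en"
```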
