Meta, the parent company of Facebook, has unveiled a significant breakthrough in artificial intelligence with the launch of Omnilingual ASR, an open-source automatic speech recognition (ASR) system capable of understanding and transcribing more than 1,600 spoken languages. Its coverage includes around 500 low-resource languages that have never before been supported by AI-based transcription tools.
Developed by Meta’s Fundamental AI Research (FAIR) team, the system aims to dramatically expand global access to speech technology and foster inclusivity across linguistic communities that have traditionally been excluded from AI developments.
Meta’s AI chief, Alexandr Wang, announced the advancement on X (formerly Twitter) on November 10, saying, “Meta Omnilingual ASR expands speech recognition to 1,600+ languages, including 500 never before supported, as a major step towards truly universal AI. We are open-sourcing a full suite of models and a dataset.”
The Omnilingual ASR system tackles a long-standing imbalance in the AI ecosystem, where most speech recognition platforms prioritise dominant global languages. By offering support for hundreds of underrepresented tongues, Meta aims to narrow the digital linguistic gap and improve accessibility worldwide.
At the core of this new system lies the Omnilingual wav2vec 2.0 model — a multilingual speech engine scaled to seven billion parameters. It was trained using publicly available datasets as well as speech samples collected from diverse linguistic communities in collaboration with organisations such as the Mozilla Foundation’s Common Voice and Lanfrica. The inclusion of native speakers helped capture authentic accents, dialects and speech variations across regions.
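To give a sense of what inference with a wav2vec 2.0-style speech model looks like in practice, the sketch below uses the Hugging Face transformers pipeline with an earlier, publicly released English-only wav2vec 2.0 checkpoint as a stand-in; the Omnilingual ASR models are distributed separately, and their exact loading interface may differ.

```python
# Minimal sketch of ASR inference with a wav2vec 2.0-family model via the
# Hugging Face transformers pipeline. The checkpoint below is an older,
# English-only wav2vec 2.0 model used purely as a stand-in; it is not one
# of the newly released Omnilingual ASR models.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-base-960h",
)

# "sample.wav" is a placeholder path to a short speech recording.
result = asr("sample.wav")
print(result["text"])
```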
Meta’s internal evaluation showed strong accuracy across most languages: over 95% of high- and medium-resource languages recorded a character error rate (CER) below 10%, while only 36% of low-resource languages met the same benchmark. The results underline the persistent challenge of building accurate speech recognition for lesser-documented languages.
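For context, character error rate measures the character-level edit distance between a model’s transcript and a reference transcript, divided by the length of the reference. Here is a minimal sketch of that computation in Python; the function name and example strings are illustrative and not drawn from Meta’s evaluation code.

```python
def character_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein edit distance between the two strings, divided by the reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # prev[j] holds the edit distance between the first i-1 reference
    # characters and the first j hypothesis characters (rolling DP row).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            curr[j] = min(
                prev[j] + 1,             # deletion of a reference character
                curr[j - 1] + 1,         # insertion of a hypothesis character
                prev[j - 1] + (r != h),  # substitution (free if characters match)
            )
        prev = curr
    return prev[len(hyp)] / max(len(ref), 1)

# Example: one missing character in a 21-character reference gives a CER of about 4.8%.
print(character_error_rate("speech recognition ai", "speech recognitio ai"))
```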
By releasing Omnilingual ASR as open source, Meta hopes to attract global collaboration from researchers, developers and organisations.