Cross-lingual word embeddings for low-resource language modeling

Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, Trevor Cohn

    Research output: Chapter in Book/Report/Conference proceeding › Conference paper published in Proceedings

    Abstract

    Most languages have no established writing system and minimal written records. However, textual data is essential for natural language processing, and particularly important for training language models to support speech recognition. Even in cases where text data is missing, there are some languages for which bilingual lexicons are available, since creating lexicons is a fundamental task of documentary linguistics. We investigate the use of such lexicons to improve language models when textual training data is limited to as few as a thousand sentences. The method involves learning cross-lingual word embeddings as a preliminary step in training monolingual language models. Results across a number of languages show that language models are improved by this pre-training. Application to Yongning Na, a threatened language, highlights challenges in deploying the approach in real low-resource environments.
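    The abstract describes pre-training cross-lingual word embeddings and then using them when training a monolingual language model. A common realization of this idea, sketched below under assumptions not taken from the paper, is to initialize the language model's embedding matrix from the pretrained cross-lingual vectors where the vocabulary overlaps, falling back to random vectors otherwise (the `pretrained` lexicon here is a toy, hypothetical example):

    ```python
    import numpy as np

    def init_embeddings(vocab, pretrained, dim, seed=0):
        """Build an LM embedding matrix: copy pretrained cross-lingual
        vectors where available, sample small random vectors otherwise."""
        rng = np.random.default_rng(seed)
        emb = rng.normal(scale=0.1, size=(len(vocab), dim))
        for i, word in enumerate(vocab):
            if word in pretrained:
                emb[i] = pretrained[word]  # reuse the cross-lingual vector
        return emb

    # Toy pretrained cross-lingual space (hypothetical values).
    pretrained = {"water": np.ones(4), "fire": np.zeros(4)}
    vocab = ["water", "fire", "unseen"]
    E = init_embeddings(vocab, pretrained, dim=4)
    ```

    The resulting matrix would then serve as the (fixed or fine-tuned) input embedding layer of the language model; the paper's actual training procedure may differ in detail.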

    Original language: English
    Title of host publication: Long Papers - Continued
    Place of publication: Valencia, Spain
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 937-947
    Number of pages: 11
    Volume: 1
    ISBN (Electronic): 9781510838604
    Publication status: Published - 1 Jul 2017
    Event: 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Valencia, Spain
    Duration: 3 Apr 2017 – 7 Apr 2017

    Conference

    Conference: 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017
    Country: Spain
    City: Valencia
    Period: 3/04/17 – 7/04/17


    Cite this

    Adams, O., Makarucha, A., Neubig, G., Bird, S., & Cohn, T. (2017). Cross-lingual word embeddings for low-resource language modeling. In Long Papers - Continued (Vol. 1, pp. 937-947). Association for Computational Linguistics (ACL).