TY - GEN
T1 - Bootstrapping techniques for polysynthetic morphological analysis
AU - Lane, William
AU - Bird, Steven
N1 - Funding Information:
We are grateful for the support of the Warddeken Rangers of West Arnhem. This work was covered by a research permit from the Northern Land Council, and was sponsored by the Australian government through a PhD scholarship and by grants from the Australian Research Council and the Indigenous Languages and Arts Program. We are grateful to four anonymous reviewers for their feedback on an earlier version of this paper.
Publisher Copyright:
© 2020 Association for Computational Linguistics
PY - 2020/7
Y1 - 2020/7
AB - Polysynthetic languages have exceptionally large and sparse vocabularies, thanks to the number of morpheme slots and combinations in a word. This complexity, together with a general scarcity of written data, poses a challenge to the development of natural language technologies. To address this challenge, we offer linguistically-informed approaches for bootstrapping a neural morphological analyzer, and demonstrate its application to Kunwinjku, a polysynthetic Australian language. We generate data from a finite state transducer to train an encoder-decoder model. We improve the model by "hallucinating" missing linguistic structure into the training data, and by resampling from a Zipf distribution to simulate a more natural distribution of morphemes. The best model accounts for all instances of reduplication in the test set and achieves an accuracy of 94.7% overall, a 10 percentage point improvement over the FST baseline. This process demonstrates the feasibility of bootstrapping a neural morph analyzer from minimal resources.
UR - http://www.scopus.com/inward/record.url?scp=85106998178&partnerID=8YFLogxK
U2 - 10.18653/v1/2020.acl-main.594
DO - 10.18653/v1/2020.acl-main.594
M3 - Conference Paper published in Proceedings
AN - SCOPUS:85106998178
VL - 1
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 6652
EP - 6661
BT - ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference
A2 - Jurafsky, Dan
A2 - Chai, Joyce
A2 - Schluter, Natalie
A2 - Tetreault, Joel
PB - Association for Computational Linguistics (ACL)
CY - Stroudsburg, Pennsylvania
T2 - 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020
Y2 - 5 July 2020 through 10 July 2020
ER -