Abstract
Training a high-accuracy dependency parser requires a large treebank, but treebanks are costly and time-consuming to build. We propose a learning method that needs less annotated data, based on the observation that languages share underlying syntactic structures. We exploit cues from a source language to guide the learning process in the target language. Our model needs at most half of the annotation effort of a purely supervised method to reach the same accuracy.
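The abstract gives only the high-level idea. As a purely illustrative sketch (not the paper's actual method), the snippet below shows one common way cross-lingual guidance can be realized: interpolating arc scores from a target-language parser trained on a small treebank with guidance scores projected from a source language. The function `pick_heads`, the toy scores, and the weight `alpha` are assumptions introduced here for illustration.

```python
# Hypothetical sketch: bias target-language head selection with
# source-language guidance scores. All names and numbers are illustrative.
from typing import Dict, List, Tuple

def pick_heads(
    sentence: List[str],
    target_scores: Dict[Tuple[int, int], float],    # (head, dependent) -> score from the small target model
    guidance_scores: Dict[Tuple[int, int], float],  # (head, dependent) -> score projected from the source language
    alpha: float = 0.5,                              # assumed interpolation weight
) -> List[int]:
    """Greedy head selection: for each token, pick the head that maximizes
    an interpolation of target-model and source-guidance scores."""
    n = len(sentence)
    heads = []
    for dep in range(1, n + 1):          # tokens are indexed 1..n, 0 is the root
        best_head, best_score = 0, float("-inf")
        for head in range(0, n + 1):
            if head == dep:
                continue
            s = (1 - alpha) * target_scores.get((head, dep), 0.0) \
                + alpha * guidance_scores.get((head, dep), 0.0)
            if s > best_score:
                best_head, best_score = head, s
        heads.append(best_head)
    return heads

if __name__ == "__main__":
    sent = ["She", "reads", "books"]
    # Toy scores: the target model is unsure about "books"; the guidance
    # prefers attaching it to the verb, mimicking a cross-lingual cue.
    target = {(0, 2): 2.0, (2, 1): 1.5, (2, 3): 0.2, (1, 3): 0.3}
    guidance = {(0, 2): 1.0, (2, 1): 1.0, (2, 3): 1.5}
    print(pick_heads(sent, target, guidance))  # -> [2, 0, 2]
```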
Original language | English |
---|---|
Title of host publication | Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing |
Subtitle of host publication | Proceedings of the Conference: Volume 2: Short Papers |
Place of Publication | Beijing, China |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 845-850 |
Number of pages | 6 |
Volume | 2 |
ISBN (Electronic) | 9781941643730 |
DOIs | |
Publication status | Published - Jul 2015 |
Externally published | Yes |
Event | 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL-IJCNLP 2015 - Beijing, China. Duration: 26 Jul 2015 → 31 Jul 2015
Conference
Conference | 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL-IJCNLP 2015 |
---|---|
Country/Territory | China |
City | Beijing |
Period | 26/07/15 → 31/07/15 |