We describe a reusable Web component for capturing talk about images. A speaker is prompted with a series of images and talks about each one while adding gestures. Others can watch the audio-visual slideshow and navigate forwards and backwards by swiping on the images. The component supports phrase-aligned respeaking, translation, and commentary. This work extends the method of Basic Oral Language Documentation by prompting speakers with images and capturing their gestures. We show how the component is deployed in a mobile app for collecting and sharing know-how, developed in consultation with indigenous groups in Taiwan and Australia. We focus on food preparation practices, since this is an area where people are motivated to preserve and disseminate their cultural and linguistic heritage.
| Field | Value |
| --- | --- |
| Title of host publication | LREC 2018 Workshop |
| Subtitle of host publication | CCURL2018: Sustaining Knowledge Diversity in the Digital Age |
| Editors | Claudia Soria, Laurent Besacier, Laurette Pretorius |
| Place of publication | Paris, France |
| Publisher | European Language Resources Association (ELRA) |
| Number of pages | 8 |
| Publication status | Published - 12 May 2018 |
| Event | 11th International Conference on Language Resources and Evaluation, LREC 2018 - Miyazaki, Japan, 7 May 2018 to 12 May 2018 |
Bettinson, M., & Bird, S. (2018). Image-Gesture-Voice: a Web Component for Eliciting Speech. In C. Soria, L. Besacier, & L. Pretorius (Eds.), LREC 2018 Workshop: CCURL2018: Sustaining Knowledge Diversity in the Digital Age (pp. 1-8). European Language Resources Association (ELRA).