Image-Gesture-Voice: A Web Component for Eliciting Speech

Mat Bettinson, Steven Bird

    Research output: Chapter in Book/Report/Conference proceeding › Conference Paper published in Proceedings › peer-review



    We describe a reusable Web component for capturing talk about images. A speaker is prompted with a series of images and talks about each one while adding gestures. Others can watch the audio-visual slideshow, and navigate forwards and backwards by swiping on the images. The component supports phrase-aligned respeaking, translation, and commentary. This work extends the method of Basic Oral Language Documentation by prompting speakers with images and capturing their gestures. We show how the component is deployed in a mobile app for collecting and sharing know-how which was developed in consultation with Indigenous groups in Taiwan and Australia. We focus on food preparation practices since this is an area where people are motivated to preserve and disseminate their cultural and linguistic heritage.
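    The abstract describes a slideshow of image prompts with per-image recordings and swipe navigation. As a rough illustration only, the core state of such a component might be sketched as below; the class and method names are hypothetical and are not taken from the published component.

    ```javascript
    // Hypothetical sketch of the state behind an image-gesture-voice
    // slideshow: a list of image prompts, each of which can carry a
    // phrase-aligned recording, navigated by swiping left/right.
    class SlideshowModel {
      constructor(images) {
        this.images = images;                              // image prompts
        this.recordings = new Array(images.length).fill(null);
        this.index = 0;                                    // current slide
      }

      current() {
        return this.images[this.index];
      }

      // Swiping left advances to the next image; swiping right goes back.
      // Navigation is clamped at both ends of the series.
      swipe(direction) {
        if (direction === 'left' && this.index < this.images.length - 1) {
          this.index += 1;
        } else if (direction === 'right' && this.index > 0) {
          this.index -= 1;
        }
        return this.index;
      }

      // Attach a recording (e.g. an audio blob) to the current image,
      // so each slide pairs a picture with its spoken phrase.
      attachRecording(audio) {
        this.recordings[this.index] = audio;
      }
    }
    ```

    In a real deployment this model would be wrapped in a custom element that renders the current image, listens for touch/pointer gestures, and records audio via the browser's media APIs.
    
    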
    Original language: English
    Title of host publication: LREC 2018 Workshop
    Subtitle of host publication: CCURL2018: Sustaining Knowledge Diversity in the Digital Age
    Editors: Claudia Soria, Laurent Besacier, Laurette Pretorius
    Place of publication: Japan
    Publisher: European Language Resources Association (ELRA)
    Number of pages: 8
    ISBN (Print): 979-10-95546-22-1
    Publication status: Published - 12 May 2018
    Event: 11th International Conference on Language Resources and Evaluation, LREC 2018 - Miyazaki, Japan
    Duration: 7 May 2018 - 12 May 2018




