dc.contributor.author Van Heerden, C
dc.contributor.author Kleynhans, N
dc.contributor.author Barnard, E
dc.contributor.author Davel, M
dc.date.accessioned 2012-07-03T15:01:49Z
dc.date.available 2012-07-03T15:01:49Z
dc.date.issued 2010-05
dc.identifier.citation Van Heerden, C, Kleynhans, N, Barnard, E and Davel, M. Pooling ASR data for closely related languages. Proceedings of the Workshop on Spoken Languages Technologies for Under-Resourced Languages (SLTU 2010), Penang, Malaysia, May 2010 en_US
dc.identifier.isbn 978-967-5417-75-7
dc.identifier.uri http://www.mica.edu.vn/sltu-2010/proceedings/Proceedings%20of%20the%202nd%20International%20Workshop%20on%20Spoken%20Languages%20Technologies%20for%20Under-resourced%20Languages.pdf
dc.identifier.uri http://hdl.handle.net/10204/5974
dc.description Proceedings of the Workshop on Spoken Languages Technologies for Under-Resourced Languages (SLTU 2010), Penang, Malaysia, May 2010 en_US
dc.description.abstract We describe several experiments that were conducted to assess the viability of data pooling as a means to improve speech-recognition performance for under-resourced languages. Two groups of closely related languages from the Southern Bantu language family were studied, and our tests involved phoneme recognition on telephone speech using standard tied-triphone Hidden Markov Models. Approximately 6 to 11 hours of speech from around 170 speakers was available for training in each language. We find that useful improvements in recognition accuracy can be achieved when pooling data from languages that are highly similar, with two hours of data from a closely related language being approximately equivalent to one hour of data from the target language in the best case. However, the benefit decreases rapidly as languages become slightly more distant, and is also expected to decrease when larger corpora are available. Our results suggest that similarities in triphone frequencies are the most accurate predictor of the performance of language pooling in the conditions studied here. en_US
dc.language.iso en en_US
dc.publisher School of Computer Sciences, Universiti Sains Malaysia en_US
dc.subject Speech recognition en_US
dc.subject Data pooling en_US
dc.subject Under-resourced languages en_US
dc.title Pooling ASR data for closely related languages en_US
dc.type Presentation en_US
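
The abstract above reports that similarity in triphone frequencies was the most accurate predictor of whether pooling data from a related language would help. The record does not state which similarity measure the authors used, so the sketch below is only one plausible way to compare two languages' triphone frequency profiles: relative frequencies of HTK-style triphone labels compared by cosine similarity. The triphone sequences and the choice of cosine similarity are assumptions for illustration, not the paper's method.

from collections import Counter
import math

def triphone_frequencies(corpus_triphones):
    """Relative frequency of each triphone label in a phone-aligned corpus."""
    counts = Counter(corpus_triphones)
    total = sum(counts.values())
    return {tri: c / total for tri, c in counts.items()}

def cosine_similarity(freqs_a, freqs_b):
    """Cosine similarity between two triphone frequency distributions."""
    shared = set(freqs_a) | set(freqs_b)
    dot = sum(freqs_a.get(t, 0.0) * freqs_b.get(t, 0.0) for t in shared)
    norm_a = math.sqrt(sum(v * v for v in freqs_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freqs_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical triphone sequences standing in for two languages' transcriptions.
target_language = ["a-b+a", "b-a+b", "a-b+a", "s-i+k", "i-k+a"]
donor_language  = ["a-b+a", "b-a+b", "s-i+k", "s-i+k", "k-a+t"]

similarity = cosine_similarity(
    triphone_frequencies(target_language),
    triphone_frequencies(donor_language),
)
print(f"Triphone-frequency similarity: {similarity:.3f}")

Under the abstract's finding, a higher score between a target language and a candidate donor language would suggest that pooling the donor's data is more likely to improve recognition accuracy.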

