Please use this identifier to cite or link to this item:
https://idr.l1.nitk.ac.in/jspui/handle/123456789/9784
Title: | Acoustic-phonetic feature based Kannada dialect identification from vowel sounds |
Authors: | Chittaragi, N.B.; Koolagudi, S.G. |
Issue Date: | 2019 |
Citation: | International Journal of Speech Technology, 2019, Vol. 22, Issue 4, pp. 1099-1113 |
Abstract: | In this paper, a dialect identification system is proposed for the Kannada language using vowel sounds. Dialectal cues are characterized through acoustic parameters such as formant frequencies (F1–F3) and prosodic features [energy, pitch (F0), and duration]. For this purpose, a vowel dataset is collected from native speakers of Kannada belonging to different dialectal regions. Global features representing frame-level statistics such as the mean, minimum, maximum, standard deviation, and variance are extracted from vowel sounds. Local features representing temporal dynamic properties at the contour level are derived from the steady-state vowel region. Three decision-tree-based ensemble algorithms, namely random forest, extreme random forest (ERF), and extreme gradient boosting, are used for classification. The performance of global and local features is evaluated individually. Further, the significance of each feature in dialect discrimination is analyzed using single-factor ANOVA (analysis of variance) tests. Global features with the ERF ensemble model have shown the best average dialect identification performance, around 76%. The contribution of each feature to dialect identification is also verified; the role of duration, energy, pitch, and the three formant features is found to be evidential in Kannada dialect classification. © 2019, Springer Science+Business Media, LLC, part of Springer Nature. |
URI: | DOI: 10.1007/s10772-019-09646-1 http://idr.nitk.ac.in/jspui/handle/123456789/9784 |
Appears in Collections: | 1. Journal Articles |
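The abstract's global feature extraction (frame-level statistics per acoustic parameter, plus vowel duration) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-frame formant, pitch, and energy values below are invented for demonstration, and a 10 ms frame shift is assumed.

```python
import statistics

# Hypothetical per-frame measurements for one vowel token (values are
# illustrative, NOT from the paper's dataset): formants F1-F3 and pitch F0
# in Hz, energy in dB, one value per analysis frame.
frames = {
    "F1": [640.0, 655.0, 648.0, 660.0, 652.0],
    "F2": [1190.0, 1205.0, 1198.0, 1210.0, 1202.0],
    "F3": [2490.0, 2510.0, 2505.0, 2498.0, 2502.0],
    "F0": [118.0, 121.0, 120.0, 119.0, 122.0],
    "energy": [62.1, 63.4, 63.0, 62.8, 63.2],
}
FRAME_SHIFT_S = 0.010  # assumed 10 ms analysis shift

def global_features(frames, frame_shift_s):
    """Compute the five frame-level global statistics named in the abstract
    (mean, minimum, maximum, standard deviation, variance) for each acoustic
    parameter, plus overall vowel duration."""
    feats = {}
    n_frames = 0
    for name, values in frames.items():
        feats[f"{name}_mean"] = statistics.mean(values)
        feats[f"{name}_min"] = min(values)
        feats[f"{name}_max"] = max(values)
        feats[f"{name}_std"] = statistics.stdev(values)
        feats[f"{name}_var"] = statistics.variance(values)
        n_frames = len(values)
    # Duration derived from frame count; the paper measures it per vowel.
    feats["duration_s"] = n_frames * frame_shift_s
    return feats

feats = global_features(frames, FRAME_SHIFT_S)
print(feats["F1_mean"], feats["duration_s"])
```

One such feature vector per vowel token would then feed the ensemble classifiers the paper evaluates (random forest, extreme random forest, extreme gradient boosting), with per-feature single-factor ANOVA used to gauge dialect-discriminative power.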