Please use this identifier to cite or link to this item:
https://idr.l2.nitk.ac.in/jspui/handle/123456789/7433
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chittaragi, N.B. | |
dc.contributor.author | Limaye, A. | |
dc.contributor.author | Chandana, N.T. | |
dc.contributor.author | Annappa, B. | |
dc.contributor.author | Koolagudi, S.G. | |
dc.date.accessioned | 2020-03-30T09:59:05Z | - |
dc.date.available | 2020-03-30T09:59:05Z | - |
dc.date.issued | 2019 | |
dc.identifier.citation | Advances in Intelligent Systems and Computing, 2019, Vol. 863, pp. 79-87 | en_US |
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/7433 | - |
dc.description.abstract | This paper proposes a dialect identification system for the Kannada language. A system that can automatically identify the dialect of the language being spoken has a wide variety of applications. However, not many Automatic Speech Recognition (ASR) and dialect identification tasks have been carried out for the majority of Indian languages, and only a few good-quality annotated audio datasets are available. In this paper, a new dataset covering 5 spoken dialects of the Kannada language is introduced. Spectral and prosodic features capture the most prominent characteristics for recognition of Kannada dialects. Support Vector Machine (SVM) and neural network algorithms are used for modeling a text-independent recognition system. A neural network model that attempts to identify dialects based on sentence-level cues has also been built. Hyper-parameters for the SVM and neural network models are chosen using grid search. Neural network models have outperformed SVMs when complete utterances are considered. © Springer Nature Singapore Pte Ltd. 2019. | en_US |
dc.title | Automatic text-independent Kannada dialect identification system | en_US |
dc.type | Book chapter | en_US |
Appears in Collections: 2. Conference Papers
Files in This Item:
There are no files associated with this item.
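As a brief illustration of the modeling approach described in the abstract (grid search over SVM hyper-parameters for utterance-level dialect classification), a minimal sketch using scikit-learn is given below. The feature matrix, labels, and parameter grid are hypothetical placeholders, not the authors' dataset or code; spectral and prosodic feature extraction is assumed to have already been done.

```python
# Hypothetical sketch: grid-searched SVM for Kannada dialect classification.
# X is an (n_utterances, n_features) matrix of spectral + prosodic features;
# y holds one of 5 dialect labels per utterance. Both are placeholders here.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.rand(200, 40)          # placeholder features (e.g. MFCC + prosody statistics)
y = np.random.randint(0, 5, 200)     # placeholder labels for 5 dialects

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Hyper-parameter selection via grid search, as mentioned in the abstract.
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```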