Please use this identifier to cite or link to this item:
https://idr.l2.nitk.ac.in/jspui/handle/123456789/11569
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Afshan, A. | |
dc.contributor.author | Ghosh, P.K. | |
dc.date.accessioned | 2020-03-31T08:35:19Z | - |
dc.date.available | 2020-03-31T08:35:19Z | - |
dc.date.issued | 2015 | |
dc.identifier.citation | Speech Communication, 2015, Vol. 66, pp. 1-16 | en_US |
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/11569 | - |
dc.description.abstract | In subject-independent acoustic-to-articulatory inversion, the articulatory kinematics of a test subject are estimated assuming that the training corpus does not include data from the test subject. The training corpus in subject-independent inversion (SII) is formed from acoustic and articulatory kinematics data, and the acoustic mismatch between the training and test subjects is then estimated through an acoustic normalization using acoustic data drawn from a large pool of speakers called the generic acoustic space (GAS). In this work, we focus on improving SII performance through better acoustic normalization and adaptation. We propose unsupervised and several supervised ways of clustering GAS for acoustic normalization, and we adapt the acoustic models of GAS using the acoustic data of the training and test subjects in SII. It is found that SII performance improves significantly (~25% relative on average) over subject-dependent inversion when the acoustic clusters in GAS correspond to phonetic units (or states of 3-state phonetic HMMs) and when the acoustic model built on GAS is adapted to the training and test subjects while optimizing the inversion criterion. © 2014 Elsevier B.V. All rights reserved. | en_US |
dc.title | Improved subject-independent acoustic-to-articulatory inversion | en_US |
dc.type | Article | en_US |
Appears in Collections: 1. Journal Articles
Files in This Item:
There are no files associated with this item.
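The abstract above describes clustering a generic acoustic space (GAS) for acoustic normalization in subject-independent inversion. The sketch below is a minimal, hypothetical illustration of the unsupervised clustering variant only, assuming pooled MFCC frames from the GAS speakers and scikit-learn's GaussianMixture; the feature extraction pipeline, the supervised (phonetic-unit / HMM-state) clustering, the subject adaptation, and the inversion model itself are not reproduced here, and all function names are placeholders rather than the paper's implementation.

```python
# Hedged sketch (not from the cited paper): unsupervised clustering of a
# generic acoustic space (GAS). MFCC frames pooled from many speakers are
# clustered with a Gaussian mixture model; per-frame cluster posteriors could
# then drive per-cluster acoustic normalization for a new subject.
import numpy as np
from sklearn.mixture import GaussianMixture


def cluster_gas(mfcc_frames_per_speaker, n_clusters=64, seed=0):
    """Pool MFCC frames from all GAS speakers and fit a GMM over them.

    mfcc_frames_per_speaker: list of (num_frames_i, num_ceps) arrays,
    one array per speaker in the GAS pool.
    Returns the fitted GaussianMixture; its components serve as the
    acoustic clusters.
    """
    pooled = np.vstack(mfcc_frames_per_speaker)  # (total_frames, num_ceps)
    gmm = GaussianMixture(n_components=n_clusters,
                          covariance_type="diag",
                          random_state=seed)
    gmm.fit(pooled)
    return gmm


def cluster_posteriors(gmm, mfcc_frames):
    """Per-frame cluster posteriors for one utterance of a (test) subject."""
    return gmm.predict_proba(mfcc_frames)  # (num_frames, n_clusters)


if __name__ == "__main__":
    # Toy stand-in for GAS data: 5 "speakers", 200 frames each, 13-dim MFCCs.
    rng = np.random.default_rng(0)
    gas = [rng.normal(size=(200, 13)) for _ in range(5)]
    gmm = cluster_gas(gas, n_clusters=8)
    post = cluster_posteriors(gmm, rng.normal(size=(50, 13)))
    print(post.shape)  # (50, 8)
```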