Please use this identifier to cite or link to this item: https://idr.l2.nitk.ac.in/jspui/handle/123456789/11569
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Afshan, A.
dc.contributor.author: Ghosh, P.K.
dc.date.accessioned: 2020-03-31T08:35:19Z
dc.date.available: 2020-03-31T08:35:19Z
dc.date.issued: 2015
dc.identifier.citation: Speech Communication, 2015, Vol. 66, pp. 1-16 (en_US)
dc.identifier.uri: http://idr.nitk.ac.in/jspui/handle/123456789/11569
dc.description.abstract: In subject-independent acoustic-to-articulatory inversion, the articulatory kinematics of a test subject are estimated assuming that the training corpus does not include data from the test subject. The training corpus in subject-independent inversion (SII) is formed from acoustic and articulatory kinematics data, and the acoustic mismatch between the training and test subjects is then estimated by an acoustic normalization using acoustic data drawn from a large pool of speakers called the generic acoustic space (GAS). In this work, we focus on improving SII performance through better acoustic normalization and adaptation. We propose unsupervised and several supervised ways of clustering the GAS for acoustic normalization, and we perform an adaptation of the acoustic models of the GAS using the acoustic data of the training and test subjects in SII. It is found that SII performance improves significantly (~25% relative on average) over the subject-dependent inversion when the acoustic clusters in the GAS correspond to phonetic units (or states of 3-state phonetic HMMs) and when the acoustic model built on the GAS is adapted to the training and test subjects while optimizing the inversion criterion. © 2014 Elsevier B.V. All rights reserved. (en_US)
dc.title: Improved subject-independent acoustic-to-articulatory inversion (en_US)
dc.type: Article (en_US)
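
The abstract above describes acoustic normalization built on clusters of a generic acoustic space (GAS). The following is a minimal sketch of that general idea only, not the authors' implementation: it assumes 13-dimensional MFCC-like feature vectors, unsupervised k-means clustering of pooled multi-speaker frames, and a simple per-cluster z-score normalization of a subject's frames; the paper's supervised (phonetic-unit / HMM-state) clustering and model adaptation steps are omitted.

```python
# Sketch (illustrative assumptions, not the paper's method): cluster a pooled
# "generic acoustic space" and normalize a subject's frames per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for acoustic frames pooled from many speakers (the GAS).
gas_frames = rng.normal(size=(5000, 13))

# Unsupervised clustering of the GAS; the paper also studies supervised
# clusters tied to phonetic units or 3-state HMM states, not shown here.
n_clusters = 8
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(gas_frames)

# Per-cluster mean/std statistics of the GAS.
cluster_means = kmeans.cluster_centers_
cluster_stds = np.stack([
    gas_frames[kmeans.labels_ == k].std(axis=0) + 1e-8
    for k in range(n_clusters)
])

def normalize(frames: np.ndarray) -> np.ndarray:
    """Assign each frame to its nearest GAS cluster and z-normalize it
    with that cluster's statistics (one simple normalization choice)."""
    labels = kmeans.predict(frames)
    return (frames - cluster_means[labels]) / cluster_stds[labels]

# Stand-in for a test subject's acoustic frames.
subject_frames = rng.normal(loc=0.5, size=(200, 13))
normalized = normalize(subject_frames)
print(normalized.shape)  # (200, 13)
```

The normalized frames would then feed whatever inversion model maps acoustics to articulatory trajectories; the cluster count, feature type, and normalization rule here are placeholders chosen only to keep the sketch self-contained.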
Appears in Collections: 1. Journal Articles

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.