For voice conversion with non-parallel (asymmetric) corpora, an effective voice conversion method based on model adaptation is proposed. First, the source-speaker and target-speaker models are obtained separately from a background model via Maximum A Posteriori (MAP) adaptation. Then, the spectral-feature conversion function is trained from the mean vectors of the speaker models. This is further combined with the conventional INCA conversion method to form a model-adaptation-based INCA voice conversion method, which effectively converts the source speaker's spectral features into those of the target speaker. The proposed method is evaluated through objective tests and subjective listening experiments. Results show that, compared with the INCA method, the proposed method achieves lower cepstral distortion, higher perceptual speech quality, and stronger similarity to the target speaker, and its performance is closer to that of the conventional Gaussian Mixture Model (GMM) voice conversion method trained on a parallel corpus.
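The MAP adaptation step above can be sketched as follows. This is a minimal illustration of mean-only MAP adaptation of a diagonal-covariance GMM background model, not the authors' exact implementation; all function and parameter names (e.g. `relevance`) are assumptions chosen for clarity.

```python
import numpy as np

def map_adapt_means(ubm_means, ubm_covs, ubm_weights, X, relevance=16.0):
    """MAP-adapt the mean vectors of a diagonal-covariance GMM (the
    background model) toward adaptation data X of shape (n_frames, dim).
    Only the means are adapted, as is common in speaker adaptation."""
    K, D = ubm_means.shape
    # Per-component log-likelihood of each frame under a diagonal Gaussian.
    diff = X[:, None, :] - ubm_means[None, :, :]            # (N, K, D)
    log_det = np.sum(np.log(ubm_covs), axis=1)              # (K,)
    log_p = -0.5 * (np.sum(diff**2 / ubm_covs[None], axis=2)
                    + log_det + D * np.log(2.0 * np.pi))
    log_p += np.log(ubm_weights)
    # Responsibilities via a numerically stable softmax over components.
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)               # (N, K)
    # Sufficient statistics: soft counts and per-component data means.
    n_k = gamma.sum(axis=0)
    E_k = gamma.T @ X / np.maximum(n_k[:, None], 1e-10)
    # Interpolate between the data statistics and the background prior;
    # components that see little data stay close to the background model.
    alpha = (n_k / (n_k + relevance))[:, None]
    return alpha * E_k + (1.0 - alpha) * ubm_means
```

Applying this once with source-speaker frames and once with target-speaker frames yields the two speaker models whose mean vectors drive the conversion function, without requiring parallel utterances.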
To solve the problem of mismatched features across experimental databases, a key issue in the field of cross-corpus speech emotion recognition, an auditory attention model based on Chirplet is proposed for feature extraction. First, the auditory attention model is employed to detect variational emotion features and extract spectral features. Then, a selective attention mechanism model is proposed to extract the salient gist features, which show their relation to the expected performance in cross-corpus testing. Furthermore, Chirplet time-frequency atoms are introduced into the model. By forming a complete atom database, the Chirplet improves spectral feature extraction, including the amount of information captured. Samples from multiple databases have multi-component characteristics; the Chirplet thereby expands the scale of the feature vector in the time-frequency domain. Experimental results show that, compared with the traditional feature model, the proposed feature extraction approach with a prototypical classifier achieves significant improvement in cross-corpus speech emotion recognition. In addition, the proposed method is more robust to inconsistent sources of the training set and the testing set.
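The Chirplet atoms above can be illustrated with a short sketch. This is a generic Gaussian-windowed linear chirplet and a simple dictionary projection, shown only to make the time-frequency atom concrete; the parameterization (`t_c`, `f_0`, `chirp_rate`, `sigma`) and function names are assumptions, not the paper's exact atom database.

```python
import numpy as np

def chirplet_atom(n, fs, t_c, f_0, chirp_rate, sigma):
    """Sample a unit-energy Gaussian-windowed linear chirplet:
        g(t) = exp(-(t - t_c)^2 / (2 sigma^2))
               * exp(j 2*pi (f_0 (t - t_c) + 0.5 chirp_rate (t - t_c)^2))
    with n samples at sampling rate fs. The chirp rate lets the atom
    track frequency trajectories that a fixed-frequency Gabor atom misses."""
    t = np.arange(n) / fs - t_c
    envelope = np.exp(-t**2 / (2.0 * sigma**2))
    phase = 2.0 * np.pi * (f_0 * t + 0.5 * chirp_rate * t**2)
    atom = envelope * np.exp(1j * phase)
    return atom / np.linalg.norm(atom)

def chirplet_coefficients(signal, atoms):
    """Project a signal onto a dictionary of atoms (one atom per row):
    each inner product gives one entry of the time-frequency feature
    vector used for classification."""
    return atoms.conj() @ signal
```

Stacking atoms over a grid of center times, frequencies, and chirp rates forms the complete atom database; the resulting coefficient vector is the expanded time-frequency feature described above.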