FACTS & FIGURES

JCR Impact Factor: 0.595
JCR 5-Year IF: 0.661
Issues per year: 4
Current issue: May 2018
Next issue: Aug 2018
Avg review time: 106 days


PUBLISHER

Stefan cel Mare
University of Suceava
Faculty of Electrical Engineering and
Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
doi: 10.4316/AECE


TRAFFIC STATS

1,967,506 unique visits
536,396 downloads
Since November 1, 2009




LATEST NEWS

2017-Jun-14
Thomson Reuters published the Journal Citations Report for 2016. The JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.595, and the JCR 5-Year Impact Factor is 0.661.

2017-Apr-04
We have confirmation that Advances in Electrical and Computer Engineering will be included in the EBSCO database.

2017-Jan-30
We have confirmation that Advances in Electrical and Computer Engineering will be included in the Gale database.

  2/2018 - 13

Combination of Long-Term and Short-Term Features for Age Identification from Voice

BUYUK, O., ARSLAN, M. L.

Download PDF (1,172 KB) | Citation | Downloads: 50 | Views: 104

Author keywords
feature extraction, Gaussian mixture model, neural networks, speech processing, support vector machines

References keywords
processing(20), speaker(19), speech(16), recognition(14), signal(13), language(12), deep(9), verification(8), neural(8), vector(7)
No common words between the references section and the paper title.

About this article
Date of Publication: 2018-05-31
Volume 18, Issue 2, Year 2018, On page(s): 101 - 108
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2018.02013
SCOPUS ID: 85047853422

Abstract
In this paper, we propose to use Gaussian mixture model (GMM) supervectors in a feed-forward deep neural network (DNN) for age identification from voice. The GMM is trained with short-term mel-frequency cepstral coefficients (MFCC). The proposed GMM/DNN method is compared with a feed-forward DNN and a recurrent neural network (RNN) in which the MFCC features are used directly. We also make a comparison with the classical GMM and GMM/support vector machine (SVM) methods. Baseline results are obtained with a set of long-term features commonly used for age identification in previous studies. A feed-forward DNN and an SVM are trained using the long-term features. All the systems are tested on a speech database of 228 female and 156 male speakers. We define three age classes for each gender: young, adult, and senior. In the experiments, the proposed GMM/DNN significantly outperforms all the other DNN types; only the GMM/SVM method is comparable to it. On the other hand, experimental results show that age identification performance improves significantly when the decisions of the short-term and long-term systems are combined. We obtain approximately 4% absolute improvement with the combination compared to the best standalone system.
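Two ideas in the abstract lend themselves to a short sketch: flattening the per-component mean vectors of an utterance-adapted GMM into a single fixed-length supervector (the input the feed-forward DNN consumes), and combining the short-term and long-term systems at the score level. The sketch below is illustrative only, not the authors' code; the class labels, toy dimensions, and the linear fusion weight are assumptions.

```python
# Illustrative sketch (not the authors' implementation) of two ideas
# from the abstract: GMM supervector construction and score-level
# fusion of a short-term (GMM/DNN) and a long-term system.
# All sizes, labels, and the fusion weight are assumptions.

CLASSES = ["young", "adult", "senior"]

def gmm_supervector(component_means):
    """Concatenate the per-component mean vectors of an
    utterance-adapted GMM into one fixed-length supervector
    (dimension = M components x D MFCC coefficients)."""
    return [x for mean in component_means for x in mean]

def fuse_and_decide(short_term, long_term, weight=0.5):
    """Weighted score-level fusion of two per-class score
    dictionaries, followed by an argmax decision."""
    fused = {c: weight * short_term[c] + (1.0 - weight) * long_term[c]
             for c in CLASSES}
    return max(fused, key=fused.get)

# Toy example: a 2-component GMM over 3-dimensional MFCC vectors
# yields a 6-dimensional supervector.
means = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
assert len(gmm_supervector(means)) == 6

# Fusion example: the two systems disagree; the combined score
# resolves the decision to "adult".
st = {"young": 0.2, "adult": 0.5, "senior": 0.3}  # short-term scores
lt = {"young": 0.1, "adult": 0.4, "senior": 0.5}  # long-term scores
print(fuse_and_decide(st, lt))  # prints "adult"
```

With the equal weight shown, the fused scores are 0.15 / 0.45 / 0.40, so the short-term system's preference wins; a weight of 0 would defer entirely to the long-term system.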


References | Cited By

[1] D. A. Reynolds, T. F. Quatieri, R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10 (1-3), pp. 19-41, 2000.
[CrossRef] [Web of Science Times Cited 1993] [SCOPUS Times Cited 2742]


[2] W. M. Campbell, D. E. Sturim, D. A. Reynolds, "Support vector machines using GMM supervectors for speaker verification," IEEE Signal Processing Letters, vol. 13 (5), pp. 308-311, 2006.
[CrossRef] [Web of Science Times Cited 459] [SCOPUS Times Cited 639]


[3] P. Kenny, P. Ouellet, N. Dehak, V. Gupta, P. Dumouchel, "A study of inter-speaker variability in speaker verification," IEEE Transactions on Audio Speech and Language Processing, vol. 16 (5), pp. 980-988, 2008.
[CrossRef] [Web of Science Times Cited 309] [SCOPUS Times Cited 410]


[4] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel , P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Transactions on Audio Speech and Language Processing, vol. 19 (4), pp. 788-798, 2011.
[CrossRef] [Web of Science Times Cited 1010] [SCOPUS Times Cited 1326]


[5] P. Kenny, "A small footprint i-vector extractor," in The Speaker and Language Recognition Workshop (ODYSSEY), Singapore, pp. 1-6, 25-28 June 2012.

[6] P. Kenny, "Bayesian speaker verification with heavy-tailed priors," in The Speaker and Language Recognition Workshop (ODYSSEY), Brno, Czech Republic, pp. 014, 28 June-1 July 2010.

[7] S. J. D. Prince, J. H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, pp. 1-8, 14-20 October 2007.
[CrossRef]


[8] G. E. Hinton, S. Osindero, Y. W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, pp. 1527-1554, 2006.
[CrossRef] [Web of Science Times Cited 3441] [SCOPUS Times Cited 4837]


[9] L. Deng, D. Yu, "Deep learning methods and applications," Foundations and Trends in Signal Processing, vol. 7 (3-4), pp. 197-387, 2013.
[CrossRef] [SCOPUS Times Cited 354]


[10] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, B. Kingsbury, "Deep neural networks for acoustic modeling in speech recognition," IEEE Signal Processing Magazine, vol. 29 (6), pp. 82-97, 2012.
[CrossRef] [Web of Science Times Cited 1751] [SCOPUS Times Cited 2386]


[11] F. Richardson, D. A. Reynolds, N. Dehak, "Deep neural network approaches to speaker and language recognition," IEEE Signal Processing Letters, vol. 22 (10), pp. 1671-1675, 2015.
[CrossRef] [Web of Science Times Cited 64] [SCOPUS Times Cited 90]


[12] H. Zen, A. Senior, M. Schuster, "Statistical parametric speech synthesis using deep neural networks," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 7962-7966, 26-31 May 2013.
[CrossRef] [SCOPUS Times Cited 236]


[13] I. J. Tashev, Z. Q. Wang, K. Godin, "Speech emotion recognition based on Gaussian mixture models and deep neural networks", in Information Theory and Applications Workshop (ITA), February 2017.
[CrossRef] [SCOPUS Times Cited 2]


[14] C. Zhang, C. Yu, J. H. L. Hansen, "An investigation of deep learning frameworks for speaker verification anti-spoofing," IEEE Journal of Selected Topics in Signal Processing, vol. 11 (4), pp. 684-694, 2017.
[CrossRef] [Web of Science Times Cited 6] [SCOPUS Times Cited 10]


[15] S. B. Davis, P. Mermelstein, "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 28 (4), pp. 357-366, 1980.
[CrossRef] [Web of Science Times Cited 1651] [SCOPUS Times Cited 2347]


[16] J. Makhoul, "Linear prediction: A tutorial review," Proceedings of the IEEE, vol. 63 (4), pp. 561-580, 1975.
[CrossRef] [Web of Science Times Cited 1868] [SCOPUS Times Cited 2288]


[17] F. Itakura, "Minimum prediction residual principle applied to speech recognition", IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 23 (1), pp. 67-72, 1975.
[CrossRef] [Web of Science Times Cited 578] [SCOPUS Times Cited 774]


[18] D. A. Reynolds, W. Andrews, J. Campbell, J. Navratil, B. Peskin, A. Adami, Q. Jin, D. Klusacek, J. Abramson, R. Mihaescu, J. Godfrey, D. Jones, B. Xiang, "The SuperSID project: Exploiting high-level information for high-accuracy speaker recognition," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, Canada, 6-10 April 2003.
[CrossRef]


[19] B. Yegnanarayana, S. R. M. Prasanna, J. M. Zachariah, C.S. Gupta, "Combining evidence from source, suprasegmental and spectral features for a fixed-text speaker verification system," IEEE Transactions on Audio Speech and Language Processing, vol. 13 (4), pp. 575-582, 2005.
[CrossRef] [Web of Science Times Cited 55] [SCOPUS Times Cited 77]


[20] F. Metze, J. Ajmera, R. Englert, U. Bub, F. Burkhardt, J. Stegmann, C. Muller, R. Huber, B. Andrassy, J. G. Bauer, B. Little, "Comparison of four approaches to age and gender recognition for telephone applications," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, Hawaii, USA, 16-20 April 2007.
[CrossRef] [SCOPUS Times Cited 77]


[21] H. Meinedo, I. Trancoso, "Age and gender classification using fusion of acoustic and prosodic features," in International Conference on Spoken Language Processing (INTERSPEECH), Makuhari, Japan, 26-30 September 2010.

[22] T. Bocklet, A. Maier, J. G. Bauer, F. Burkhardt, E. Noth, "Age and gender recognition for telephone applications based on GMM supervectors and support vector machines," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Las Vegas, USA, 31 March-4 April 2008.
[CrossRef] [Web of Science Times Cited 35] [SCOPUS Times Cited 60]


[23] B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. Mueller, S. Narayanan, "The Interspeech 2010 paralinguistic challenge," in International Conference on Spoken Language Processing (INTERSPEECH), Makuhari, Japan, 26-30 September 2010.

[24] B. E. Kingsbury, N. Morgan, S. Greenberg, "Robust speech recognition using the modulation spectrogram," Speech Communication, vol. 25, pp. 117-132, 1998.
[CrossRef] [Web of Science Times Cited 112] [SCOPUS Times Cited 153]


[25] M. Feld, F. Burkhardt, C. Müller, "Automatic speaker age and gender recognition in the car for tailoring dialog and mobile services," in International Conference on Spoken Language Processing (INTERSPEECH), Makuhari, Japan, 26-30 September 2010.

[26] M. Li, K.J. Han, S. Narayanan, "Automatic speaker age and gender recognition using acoustic and prosodic level information fusion," Computer Speech and Language, vol. 27 (1), pp. 151-167, 2013.
[CrossRef] [Web of Science Times Cited 62] [SCOPUS Times Cited 69]


[27] J. Grzybowska, S. Kacprzak, "Speaker age classification and regression using i-vectors," in International Conference on Spoken Language Processing (INTERSPEECH), San Francisco, California, USA, 8-12 September 2016.
[CrossRef] [Web of Science Times Cited 3] [SCOPUS Times Cited 2]


[28] Z. Qawaqneh, A. A. Mallouh, B. D. Barkana, "Deep neural network framework and transformed MFCCs for speaker's age and gender classification," Knowledge Based Systems, vol. 115, pp. 5-14, 2017.
[CrossRef] [Web of Science Times Cited 5] [SCOPUS Times Cited 7]


[29] F. Eyben, M. Wöllmer, B. Schuller, "Opensmile: the Munich versatile and fast open-source audio feature extractor," in ACM International Conference on Multimedia, Firenze, Italy, 25-29 October 2010.
[CrossRef] [SCOPUS Times Cited 674]


[30] B. E. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers," in ACM Workshop on Computational Learning Theory, Pittsburgh, USA, pp. 144-152, 27-29 July 1992.
[CrossRef]


[31] C. Cortes, V. Vapnik, "Support-vector networks," Machine Learning, vol. 20 (3), pp. 273-297, 1995.
[CrossRef] [Web of Science Times Cited 13225]


[32] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2(1), pp. 1-127, 2009.
[CrossRef] [SCOPUS Times Cited 2783]


[33] O. Buyuk, "Sentence-HMM state-based i-vector/PLDA modelling for improved performance in text dependent single utterance speaker verification," IET Signal Processing, vol. 10 (8), pp. 918-923, 2016.
[CrossRef] [Web of Science Times Cited 2] [SCOPUS Times Cited 4]


[34] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the USA, vol. 79 (8), pp. 2554-2558, April 1982.
[CrossRef] [Web of Science Times Cited 7911] [SCOPUS Times Cited 9028]


[35] S. Hochreiter, "Untersuchungen zu dynamischen neuronalen Netzen," Diploma thesis, TU Munich, 1991.

[36] S. Hochreiter, J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9 (8), pp. 1735-1780, November 1997.
[CrossRef] [Web of Science Times Cited 2520] [SCOPUS Times Cited 4267]


[37] Y. Linde, A. Buzo, R. Gray, "An algorithm for vector quantizer design," IEEE Transactions on Communications, vol. 28 (1), pp. 84-95, 1980.
[CrossRef] [Web of Science Times Cited 3638] [SCOPUS Times Cited 4868]


[38] R. Blouet, C. Mokbel, H. Mokbel, E. S. Soto, G. Chollet, H. Greige, "Becars: a free software for speaker verification," in The Speaker and Language Recognition Workshop (ODYSSEY), Toledo, Spain, pp. 145-148, 31 May - 4 June 2004.

[39] C. C. Chang, C. J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2 (3), pp. 27:1-27, 2011.
[CrossRef] [Web of Science Times Cited 11185] [SCOPUS Times Cited 14501]


[40] F. Chollet, "Keras," GitHub repository, 2015. https://github.com/fchollet/keras

[41] R. Al-Rfou, G. Alain, A. Almahairi, C. Angermueller, D. Bahdanau, N. Ballas, F. Bastien, J. Bayer, A. Belikov, A. Belopolsky, et al., "Theano: A Python framework for fast computation of mathematical expressions," arXiv e-prints, 2016.



References Weight

Web of Science® Citations for all references: 51,883 TCR
SCOPUS® Citations for all references: 55,011 TCR

Web of Science® Average Citations per reference: 1,235 ACR
SCOPUS® Average Citations per reference: 1,310 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
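The two indicators above reduce to a sum and a mean over the per-reference citation counts. A minimal sketch of that calculation, with made-up counts for illustration:

```python
# Sketch of the References Weight indicators defined above: TCR is the
# total of the citation counts over all references, ACR their average.
def references_weight(citation_counts):
    tcr = sum(citation_counts)        # Total Citations for References
    acr = tcr / len(citation_counts)  # Average Citations per Reference
    return tcr, acr

# Toy example with five hypothetical per-reference citation counts.
tcr, acr = references_weight([100, 50, 0, 250, 100])
print(tcr, acr)  # 500 100.0
```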

In 2010 we introduced, for the first time in scientific publishing, the term "References Weight" as a quantitative indication of the quality ...

Citations for references updated on 2018-06-23 02:19 in 238 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2018
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



