FACTS & FIGURES

JCR Impact Factor: 0.650
JCR 5-Year IF: 0.639
Issues per year: 4
Current issue: Aug 2019
Next issue: Nov 2019
Avg review time: 71 days


PUBLISHER

Stefan cel Mare University of Suceava
Faculty of Electrical Engineering and Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
DOI: 10.4316/AECE


TRAFFIC STATS

2,385,935 unique visits
618,907 downloads
Since November 1, 2009




MOST RECENT ISSUES

Volume 19 (2019)
    » Issue 3 / 2019
    » Issue 2 / 2019
    » Issue 1 / 2019

Volume 18 (2018)
    » Issue 4 / 2018
    » Issue 3 / 2018
    » Issue 2 / 2018
    » Issue 1 / 2018

Volume 17 (2017)
    » Issue 4 / 2017
    » Issue 3 / 2017
    » Issue 2 / 2017
    » Issue 1 / 2017

Volume 16 (2016)
    » Issue 4 / 2016
    » Issue 3 / 2016
    » Issue 2 / 2016
    » Issue 1 / 2016

View all issues




SAMPLE ARTICLES

Noise Minimization in CMOS Current Mode Circuits That Employ Differential Input Stage, YESIL, A., OZENLI, D., ARSLAN, E., KACAR, F.
Issue 2/2016


Maximum Entropy Principle in Image Restoration, PETROVICI, M.-A., DAMIAN, C., COLTUC, D.
Issue 2/2018


Comparison of Cepstral Normalization Techniques in Whispered Speech Recognition, GROZDIC, D., JOVICIC, S., SUMARAC PAVLOVIC, D., GALIC, J., MARKOVIC, B.
Issue 1/2017


Rule-Based Turkish Text Summarizer (RB-TTS), BIRANT, C. C., AKTAS, O.
Issue 3/2018


A Novel Approach for the Prediction of Conversion from Mild Cognitive Impairment to Alzheimer's disease using MRI Images, AYUB, A., FARHAN, S., FAHIEM, M. A., TAUSEEF, H.
Issue 2/2017


PAELib: A VHDL Library for Area and Power Dissipation Estimation of CMOS Logic Circuits, KIREI, B. S., CHEREJA, V.-I.-M., HINTEA, S., TOPA, M. D.
Issue 1/2019





LATEST NEWS

2019-Jun-20
Clarivate Analytics published the InCites Journal Citation Reports for 2018. The JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.650, and the JCR 5-Year Impact Factor is 0.639.

2018-Jun-27
Clarivate Analytics published the InCites Journal Citation Reports for 2017. The JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.699, and the JCR 5-Year Impact Factor is 0.674.

2018-May-31
Starting today, the minimum number of pages for a paper is 8, so all submitted papers should have 8, 10, or 12 pages. No exceptions will be accepted.

2017-Jun-14
Thomson Reuters published the Journal Citation Reports for 2016. The JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.595, and the JCR 5-Year Impact Factor is 0.661.


  2/2018 - 13

Combination of Long-Term and Short-Term Features for Age Identification from Voice

BUYUK, O., ARSLAN, M. L.
 
Author profiles: SCOPUS, IEEE Xplore, Web of Science

Download PDF (1,172 KB) | Downloads: 341 | Views: 2,464

Author keywords
feature extraction, Gaussian mixture model, neural networks, speech processing, support vector machines

References keywords
processing(20), speaker(19), speech(16), recognition(14), signal(13), language(12), deep(9), verification(8), neural(8), vector(7)
No common words between the references section and the paper title.
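
The "no common words" note above is a computed check: the page compares content words from the paper title against the most frequent keywords extracted from the reference list. A minimal sketch of such a check follows; the tokenization and stop-word list are assumptions for illustration, not the journal's actual code.

```python
# Hypothetical sketch of the title vs. reference-keywords overlap check
# reported above; tokenization and stop-word handling are assumptions.
import re

STOP_WORDS = {"a", "an", "and", "for", "from", "in", "of", "the", "to"}

def content_words(text):
    """Lowercase, split on non-letters, and drop stop words."""
    return {w for w in re.split(r"[^a-z]+", text.lower()) if w and w not in STOP_WORDS}

title = "Combination of Long-Term and Short-Term Features for Age Identification from Voice"
reference_keywords = {"processing", "speaker", "speech", "recognition", "signal",
                      "language", "deep", "verification", "neural", "vector"}

common = content_words(title) & reference_keywords
print(common if common else
      "No common words between the references section and the paper title.")
```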

About this article
Date of Publication: 2018-05-31
Volume 18, Issue 2, Year 2018, On page(s): 101 - 108
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2018.02013
Web of Science Accession Number: 000434245000013
SCOPUS ID: 85047853422
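
The DOI above can be resolved programmatically; doi.org issues a redirect to the registered landing page. A minimal sketch (standard DOI resolution, not a journal-specific API):

```python
# Minimal sketch: resolve the article's DOI via the doi.org redirect service.
import urllib.request

DOI = "10.4316/AECE.2018.02013"
with urllib.request.urlopen(f"https://doi.org/{DOI}") as resp:
    print(resp.url)  # final URL after redirects, i.e. the publisher's landing page
```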

Abstract
In this paper, we propose to use Gaussian mixture model (GMM) supervectors in a feed-forward deep neural network (DNN) for age identification from voice. The GMM is trained with short-term mel-frequency cepstral coefficients (MFCC). The proposed GMM/DNN method is compared with a feed-forward DNN and a recurrent neural network (RNN) in which the MFCC features are used directly. We also make a comparison with the classical GMM and GMM/support vector machine (SVM) methods. Baseline results are obtained with a set of long-term features that are commonly used for age identification in previous studies. A feed-forward DNN and an SVM are trained using the long-term features. All the systems are tested on a speech database of 228 female and 156 male speakers. We define three age classes for each gender: young, adult, and senior. In the experiments, the proposed GMM/DNN significantly outperforms all the other DNN types; its performance is matched only by the GMM/SVM method. On the other hand, experimental results show that age identification performance improves significantly when the decisions of the short-term and long-term systems are combined. We obtain approximately 4% absolute improvement with the combination compared to the best standalone system.
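
As a rough illustration of the GMM-supervector/DNN pipeline the abstract describes, a sketch is given below. It is not the authors' implementation: the MFCC dimension, number of mixtures, MAP relevance factor, and network shape are assumed placeholders, and random arrays stand in for real features.

```python
# Hedged sketch of the GMM-supervector + feed-forward DNN idea from the abstract.
# Not the authors' implementation; all sizes and hyperparameters are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture
from tensorflow import keras

N_MFCC, N_MIX, N_CLASSES = 20, 64, 3            # assumed dims; 3 classes: young/adult/senior

def gmm_supervector(ubm, frames, relevance=16.0):
    """MAP-adapt the UBM means to one utterance and stack them into a supervector."""
    post = ubm.predict_proba(frames)            # frame-level mixture posteriors (T x K)
    n_k = post.sum(axis=0)                      # soft counts per mixture
    f_k = post.T @ frames                       # first-order statistics (K x D)
    alpha = (n_k / (n_k + relevance))[:, None]
    adapted = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) + (1 - alpha) * ubm.means_
    return adapted.ravel()                      # supervector of length K * D

# UBM trained on pooled short-term MFCC frames (random stand-ins here)
train_frames = np.random.randn(5000, N_MFCC)
ubm = GaussianMixture(n_components=N_MIX, covariance_type="diag").fit(train_frames)

# One supervector per utterance, then a small feed-forward DNN classifier
utterances = [np.random.randn(300, N_MFCC) for _ in range(32)]
X = np.stack([gmm_supervector(ubm, u) for u in utterances])
y = keras.utils.to_categorical(np.random.randint(N_CLASSES, size=len(X)), N_CLASSES)

dnn = keras.Sequential([
    keras.layers.Input(shape=(N_MIX * N_MFCC,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(N_CLASSES, activation="softmax"),
])
dnn.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
dnn.fit(X, y, epochs=5, batch_size=8, verbose=0)
```

The score-level combination reported in the abstract would then amount to fusing the class posteriors of this short-term system with those of a classifier trained on the long-term features.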


References

[1] D. A. Reynolds, T. F. Quatieri, R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10 (1-3), pp. 19-41, 2000.
[CrossRef] [Web of Science Times Cited 2231] [SCOPUS Times Cited 3137]


[2] W. M. Campbell, D. E. Sturim, D. A. Reynolds, "Support vector machines using GMM supervectors for speaker verification," IEEE Signal Processing Letters, vol. 13 (5), pp. 308-311, 2006.
[CrossRef] [Web of Science Times Cited 537] [SCOPUS Times Cited 740]


[3] P. Kenny, P. Ouellet, N. Dehak, V. Gupta, P. Dumouchel, "A study of inter-speaker variability in speaker verification," IEEE Transactions on Audio Speech and Language Processing, vol. 16 (5), pp. 980-988, 2008.
[CrossRef] [Web of Science Times Cited 355] [SCOPUS Times Cited 481]


[4] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel , P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Transactions on Audio Speech and Language Processing, vol. 19 (4), pp. 788-798, 2011.
[CrossRef] [Web of Science Times Cited 1560] [SCOPUS Times Cited 2009]


[5] P. Kenny "A small footprint i-vector extractor," in The Speaker and Language Recognition Workshop (ODYSSEY), Singapore, pp. 1-6, 25-28 June 2012.

[6] P. Kenny, "Bayesian speaker verification with heavy-tailed priors," in The Speaker and Language Recognition Workshop (ODYSSEY), Brno, Czech Republic, pp. 014, 28 June-1 July 2010.

[7] S. J. D. Prince, J. H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in IEEE International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, pp. 1-8, 14-20 October 2007.
[CrossRef]


[8] G. E. Hinton, S. Osindero, Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, pp. 1527-1554, 2006.
[CrossRef] [Web of Science Times Cited 5309] [SCOPUS Times Cited 7160]


[9] L. Deng, D. Yu, "Deep learning methods and applications," Foundations and Trends in Signal Processing, vol. 7 (3-4), pp. 197-387, 2013.
[CrossRef] [SCOPUS Times Cited 821]


[10] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, B. Kingsbury, "Deep neural networks for acoustic modeling in speech recognition," IEEE Signal Processing Magazine, vol. 29 (6), pp. 82-97, 2012.
[CrossRef] [Web of Science Times Cited 3092] [SCOPUS Times Cited 4132]


[11] F. Richardson, D. A. Reynolds, N. Dehak, "Deep neural network approaches to speaker and language recognition," IEEE Signal Processing Letters, vol. 22 (10), pp. 1671-1675, 2015.
[CrossRef] [Web of Science Times Cited 130] [SCOPUS Times Cited 179]


[12] H. Zen, A. Senior, M. Schuster, "Statistical parametric speech synthesis using deep neural networks," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 7962-7966, 26-31 May 2013.
[CrossRef] [SCOPUS Times Cited 384]


[13] I. J. Tashev, Z. Q. Wang, K. Godin, "Speech emotion recognition based on Gaussian mixture models and deep neural networks", in Information Theory and Applications Workshop (ITA), February 2017.
[CrossRef] [SCOPUS Times Cited 10]


[14] C. Zhang, C. Yu, J. H. L. Hansen, "An investigation of deep learning frameworks for speaker verification anti-spoofing," IEEE Journal of Selected Topics in Signal Processing, vol. 11 (4), pp. 684-694, 2017.
[CrossRef] [Web of Science Times Cited 21] [SCOPUS Times Cited 36]


[15] S. B. Davis, P. Mermelstein, "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 28 (4), pp. 357-366, 1980.
[CrossRef] [Web of Science Times Cited 1955] [SCOPUS Times Cited 2734]


[16] J. Makhoul, "Linear prediction: A tutorial review," Proceedings of the IEEE, vol. 63 (4), pp. 561-580, 1975.
[CrossRef] [Web of Science Times Cited 1994] [SCOPUS Times Cited 2502]


[17] F. Itakura, "Minimum prediction residual principle applied to speech recognition", IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 23 (1), pp. 67-72, 1975.
[CrossRef] [Web of Science Times Cited 641] [SCOPUS Times Cited 897]


[18] D. A. Reynolds, W. Andrews, J. Campbell, J. Navratil, B. Peskin, A. Adami, Q. Jin, D. Klusacek, J. Abramson, R. Mihaescu, J. Godfrey, D. Jones, B. Xiang, "The SuperSID project: Exploiting high-level information for high-accuracy speaker recognition," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, Canada, 6-10 April 2003.
[CrossRef]


[19] B. Yegnanarayana, S. R. M. Prasanna, J. M. Zachariah, C.S. Gupta, "Combining evidence from source, suprasegmental and spectral features for a fixed-text speaker verification system," IEEE Transactions on Audio Speech and Language Processing, vol. 13 (4), pp. 575-582, 2005.
[CrossRef] [Web of Science Times Cited 66] [SCOPUS Times Cited 87]


[20] F. Metze, J. Ajmera, R. Englert, U. Bub, F. Burkhardt, J. Stegmann, C. Muller, R. Huber, B. Andrassy, J. G. Bauer, B. Little, "Comparison of four approaches to age and gender recognition for telephone applications," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, Hawaii, USA, 16-20 April 2007.
[CrossRef] [SCOPUS Times Cited 93]


[21] H. Meinedo, I. Trancoso, "Age and gender classification using fusion of acoustic and prosodic features," in International Conference on Spoken Language Processing (INTERSPEECH), Makuhari, Japan, 26-30 September 2010.

[22] T. Bocklet, A. Maier, J. G. Bauer, F. Burkhardt, E. Noth, "Age and gender recognition for telephone applications based on GMM supervectors and support vector machines," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Las Vegas, USA, 31 March-4 April 2008.
[CrossRef] [Web of Science Times Cited 50] [SCOPUS Times Cited 77]


[23] B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. Mueller, S. Narayanan, "The Interspeech 2010 paralinguistic challenge," in International Conference on Spoken Language Processing (INTERSPEECH), Makuhari, Japan, 26-30 September 2010.

[24] B. E. Kingsbury, N. Morgan, S. Greenberg, "Robust speech recognition using the modulation spectrogram," Speech Communication, vol. 25, pp. 117-132, 1998.
[CrossRef] [Web of Science Times Cited 124] [SCOPUS Times Cited 168]


[25] M. Feld, F. Burkhardt, C. Müller, "Automatic speaker age and gender recognition in the car for tailoring dialog and mobile services," in International Conference on Spoken Language Processing (INTERSPEECH), Makuhari, Japan, 26-30 September 2010.

[26] M. Li, K.J. Han, S. Narayanan, "Automatic speaker age and gender recognition using acoustic and prosodic level information fusion," Computer Speech and Language, vol. 27 (1), pp. 151-167, 2013.
[CrossRef] [Web of Science Times Cited 84] [SCOPUS Times Cited 99]


[27] J. Grzybowska, S. Kacprzak, "Speaker age classification and regression using i-vectors," in International Conference on Spoken Language Processing (INTERSPEECH), San Francisco, California, USA, 8-12 September 2016.
[CrossRef] [Web of Science Times Cited 6] [SCOPUS Times Cited 10]


[28] Z. Qawaqneh, A. A. Mallouh, B. D. Barkana, "Deep neural network framework and transformed MFCCs for speaker's age and gender classification," Knowledge Based Systems, vol. 115, pp. 5-14, 2017.
[CrossRef] [Web of Science Times Cited 18] [SCOPUS Times Cited 30]


[29] F. Eyben, M. Wöllmer, B. Schuller, "Opensmile: the Munich versatile and fast open-source audio feature extractor," in ACM International Conference on Multimedia, Firenze, Italy, 25-29 October 2010.
[CrossRef] [SCOPUS Times Cited 931]


[30] B. E. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers," in ACM Workshop on Computational Learning Theory, Pittsburgh, USA, pp. 144-152, 27-29 July 1992.
[CrossRef]


[31] C. Cortes, V. Vapnik, "Support-vector networks," Machine Learning, vol. 20 (3), pp. 273-297, 1995.
[CrossRef] [Web of Science Times Cited 17458]


[32] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2(1), pp. 1-127, 2009.
[CrossRef] [SCOPUS Times Cited 4060]


[33] O. Buyuk, "Sentence-HMM state-based i-vector/PLDA modelling for improved performance in text dependent single utterance speaker verification," IET Signal Processing, vol. 10 (8), pp. 918-923, 2016.
[CrossRef] [Web of Science Times Cited 6] [SCOPUS Times Cited 8]


[34] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the USA, vol. 79 (8), pp. 2554-2558, April 1982.
[CrossRef] [Web of Science Times Cited 8523] [SCOPUS Times Cited 9881]


[35] S. Hochreiter, "Untersuchungen zu dynamischen neuronalen Netzen," Diploma thesis 1991, TU Munich.

[36] S. Hochreiter, J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9 (8), pp. 1735-1780, November 1997.
[CrossRef] [Web of Science Times Cited 8061] [SCOPUS Times Cited 12500]


[37] Y. Linde, A. Buzo, R. Gray, "An algorithm for vector quantizer design," IEEE Transactions on Communications, vol. 28 (1), pp. 84-95, 1980.
[CrossRef] [Web of Science Times Cited 3803] [SCOPUS Times Cited 5154]


[38] R. Blouet, C. Mokbel, H. Mokbel, E. S. Soto, G. Chollet, H. Greige, "Becars: a free software for speaker verification," in The Speaker and Language Recognition Workshop (ODYSSEY), Toledo, Spain, pp. 145-148, 31 May - 4 June 2004.

[39] C. C. Chang, C. J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2 (3), pp. 27:1-27:27, 2011.
[CrossRef] [Web of Science Times Cited 15305] [SCOPUS Times Cited 18393]


[40] F. Chollet, Keras, GitHub repository, 2015. https://github.com/fchollet/keras

[41] R. Al-Rfou, G. Alain, A. Almahairi, C. Angermueller, D. Bahdanau, N. Ballas, F. Bastien, J. Bayer, A. Belikov, A. Belopolsky, et al., "Theano: A Python framework for fast computation of mathematical expressions," arXiv e-prints, 2016.



References Weight

Web of Science® Citations for all references: 71,329 TCR
SCOPUS® Citations for all references: 76,713 TCR

Web of Science® Average Citations per reference: 1,698 ACR
SCOPUS® Average Citations per reference: 1,827 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
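
By that definition, ACR is simply TCR divided by the number of references counted by the crawler; a small check against the figures above (the reference count of 42 is inferred from the reported ratios, not stated on the page):

```python
# Sanity check of the ACR figures above; the reference count (42) is an inference.
def average_citations_per_reference(tcr, n_references):
    """ACR = Total Citations for References / number of references."""
    return tcr / n_references

print(average_citations_per_reference(71_329, 42))  # ~1698.3 -> reported as 1,698 (Web of Science)
print(average_citations_per_reference(76_713, 42))  # ~1826.5 -> reported as 1,827 (SCOPUS)
```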

In 2010 we introduced, for the first time in scientific publishing, the term "References Weight" as a quantitative indication of the quality ...

Citations for references updated on 2019-11-16 03:39 in 216 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2019
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



