FACTS & FIGURES

JCR Impact Factor: 1.102
JCR 5-Year IF: 0.734
Issues per year: 4
Current issue: May 2020
Next issue: Aug 2020
Avg review time: 74 days


PUBLISHER

Stefan cel Mare
University of Suceava
Faculty of Electrical Engineering and
Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
doi: 10.4316/AECE


TRAFFIC STATS

2,636,807 unique visits
670,709 downloads
Since November 1, 2009









MOST RECENT ISSUES

Volume 20 (2020)
    » Issue 2 / 2020
    » Issue 1 / 2020

Volume 19 (2019)
    » Issue 4 / 2019
    » Issue 3 / 2019
    » Issue 2 / 2019
    » Issue 1 / 2019

Volume 18 (2018)
    » Issue 4 / 2018
    » Issue 3 / 2018
    » Issue 2 / 2018
    » Issue 1 / 2018

Volume 17 (2017)
    » Issue 4 / 2017
    » Issue 3 / 2017
    » Issue 2 / 2017
    » Issue 1 / 2017

Volume 16 (2016)
    » Issue 4 / 2016
    » Issue 3 / 2016
    » Issue 2 / 2016
    » Issue 1 / 2016

View all issues

LATEST NEWS

2020-Jun-29
Clarivate Analytics published the InCites Journal Citation Reports for 2019. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 1.102 (1.023 without journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.734.

2020-Jun-11
Starting on the 15th of June 2020 we will introduce a new policy for reviewers. Reviewers who provide timely and substantial comments will receive a discount voucher entitling them to an APC reduction. Vouchers (worth 25 EUR or 50 EUR, depending on the review quality) will be assigned to reviewers after the final decision on the reviewed paper is given. Vouchers issued to specific individuals are not transferable.

2019-Dec-16
Starting on the 15th of December 2019, all paper authors are required to enter their SCOPUS IDs. You may use the free SCOPUS ID lookup form to find yours if you do not remember it.

2019-Jun-20
Clarivate Analytics published the InCites Journal Citation Reports for 2018. The JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.650, and the JCR 5-Year Impact Factor is 0.639.

2018-May-31
Starting today, the minimum number of pages for a paper is 8, so all submitted papers should have 8, 10 or 12 pages. No exceptions will be accepted.

Read More »


    
 

  3/2012 - 5

 HIGH-IMPACT PAPER 

The Analysis of the FCM and WKNN Algorithms Performance for the Emotional Corpus SROL

ZBANCIOC, M., FERARU, S. M.

Download PDF (875 KB) | Citation | Downloads: 435 | Views: 3,143

Author keywords
emotional speech database, FCM and WKNN algorithm, recurrent coefficient, statistical parameters

References keywords
speech(20), emotion(15), recognition(11), systems(7), fuzzy(7), features(7), classification(7), teodorescu(6), emotional(5), communication(5)

About this article
Date of Publication: 2012-08-31
Volume 12, Issue 3, Year 2012, On page(s): 33 - 38
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2012.03005
Web of Science Accession Number: 000308290500005
SCOPUS ID: 84865856327

Abstract
The purpose of this research is to find a set of relevant parameters for emotion recognition. In this study we used recordings from the SROL emotion database, which is part of the project 'Voiced Sounds of Romanian Language'. The database was validated by human listeners; the recognition accuracy of the correctly expressed emotions (neutral tone, joy, fury and sadness) over the entire database was 63.97%. For the classification of the input data we used the Recurrent Fuzzy C-Means (FCM) and WKNN algorithms. We compared the cluster positions with the statistical parameters extracted from vowels in order to establish the relevance of each parameter for emotion recognition. For the parameters extracted from each vowel (mean, median and standard deviation of the fundamental frequency F0 and of the F1-F4 formants, jitter, and shimmer), the FCM algorithm gave satisfactory results for phoneme recognition, but not for emotion recognition. For this reason we used the WKNN algorithm for classification, which yielded errors of around 20-30%, compared with classification errors of around 40-50% for the FCM algorithm.
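
The abstract names WKNN (weighted k-nearest neighbors) as the classifier that outperformed FCM on these features. As an illustration only, not the authors' implementation, the following minimal distance-weighted k-NN sketch in Python assumes inverse-distance vote weights and a 17-dimensional feature vector (mean/median/std of F0 and F1-F4, plus jitter and shimmer), based on the parameters listed in the abstract.

```python
import numpy as np

def wknn_predict(train_X, train_y, x, k=5, eps=1e-9):
    """Distance-weighted k-NN: each of the k nearest neighbors
    votes for its label with weight 1/distance."""
    dists = np.linalg.norm(train_X - x, axis=1)  # Euclidean distance to every training sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest samples
    votes = {}
    for i in nearest:
        w = 1.0 / (dists[i] + eps)               # closer neighbors carry more weight
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + w
    return max(votes, key=votes.get)             # label with the largest total weight

# Toy usage: random data stands in for the per-vowel statistics
# (3 stats x F0..F4 = 15 values, plus jitter and shimmer = 17 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 17))
y = rng.integers(0, 4, size=200)                 # 0=neutral, 1=joy, 2=fury, 3=sadness
print(wknn_predict(X, y, rng.normal(size=17), k=7))
```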


References

[1] K. R. Scherer, "Vocal communication of emotion: A review of research paradigms", Speech Communication, vol. 40, pp. 227-256, 2003.
[CrossRef] [Web of Science Times Cited 708] [SCOPUS Times Cited 904]


[2] W. Hess, "Pitch determination of speech signals: algorithms and devices", Springer-Verlag, Berlin, Germany 1983.
[CrossRef]


[3] S. McGilloway, R. Cowie, E. Douglas-Cowie, S. Gielen, M. Westerdijk, S. Stroeve, "Approaching automatic recognition of emotion from voice: a rough benchmark", in Proc. of the ISCA Workshop on Speech and Emotion, Belfast, Northern Ireland, pp. 200-205, 2000.

[4] G. Klasmeyer, "An automatic description tool for time contours and long-term average voice features in large emotional speech databases", in Proc. of the ISCA Workshop on Speech and Emotion, Belfast, Northern Ireland, pp. 66-71, 2000.

[5] M. Slaney, G. McRoberts, "Baby ears: a recognition system for affective vocalization", in Proc. of ICASSP, 1998.

[6] S. Steidl, M. Levit, A. Batliner, E. Noth, H. Niemann, ""Of all things the measure is man": automatic classification of emotions and inter-labeler consistency", in Proc. of ICASSP, pp. 317-320, 2005.

[7] R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, 2nd edition. New York, John Wiley & Sons Inc., 2001.

[8] F. Dellaert, Th. Polzin, A. Waibel, "Recognizing emotion in speech", in Proc. of ICSLP, vol. 3, pp. 1970-1973, 1996.

[9] Xi Li, Jidong Tao, Michael T. Johnson, J. Soltis, A. Savage, Kirsten M. Leong, John D. Newman, "Stress and emotion classification using jitter and shimmer features", in Proc. of ICASSP, pp. 1081-1084, 2007.

[10] N. Amir, "Classifying emotions in speech: a comparison of methods", in Proc. of the 7th European Conference on Speech Communication and Technology, Aalborg, Denmark, pp. 127-130, 2001.

[11] H. N. Teodorescu, M. Zbancioc, M. Feraru, "The analysis of the vowel triangle variation for Romanian language depending on emotional states", in Proc. of ISSCS Conference, Romania, ISBN 978-1-4577-0201-3, pp. 331-334, 2011.

[12] H. N. Teodorescu, M. Zbancioc, M. Feraru, "Statistical characteristics of the formants of the Romanian vowels in emotional states", in Proc. of the Int. Conf. on Speech Technology and Human-Computer Dialogue, Romania, ISBN 978-1-4577-0439-0, pp. 13-22, 2011.

[13] H. N. Teodorescu, "Recurrent Rules-Based Fuzzy Decision-Making and Control", in Proc. of WSEAS Conference, Udine, Italy, 2004.

[14] H. N. Teodorescu, "Fuzzy systems with recurrent rules in population and medical models", in Proc. of the American Conference on Applied Mathematics, World Scientific and Engineering Academy and Society, Stevens Point, Wisconsin, USA, ISBN: 978-960-6766-47-3, pp. 343-349, 2008.

[15] H. N. Teodorescu, "Fuzzy Systems with Recurrent Rules. A new type of fuzzy systems and applications", Intelligent Systems, pp. 157-166, Editors: H. N. Teodorescu, Iași, Romania, Ed. Performantica, ISBN 973-7994-85-X, 2004.

[16] M. Zbancioc, "Recurrent fuzzy rules (Teodorescu's fuzzy systems) in economic process modeling", in Proc. of the 15th International Conference on Control Systems and Computer Science, Bucharest, Romania, 2005.

[17] C. M. Lee, S. Narayanan, "Emotion recognition using a data-driven fuzzy inference system", in Proc. of Eurospeech, Geneva, pp. 157-160, 2003.

[18] M. Grimm, K. Kroschel, "Rule-based emotion classification using acoustic features", in Proc. Int. Conf. on Telemedicine and Multimedia Communication, 2005.

[19] D. Ververidis, C. Kotropoulos, I. Pitas, "Automatic emotional speech classification", in Proc. of Internat. Conf. on Acoustics, Speech and Signal Processing, Montreal, vol. 1, pp. 593-596, 2004.

[20] Valery A. Petrushin, "Emotion recognition in speech signal: experimental study, development, and application", in Proc. of the Sixth International Conference on Spoken Language Processing (ICSLP), 2000.

[21] Dan-Ning Jiang, Lian-Hong Cai, "Speech emotion classification with the combination of statistic features and temporal features", in Proc. of the IEEE International Conference on Multimedia and Expo (ICME), pp. 1967-1970, 2004.
[CrossRef] [Web of Science Times Cited 26]


[22] Aishah A. M. Razak, Mohd Hafizuddin Mohd Yusof, Ryoichi Komiya, "Towards automatic recognition of emotion in speech", pp. 548-551.

[23] Kuan-Chieh Huang, Yau-Hwang Kuo, "A novel objective function to optimize neural networks for emotion recognition from speech patterns", in Proc. of the Second World Congress on Nature and Biologically Inspired Computing, Kitakyushu, Fukuoka, Japan, pp. 413-417, 2010.

[24] Liqin Fu, Changjiang Wang, Yongmei Zhang, "A study on influence of gender on speech emotion classification", in Proc. of 2nd Int. Conference on Signal Processing Systems, pp. 534-537, 2010.
[CrossRef] [SCOPUS Times Cited 7]


[25] Ashish B. Ingale, D. S. Chaudhari, "Speech Emotion Recognition", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, vol. 2, no. 1, 2012.

[26] M. E. Ayadi, M. S. Kamel, F. Karray, "Survey on speech emotion recognition: features, classification schemes, and databases", Pattern Recognition, vol. 44, pp. 572-587, 2011.
[CrossRef] [Web of Science Times Cited 671] [SCOPUS Times Cited 904]


[27] D. Ververidis, C. Kotropoulos, "Emotional speech recognition: resources, features and methods", Elsevier Speech Communication, vol. 48, no. 9, pp. 1162-1181, 2006.
[CrossRef] [Web of Science Times Cited 443] [SCOPUS Times Cited 579]


References Weight

Web of Science® Citations for all references: 1,848 TCR
SCOPUS® Citations for all references: 2,394 TCR

Web of Science® Average Citations per reference: 68 ACR
SCOPUS® Average Citations per reference: 89 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
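
For illustration, the ACR values above follow directly from the TCR totals divided by this paper's 27 references; a minimal check in Python, assuming the page rounds to the nearest integer:

```python
# ACR = TCR / number of references; round-to-nearest is assumed.
n_refs = 27
wos_tcr, scopus_tcr = 1848, 2394
print(round(wos_tcr / n_refs))     # 68 -> matches the Web of Science ACR
print(round(scopus_tcr / n_refs))  # 89 -> matches the SCOPUS ACR
```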

In 2010 we introduced, for the first time in scientific publishing, the term "References Weight" as a quantitative indication of the quality ... Read more

Citations for references updated on 2020-08-12 21:49 in 44 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2020
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



