Recognizing emotion in speech using a multilayer perceptron

dc.contributor.advisor: Thill, Serge
dc.contributor.author: Pelt, Feline van
dc.date.issued: 2020-01-31
dc.description.abstract: Recognizing human emotion from speech has become an active theme in the field of Human-Computer Interaction. The demand for Speech Emotion Recognition systems is growing given their numerous applications. This thesis presents and analyses an implementation of a model that recognizes emotions (neutral, calm, happy, sad, angry, fear, surprise and disgust) from human speech. The approach is based on a multilayer perceptron classifier. The designed model reached a mean accuracy of 59% when classifying eight emotions, 75% when classifying four emotions, and 77% when classifying the data according to its emotional valence. This performance compares favorably with more complex classifiers reported in the literature. In addition, an experiment was conducted in which participants classified recordings from the same dataset. From the obtained results it is concluded that the proposed model performs comparably to humans, who scored a mean accuracy of 60% on the classification task, while the model is more consistent.
dc.embargo.lift: 10000-01-01
dc.embargo.type: Permanent embargo
dc.identifier.uri: https://theses.ubn.ru.nl/handle/123456789/12605
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Bachelor Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.thesis.type: Bachelor
dc.title: Recognizing emotion in speech using a multilayer perceptron
Files
Original bundle
Name: 4568303 Pelt.pdf
Size: 1.24 MB
Format: Adobe Portable Document Format
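
The thesis itself is under permanent embargo, so the record above does not reveal which acoustic features, network architecture, or training setup were used. The sketch below is a minimal, hypothetical illustration of the kind of pipeline the abstract describes, assuming MFCC features extracted with librosa and scikit-learn's MLPClassifier; the actual feature set, hyperparameters, and dataset handling in the thesis may differ.

# Hypothetical sketch of an MLP-based speech emotion recognition pipeline.
# Feature choice (MFCCs via librosa) and classifier settings are assumptions,
# not taken from the embargoed thesis.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fear", "surprise", "disgust"]

def extract_features(path):
    """Load a recording and summarize it as a fixed-length MFCC vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
    return mfcc.mean(axis=1)  # average over time frames

def train_and_evaluate(paths, labels):
    """Train an MLP on MFCC summaries and report held-out accuracy."""
    X = np.array([extract_features(p) for p in paths])
    y = np.array(labels)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(300,), max_iter=500,
                        random_state=0)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

Averaging the MFCCs over time yields a fixed-length input regardless of recording duration, which is what a plain multilayer perceptron requires; a four-emotion or valence-based task, as evaluated in the abstract, would simply use a coarser label set in place of EMOTIONS.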