Please use this identifier to cite or link to this item: https://idr.l1.nitk.ac.in/jspui/handle/123456789/6888
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ashwin, T.S. | -
dc.contributor.author | Saran, S. | -
dc.contributor.author | Ram Mohana Reddy, Guddeti | -
dc.date.accessioned | 2020-03-30T09:46:19Z | -
dc.date.available | 2020-03-30T09:46:19Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | 2016 IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics Engineering (UPCON 2016), 2017, pp. 416-421 | en_US
dc.identifier.uri | https://idr.nitk.ac.in/jspui/handle/123456789/6888 | -
dc.description.abstract | Video Affective Content Analysis is an active research area in computer vision. Live streaming video has become a major mode of communication over the past decade; hence, affective analysis of video content plays a vital role. Existing work on video affective content analysis focuses mainly on predicting the user's current state from either visual or acoustic features alone. In this paper, we propose a novel hybrid SVM-RBM classifier that recognizes emotion in both live streaming and stored video data using audio-visual features, thereby recognizing the user's mood in terms of categorical emotion descriptors. The proposed method is evaluated on human emotion recognition for live streaming data captured with devices such as the Microsoft Kinect and a web camera, and is further tested and validated on standard datasets such as HUMAINE and SAVEE. Emotion classification is performed on both acoustic and visual data using a Restricted Boltzmann Machine (RBM) and a Support Vector Machine (SVM). We observe that the hybrid SVM-RBM classifier outperforms both RBM and SVM alone on the annotated datasets. © 2016 IEEE. | en_US
dc.title | Video Affective Content Analysis based on multimodal features using a novel hybrid SVM-RBM classifier | en_US
dc.type | Book chapter | en_US
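
The abstract above describes an RBM-then-SVM pipeline over fused audio-visual features but the record includes no code. Below is a minimal, hypothetical sketch of such a hybrid pipeline using scikit-learn's BernoulliRBM (unsupervised feature learning) and SVC (classification); the data, feature dimension, number of emotion classes, and hyperparameters are placeholder assumptions, not values from the paper.

```python
# Hypothetical sketch of a hybrid RBM -> SVM pipeline for categorical
# emotion classification, loosely following the abstract's description.
# BernoulliRBM learns an unsupervised representation of the (scaled)
# fused audio-visual feature vectors; SVC then classifies the learned
# hidden-unit features into emotion categories.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Placeholder data: 200 samples of 64-dim fused audio-visual features
# and 6 categorical emotion labels. The actual experiments would use
# features extracted from HUMAINE/SAVEE clips or a live video stream.
rng = np.random.default_rng(0)
X = rng.random((200, 64))
y = rng.integers(0, 6, size=200)

pipeline = Pipeline([
    ("scale", MinMaxScaler()),            # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf", C=10.0)),   # classify RBM hidden features
])

pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```

Scaling inputs to [0, 1] before the RBM step matters because BernoulliRBM models probability-valued visible units; in the paper's setting, the placeholder matrix X would instead hold real audio-visual descriptors captured from a Kinect or web camera, or drawn from the annotated datasets.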
Appears in Collections: 2. Conference Papers

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.