Please use this identifier to cite or link to this item: https://idr.l2.nitk.ac.in/jspui/handle/123456789/6966
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Radhakrishnan, V.
dc.contributor.author: Joseph, C.
dc.contributor.author: Chandrasekaran, K.
dc.date.accessioned: 2020-03-30T09:46:31Z
dc.date.available: 2020-03-30T09:46:31Z
dc.date.issued: 2018
dc.identifier.citation: Procedia Computer Science, 2018, Vol. 143, pp. 626-634 [en_US]
dc.identifier.uri: http://idr.nitk.ac.in/jspui/handle/123456789/6966
dc.description.abstract: Sentiment analysis on video is a largely unexplored field of research in which the emotion and sentiment of the speaker are extracted by processing the frames, audio, and text obtained from the video. In recent times, sentiment analysis from naturalistic audio has become an emerging field of research. This is typically done by performing automatic speech recognition on the audio, followed by extracting the sentiment exhibited by the speaker. On the other hand, techniques for extracting sentiment from text are well developed, and tech giants have already optimized these methods to process large volumes of customer reviews, feedback, and reactions. In this paper, a new model for sentiment analysis from audio is proposed: a hybrid of a Keyword Spotting System (KWS) and a Maximum Entropy (ME) classifier. This model is developed with the aim of outperforming other conventional classifiers and providing a single integrated system for audio and text processing. In addition, a web application for dynamic processing of YouTube videos is described. The WebApp provides an index-based result for each phrase detected in the video. Often, the emotion of the viewer of a video corresponds to its content. In this regard, it is useful to map these emotions to the text transcript of the video and assign them a suitable weight while predicting the sentiment that the speaker exhibits. This paper describes such an application, developed to analyze facial expressions using the Affdex API. Thus, using the combined statistics from all three aforementioned components, a robust and portable system for emotion detection is obtained that provides accurate predictions and can be deployed on any modern system with minimal configuration changes. © 2018 The Authors. Published by Elsevier B.V. [en_US]
dc.title: Sentiment extraction from naturalistic video [en_US]
dc.type: Book chapter [en_US]
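
The abstract above describes combining statistics from three components (the audio KWS/ME hybrid, text sentiment, and Affdex-based facial-expression analysis), with viewer emotion mapped onto the transcript and given a suitable weight. The following is a minimal sketch, not taken from the paper, of how such a weighted fusion of per-modality scores could look; the modality names, weights, and thresholds are illustrative assumptions.

```python
# Illustrative sketch only: the paper does not publish its fusion rule, so the
# modality names, weights, and label thresholds below are assumptions.

from typing import Dict, Optional

def fuse_sentiment(scores: Dict[str, float],
                   weights: Optional[Dict[str, float]] = None) -> str:
    """Combine per-modality sentiment scores in [-1.0, 1.0] into one label.

    Scores might come from an audio KWS/ME hybrid, a text classifier, and
    facial-expression analysis of viewers (e.g. Affdex-style metrics).
    """
    if weights is None:
        # Assumed weights; the abstract only says viewer emotion is given
        # "a suitable weight", not the actual values.
        weights = {"audio": 0.4, "text": 0.4, "face": 0.2}
    total = sum(weights.values())
    fused = sum(weights[m] * scores.get(m, 0.0) for m in weights) / total
    if fused > 0.1:
        return "positive"
    if fused < -0.1:
        return "negative"
    return "neutral"

# Example usage with made-up scores for one detected phrase:
print(fuse_sentiment({"audio": 0.6, "text": 0.3, "face": -0.1}))  # -> "positive"
```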
Appears in Collections: 2. Conference Papers

Files in This Item:
There are no files associated with this item.

