Please use this identifier to cite or link to this item:
https://idr.l2.nitk.ac.in/jspui/handle/123456789/11357
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tripathi, A. | - |
dc.contributor.author | Ashwin, T.S. | - |
dc.contributor.author | Ram Mohana Reddy, Guddeti | - |
dc.date.accessioned | 2020-03-31T08:31:11Z | - |
dc.date.available | 2020-03-31T08:31:11Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE Access, 2019, Vol.7, pp.51185-51200 | en_US |
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/11357 | - |
dc.description.abstract | With the exponential growth of machine intelligence, the world has witnessed promising solutions to personalized content recommendation. The ability of interactive learning agents to make optimal decisions in dynamic environments has been proven and is well conceptualized by reinforcement learning (RL). The learning characteristics of deep bidirectional recurrent neural networks (DBRNN) in both positive and negative time directions have shown exceptional performance in generative models for sequential data in supervised learning tasks. In this paper, we harness the potential of these two techniques and propose EmoWare (emotion-aware), a personalized, emotionally intelligent video recommendation engine employing a novel context-aware collaborative filtering approach, in which the intensity of users' spontaneous non-verbal emotional responses toward recommended videos is captured through interaction and facial expression analysis, for decision-making and video corpus evolution with real-time feedback streams. To account for users' multidimensional nature in the formulation of optimal policies, RL scenarios are formulated using on-policy (SARSA) and off-policy (Q-learning) temporal-difference learning techniques, which are used to train the DBRNN to learn contextual patterns and to generate new video sequences for recommendation. System evaluation with real users over one month shows that EmoWare outperforms state-of-the-art methods and models users' emotional preferences well, with stable convergence. © 2019 IEEE. | en_US |
dc.title | EmoWare: A context-aware framework for personalized video recommendation using affective video sequences | en_US |
dc.type | Article | en_US |
Appears in Collections: | 1. Journal Articles |
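The abstract contrasts on-policy (SARSA) and off-policy (Q-learning) temporal-difference learning. The sketch below illustrates the difference between the two update rules on a toy chain MDP; the environment, state/action sizes, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the SARSA (on-policy) vs. Q-learning (off-policy)
# temporal-difference updates mentioned in the abstract. The toy chain
# MDP and all hyperparameters here are assumptions for illustration.
import random

N_STATES, N_ACTIONS = 5, 2     # chain of 5 states; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(s, a):
    """Deterministic chain: reward 1.0 only on reaching the last state."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def eps_greedy(Q, s):
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[s][a])

def sarsa_update(Q, s, a, r, s2, a2):
    # On-policy: bootstrap from the action a2 actually taken next.
    Q[s][a] += ALPHA * (r + GAMMA * Q[s2][a2] - Q[s][a])

def q_learning_update(Q, s, a, r, s2, _a2):
    # Off-policy: bootstrap from the greedy action in the next state.
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])

def train(update_rule, episodes=500):
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        a = eps_greedy(Q, s)
        while s != N_STATES - 1:
            s2, r = step(s, a)
            a2 = eps_greedy(Q, s2)
            update_rule(Q, s, a, r, s2, a2)
            s, a = s2, a2
    return Q

Q_sarsa = train(sarsa_update)
Q_qlearn = train(q_learning_update)
# After training, both methods prefer moving right from the start state.
```

Both rules share the same TD error structure; they differ only in the bootstrap target, which is why a single `train` loop can drive either update.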
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.