Kona Sreenath Reddy, S Sunitha


We propose learning sentiment embeddings, word representations that encode sentiment information, for sentiment analysis. We learn these embeddings from tweets containing positive and negative emoticons, treated as a distantly supervised corpus, without any manual annotation. The conventional approach represents each word as a one-hot vector whose length equals the vocabulary size, with a single dimension set to 1 and all others set to 0. To learn sentiment embeddings effectively, we develop a number of neural networks with tailored loss functions that capture the sentiment of texts as well as the contexts of words. We collect massive sentiment-labeled data automatically from Twitter, relying on the observation that larger training corpora usually yield more effective word representations. To guarantee the quality of the learned embeddings, we impose a minimum filtering criterion on each category of the collected data. We empirically evaluate the effectiveness of sentiment embeddings on three sentiment analysis tasks. Existing embedding learning approaches are based primarily on the distributional hypothesis; however, this is problematic for sentiment analysis, because words with opposite sentiment polarity, such as good and bad, often occur in similar contexts and are therefore mapped to nearby word vectors.
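The two preliminaries in the abstract, one-hot word representation and distant supervision from emoticons, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the emoticon sets and the helper names `one_hot` and `distant_label` are assumptions for the example.

```python
# Hypothetical emoticon sets; the actual lists used for distant
# supervision may differ.
POSITIVE = {":)", ":-)", ":D"}
NEGATIVE = {":(", ":-(", ":'("}

def one_hot(word, vocab):
    """Represent `word` as a |V|-dimensional vector with a single 1."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

def distant_label(tweet):
    """Label a tweet by its emoticons; ambiguous tweets are discarded."""
    tokens = set(tweet.split())
    has_pos = bool(tokens & POSITIVE)
    has_neg = bool(tokens & NEGATIVE)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    return None  # no signal or conflicting signals; not used for training

vocab = ["good", "bad", "movie"]
print(one_hot("bad", vocab))           # [0, 1, 0]
print(distant_label("good movie :)"))  # positive
```

Tweets labeled this way form the distantly supervised corpus; the one-hot vectors are the input representation that the neural networks compress into dense sentiment embeddings.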




International Journal of Innovative Technology and Research is licensed under a Creative Commons Attribution 3.0 Unported License, based on a work at IJITR.