Aspect-level vs Entity-level Sentiment Analysis

First, a motivational example:

Many products on the internet allow users to leave feedback. This feedback is usually reviewed manually to figure out what users like or dislike about the product, which features they want, and which problems they are facing. But wouldn’t it be great if we could automatically find out which specific aspect of the product a review likes or dislikes? For example, take a review of a smartphone like the following:

A: The RAM is really small, the price is low though

Would it be possible to figure out that the author hates the small RAM but is happy about the cheap price? OK, another question: what if another comment was like this:

B: I totally hate this phone the memory is very low

Would it be possible to figure out that the two comments are actually criticizing the same thing, namely the memory capacity?

Well, this is what we are talking about today. But first, let us understand the different levels of opinion mining.

This article is part of our series on political bias detection, in which we introduce the various building blocks of our political bias detection system.

Intro to opinion mining levels

In general, opinion mining aims to extract a quintuple <e, a, s, h, t> from text, where:

  • e is the entity or target; in our previous examples, the word “RAM” or the word “memory”,
  • a is the aspect of the entity e; in our example we can group the two words under a single concept such as “memory capacity”,
  • s is the opinion or sentiment expressed, which here is negative,
  • h is the opinion holder, here the authors A and B,
  • t is the time when the opinion holder expressed the opinion on the entity e, which would be the time of the comments.

For example, opinion mining processes the review text “I bought a new iPhoneX today, the screen is great, but the voice quality is poor” and outputs two quintuples <iPhoneX, screen, great, I, today> and <iPhoneX, voice quality, poor, I, today>.
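
To make the quintuple concrete, here is a minimal sketch in plain Python (the class and field names are purely illustrative) of how the two quintuples from this review could be represented:

```python
from typing import NamedTuple

class Opinion(NamedTuple):
    """One opinion quintuple <e, a, s, h, t>."""
    entity: str     # e: the target entity
    aspect: str     # a: the aspect of the entity
    sentiment: str  # s: the expressed sentiment
    holder: str     # h: the opinion holder
    time: str       # t: when the opinion was expressed

# "I bought a new iPhoneX today, the screen is great,
#  but the voice quality is poor" yields two quintuples:
opinions = [
    Opinion("iPhoneX", "screen", "great", "I", "today"),
    Opinion("iPhoneX", "voice quality", "poor", "I", "today"),
]
```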

The main difference between the levels of opinion mining is how much of this quintuple is needed, with higher-level tasks requiring less information. The following table summarizes the various levels for the sentence “I bought a new iPhoneX today, the screen is great, but the voice quality is poor”; we will revisit them in more depth later on.

| Task | Information needed | Description | Example |
| --- | --- | --- | --- |
| Sentiment analysis | <s> | Given a piece of text, extract the overall sentiment polarity | <Positive> |
| Stance analysis | <e, s> | Given a piece of text and a target, extract the sentiment polarity towards that target | <iPhoneX, Positive> |
| Aspect-level sentiment analysis | <e, a, s> | Given a piece of text, a target, and a particular aspect of that target, extract the sentiment polarity towards that aspect | <iPhoneX, screen, Positive> |
| Entity-level sentiment analysis | <e, s> (for every possible target) | A lower level in which we extract the sentiment towards all the keywords (possible targets) within a piece of text, regardless of whether they refer to the same aspect or target | <screen, Positive>, <voice, Negative> |
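
Read row by row, the table simply varies which parts of the quintuple are given as input and which are predicted. As a rough illustration (the function names and the plain-string polarity type are hypothetical, not taken from any of the cited systems), the four levels can be thought of as the following interfaces:

```python
from typing import List, Tuple

Polarity = str  # "positive" | "negative" | "neutral"

def sentiment_analysis(text: str) -> Polarity:
    """<s>: overall polarity of the whole text."""
    ...

def stance_analysis(text: str, target: str) -> Polarity:
    """<e, s>: polarity expressed towards a given target."""
    ...

def aspect_level_sentiment(text: str, target: str, aspect: str) -> Polarity:
    """<e, a, s>: polarity towards one aspect of a given target."""
    ...

def entity_level_sentiment(text: str) -> List[Tuple[str, Polarity]]:
    """<e, s> pairs: find candidate targets and the polarity towards each."""
    ...
```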

What is ALSA and How is it Done?

Different from stance detection, aspect-level sentiment analysis (ALSA) aims at detecting the relevant aspects and the opinions towards them. Following the general opinion mining framework, aspect mining can be formalized as the task of extracting the triple <e, a, s> (where e is the target, and a and s represent the aspect and the opinion respectively). An aspect can encompass multiple entities; for example, the entities “speed, latency, throughput, …” can be grouped together under a single aspect, namely responsiveness. This task can be divided into 4 main subtasks, illustrated below for the following sentence from [1]:

“أعجب من كون الكتاب لم يصلنا إلا توًا كتاب رائع لموضوع مهم مسطرًا بلغة جميلة من كاتبة مبدعة”

(roughly: “I am surprised the book has only just reached us; a wonderful book on an important topic, written in beautiful language by a creative author”)

  • T1 Aspect term extraction: given a piece of text, the task is to extract all the terms that can constitute targets. From the previous example the terms would be “كتاب” (book), “لموضوع” (topic), “لغة” (language), and “كاتبة” (author). Note that the terms are not necessarily single words; they can be any span of a nominal phrase.
  • T2 Aspect term polarity detection: using the output of the previous task, this task tries to assign a polarity tag (positive, negative, neutral) to each of the terms. The result for the previous example would be:
| Term | Polarity |
| --- | --- |
| كتاب (book) | positive |
| موضوع (topic) | positive |
| لغة (language) | positive |
| كاتبة (author) | positive |
  • T3 Aspect category identification: this is very similar to target identification in subtask C of SemEval-2019 task 6 and subtask A of SemEval-2016 task 5. In this task, given a closed list of predefined aspects and a piece of text, the goal is to extract the aspects present in the text. Here an aspect encompasses more information than a single term; the mapping between terms and aspects in the previous example is as follows:
| Term | Aspect |
| --- | --- |
| كتاب (book) | اصل (origin) |
| موضوع (topic) | محتوى (content) |
| لغة (language) | اسلوب (style) |
| كاتبة (author) | اصل (origin) |

Note that the identification of the aspect category can be done either directly from the text and the aspect list, or from the output of task T1 by using topic models like LDA [2] (a simplified sketch of this grouping follows the list below).

  • T4 Aspect category polarity identification: finally, given the aspects identified in T3, this task assigns to each of them a polarity class from (positive, negative, neutral). Again, note that this can be done directly from the output of T3 or by aggregating the term-level results of T2. The result of this task on the aforementioned example is as follows:
| Aspect | Polarity |
| --- | --- |
| اصل (origin) | positive |
| محتوى (content) | positive |
| اسلوب (style) | positive |
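
As a simplified illustration of the LDA route mentioned in the note above, the sketch below groups extracted aspect terms by the topics of the reviews they occur in, using scikit-learn; the review snippets, the term list, and the number of topics are made up for the example and are not taken from any of the datasets discussed here.

```python
# Grouping T1 aspect terms into coarse categories with LDA [2].
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "the screen is great but the battery drains fast",
    "battery life is poor and charging is slow",
    "amazing display, very sharp screen",
    "the camera takes great photos in low light",
]
aspect_terms = ["screen", "battery", "display", "charging", "camera", "photos"]

# Count only the candidate aspect terms produced by T1.
vectorizer = CountVectorizer(vocabulary=aspect_terms)
X = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Assign each term to its most probable topic; every topic then acts as a
# coarse aspect category that groups several surface terms together.
for term, column in sorted(vectorizer.vocabulary_.items(), key=lambda kv: kv[1]):
    print(term, "-> category", lda.components_[:, column].argmax())
```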

The Data

There are many datasets for the task of aspect-level sentiment analysis in English; see the following SemEval tasks: 2014 task 4, 2015 task 12, and 2016 task 5. In Arabic there are 3 available datasets: HAAD, based on book reviews from Goodreads; SemEval-2016 task 5, based on hotel reviews from TripAdvisor; and ABSA, using news related to the 2014 war between Gaza and Israel.

The task of ALSA is well defined for the domain of product reviews, so the first two datasets have relatively high quality, although their domain is rather limited. On the other hand, while the latter dataset is the most relevant for our task, it has two main problems: firstly, its domain is very limited in scope, and secondly and most importantly, the annotation of the data with regard to the polarities and the aspect categories is noisy. The following figure shows an example.



Figure: an example of annotation errors from the ABSA dataset

Implementation details

  • The most naive way to tackle this task is to solve T2 using lexicons: a sentiment lexicon is used to find sentiment words in the text, and each sentiment word is then linked to the closest target word. In both Arabic [3] and English [4], the methods usually rely on sequence tagging models, mostly shallow models, mainly conditional random fields trained with a plethora of syntactic features (POS, lemma, dependency tree), lexicon features, and semantic features like word embeddings; in Arabic there is a lot of emphasis on text preprocessing, see [5]. This focus on feature engineering is motivated by the low amount of available data (a minimal sketch of such a tagger follows this list).
  • The state of the art for this task relies on 2 subsystems (aspect identification and sentiment assignment). While the latter usually performs well ([6] and [4] report an F-measure of nearly 81% for that part), the former task of target identification performs considerably worse: 66% in [4] and 69% in [6].
  • Nonetheless, this approach is not directly applicable to our task, mainly because of the absence of proper training data with the correct aspects (ABSA is the only Arabic dataset related to news, and its choice of polarities and aspects is very noisy). Furthermore, building such a dataset is extremely hard, at least in comparison with the simpler tasks of stance detection and direct political bias detection.
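
As a minimal sketch of the kind of shallow CRF tagger described in the first bullet (using sklearn-crfsuite as one possible library, a BIO label scheme, and only a handful of the syntactic features that systems like [3], [4] actually use; the toy training sentence is made up):

```python
# Tiny CRF aspect-term tagger sketch. Tokens are assumed to arrive already
# POS-tagged by some upstream tool; real systems add lemma, dependency,
# lexicon and embedding-cluster features on top of these.
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sent, i):
    word, pos = sent[i]
    feats = {"word.lower": word.lower(), "pos": pos,
             "is_first": i == 0, "is_last": i == len(sent) - 1}
    if i > 0:
        feats["prev.word"], feats["prev.pos"] = sent[i - 1][0].lower(), sent[i - 1][1]
    if i < len(sent) - 1:
        feats["next.word"], feats["next.pos"] = sent[i + 1][0].lower(), sent[i + 1][1]
    return feats

# One toy training sentence: "The screen is great", with "screen" as the aspect term.
train_sents = [[("The", "DET"), ("screen", "NOUN"), ("is", "VERB"), ("great", "ADJ")]]
train_labels = [["O", "B-ASPECT", "O", "O"]]
X_train = [[token_features(s, i) for i in range(len(s))] for s in train_sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, train_labels)
print(crf.predict(X_train))
```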

Now What is ELSA?

This task is a lower level of aspect-level sentiment analysis; it can be seen as the application of both T1 and T2 from ALSA. Namely, the input to the system consists only of text, which can be comprised of one or multiple sentences, contain multiple entities with varying sentiment, and come from different domains. Our goal is to identify the important entities towards which opinions are expressed in the text; these can include any nominal or noun phrase, including events or concepts, and they are not restricted to named entities. The only constraint is that the entities need to be explicitly mentioned in the text. See this demo to understand the task.

Corpora and Data Sources

For the full task of ELSA, the AOT dataset is an Arabic dataset specifically developed for this task. Furthermore, since this task can be divided into 2 subtasks, as we shall see next, the datasets of HAAD, ABSA, and SemEval-2016 task 5 can also be used, either for the whole task or for the first part (target detection) only.

Implementation details

As mentioned above, this task incorporates tasks T1 and T2 from aspect-level sentiment analysis, namely entity identification and polarity assignment. In English the mainstream method is to use end-to-end deep learning models that both identify the targets and assign the sentiment towards them [7]. This is motivated by the fact that jointly training on the 2 tasks can boost the performance of both, and that a cascaded system in which T2 is tackled after T1 can cause error propagation. However, the use of deep learning is facilitated by the abundance of English data for both entity-level and aspect-level sentiment analysis.
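
One common way to realize such an end-to-end tagger (this is a generic sketch with collapsed target-plus-sentiment BIO labels and arbitrary sizes, not the exact architecture of [7]) is a bidirectional recurrent encoder over token embeddings with a per-token label classifier:

```python
# Generic joint target + sentiment tagger: one BiLSTM encoder and a softmax
# per token over collapsed labels such as O, B-POS, I-POS, B-NEG, I-NEG,
# B-NEU, I-NEU. All sizes here are arbitrary placeholders.
import torch
import torch.nn as nn

class JointTargetSentimentTagger(nn.Module):
    def __init__(self, vocab_size, num_labels=7, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded tokens
        encoded, _ = self.encoder(self.embedding(token_ids))
        return self.classifier(encoded)  # (batch, seq_len, num_labels)

model = JointTargetSentimentTagger(vocab_size=10_000)
dummy_batch = torch.randint(1, 10_000, (2, 12))  # 2 sentences of 12 tokens each
print(model(dummy_batch).shape)  # torch.Size([2, 12, 7])
```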

In Arabic, however, the lack of data dictates the use of feature engineering with shallow sequence tagging models such as CRFs. Furthermore, these models address the 2 tasks separately, with one model to find the targets and another model to assign sentiment to the found targets. While this lets misses in target identification hurt the sentiment analysis model and fails to share information between the 2 models, the scheme has an added benefit: the ability to do sentiment analysis on any supplied list of targets by substituting a closed list in place of the target detector (see the sketch below). Furthermore, such models are extremely fast in both training and prediction.
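
To illustrate that design choice (this is an illustrative interface only, with stand-in functions rather than the two CRF models of [5]): because the sentiment model only needs a list of targets, a fixed closed list can be dropped in where the target detector normally sits.

```python
# Illustrative cascade: target detection -> per-target sentiment assignment.
# Either a learned detector or a fixed closed list of targets can feed step 2.
from typing import Callable, List, Tuple

def crf_target_detector(text: str) -> List[str]:
    # Stand-in for the first CRF model (target detection).
    return [t for t in ("screen", "voice quality") if t in text]

def closed_list_detector(targets: List[str]) -> Callable[[str], List[str]]:
    # Build a "detector" that simply looks up a user-supplied target list.
    return lambda text: [t for t in targets if t in text]

def crf_sentiment_model(text: str, target: str) -> str:
    # Stand-in for the second CRF model (sentiment towards one target).
    return "positive" if "great" in text else "negative"

def elsa(text: str, detect: Callable[[str], List[str]]) -> List[Tuple[str, str]]:
    return [(t, crf_sentiment_model(text, t)) for t in detect(text)]

review = "the screen is great"
print(elsa(review, crf_target_detector))
print(elsa(review, closed_list_detector(["screen", "battery"])))
```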

The authors of [5] handle this task in Arabic using the aforementioned scheme. They rely on various features at the lexical and syntactic levels (POS, NER, dependency tree paths, lemmas, and word segments), at the semantic level (using KNN clustering of word embeddings), and on Arabic and English semantic lexicons. For segmentation, lemmatization, and morphological feature extraction the authors rely on the closed-source MADAMIRA; however, many of these features can be extracted using FARASA, which is open source (with the exception of the detailed “D3” word segmentation and the detailed dependency parsing and POS tags).

Based on this, the final model can use the method described in [5], combined with a topic model to move from the entity level to the aspect level.

Conclusion

At this point, hopefully you have a basic idea of entity-level and aspect-level sentiment analysis. So, while designing the next Amazon, you should be able to analyze how your customers are reacting to your product, and hopefully you will get better insights on how to please them.

Did you know that we use this and other AI technologies in our app? Take a look at what you are reading now applied in action. Try our Almeta News app. You can download it from Google Play: https://play.google.com/store/apps/details?id=io.almeta.almetanewsapp&hl=ar_AR

Further reading

[1] M. Al-Smadi, O. Qawasmeh, B. Talafha, and M. Quwaider, “Human annotated Arabic dataset of book reviews for aspect based sentiment analysis,” in 2015 3rd International Conference on Future Internet of Things and Cloud, 2015, pp. 726–730.

[2] D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent Dirichlet allocation,” J. Mach. Learn. Res., vol. 3, no. Jan, pp. 993–1022, 2003.

[3] M. Pontiki et al., “SemEval-2016 task 5: Aspect based sentiment analysis,” in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016, pp. 19–30.

[4] T. Hercig, T. Brychcín, L. Svoboda, and M. Konkol, “UWB at SemEval-2016 task 5: Aspect based sentiment analysis,” in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016, pp. 342–349.

[5] N. Farra and K. McKeown, “SMARTies: Sentiment models for Arabic target entities,” arXiv preprint arXiv:1701.03434, 2017.

[6] A.-S. Mohammad, M. Al-Ayyoub, H. N. Al-Sarhan, and Y. Jararweh, “An aspect-based sentiment analysis approach to evaluating Arabic news affect on readers,” J. Univers. Comput. Sci., vol. 22, no. 5, pp. 630–649, 2016.
