Stance Detection - State of the Art

This article is part of our series on political bias detection. Through the series we hope to introduce you to the various aspects of our political bias detection system; in this installment, you will learn about stance detection.

Stance detection is an important NLP task concerned with evaluating the stance of a piece of text towards a specific, predefined target.

Stance detection (or stance analysis) is a broad term that covers many sub-tasks; two types dominate the literature:

Objective Stance Detection

In this type, given a piece of text (usually a news article, but possibly something smaller such as a comment in a community thread) and a claim (usually a full sentence, such as a news headline), the task is to return one of the following tags:

  • Disagree: the article disagrees with the given claim
  • Agree: the article agrees with the given claim
  • Discuss: the article is related to the claim yet does not impose any sentiment towards the claim’s validity
  • Unrelated: this piece of text is totally unrelated to the claim
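Systems for this formulation are often decomposed into a relatedness step followed by a finer agree/disagree/discuss classifier; the baseline in [2] relies on a similar lexical-overlap signal. Below is a minimal, hypothetical sketch of the first step using bag-of-words cosine similarity — the threshold value and function names are our own, not from any of the cited systems:

```python
import math
from collections import Counter

LABELS = ["agree", "disagree", "discuss", "unrelated"]

def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def relatedness_filter(claim: str, article: str, threshold: float = 0.1) -> str:
    """First stage of a hypothetical two-step pipeline: decide
    'unrelated' vs. related by lexical overlap. A real system would
    pass related pairs on to a trained agree/disagree/discuss model."""
    c = Counter(claim.lower().split())
    a = Counter(article.lower().split())
    return "related" if cosine_sim(c, a) >= threshold else "unrelated"
```

In practice the second stage needs actual semantic understanding (negation, paraphrase), which is why lexical overlap alone only solves the easy "unrelated" split.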

This type is usually used for fact-checking claims, i.e. finding trustworthy resources that either agree or disagree with a claim. Furthermore, in this type the articles are usually objective: the article does not express an opinion regarding the claim, it simply states whether it is true or false. This makes the task very closely related to fact checking.

This type of stance detection was the focus of the Fake News Challenge, subtask A of SemEval-2019 Task 7, and SemEval-2017 Task 8, and it is related to subtask A of SemEval-2019 Task 8.

Emergent is an online system that captures viral claims, collects articles relevant to them, and then verifies them (I am not quite sure whether the verification process is done automatically or not); it mostly relies on manual fact-checking sites such as Snopes.

See [1], [2] and [3] from the FNC; [4] is related to clickbait; and, most importantly, see this unpublished paper in which the authors use both the FNC data and data from the less related SemEval-2018 Task 12.

Regarding Arabic, the most promising work was done through the “CheckThat!” labs at CLEF 2018 and 2019. [5] is the 2018 version paper, and here is the corpus. This is the 2019 version, which is still an ongoing challenge, so any work we do might be submitted as a paper; [6] is the 2019 version overview paper. I really like the proposed pipeline, as it is very suitable for both fact-checking and stance detection.

Subjective Stance Detection

In this type, the input is a shorter piece of text representing an opinion (usually a social media comment) and a target which, in contrast with the previous type, is a single named entity instead of a full claim. The model should output one of three categories (Favor, Against, None). For example:

Target: legalization of abortion

Text: The pregnant are more than walking incubators, and have rights

Stance: Favor
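The interface of such a model can be sketched as a function from a (target, text) pair to one of the three labels. The toy cue lexicons below are invented purely for illustration; real systems (e.g. the SemEval-2016 Task 6 submissions [8], [9]) learn this mapping from labelled tweets and actually condition on the target, which this sketch ignores:

```python
# Invented cue lexicons -- for illustration only, not from any cited system.
FAVOR_CUES = {"rights", "support", "legalize"}
AGAINST_CUES = {"ban", "oppose", "murder"}

def toy_stance(target: str, text: str) -> str:
    """Map a (target, text) pair to favor / against / none.
    This toy version counts cue words and ignores the target;
    a trained model would condition on both inputs."""
    tokens = set(text.lower().replace(",", " ").split())
    favor = len(tokens & FAVOR_CUES)
    against = len(tokens & AGAINST_CUES)
    if favor > against:
        return "favor"
    if against > favor:
        return "against"
    return "none"
```

On the example above, the cue word "rights" pushes the toy classifier towards Favor; real tweets, of course, express stance far less literally (sarcasm, implicit targets), which is what makes the task hard.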

SemEval-2016 Task 6 tackles this issue; here is the challenge paper [7], and [8], [9] are examples of submissions for this task. [10] is an excellent reference for understanding stance detection in tweets and its relation to sentiment analysis. This challenge tackles the same task but on only one target, “the independence of Catalonia”. [11] applies the same technology to news articles rather than tweets; see this demo. We didn’t find any Arabic papers or datasets.

A more advanced version of this task is to first identify the targets and then predict the stance. This field of target identification is still in its infancy; examples are subtask C of SemEval-2019 Task 6 and subtask A of SemEval-2016 Task 5.

Finally, this idea is very similar to aspect-level sentiment analysis, tackled by the following SemEval tasks: 2014 Task 4, 2015 Task 12, and 2016 Task 5, which includes an Arabic dataset. [12], from SemEval-2016 Task 5, provides a general introduction, while [13] provides an older (far more comprehensive) review of the field. To quickly understand the task, view this demo.
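To make the parallel concrete: where stance detection outputs one label per (target, text) pair, aspect-level sentiment outputs a polarity per aspect mentioned in the text. The sketch below is a naive illustration with an invented aspect inventory and cue lexicon; real systems for the SemEval tasks above learn both parts:

```python
# Hypothetical aspect inventory and polarity lexicon, for illustration only.
ASPECTS = {"food", "service"}
POLARITY_CUES = {"great": "positive", "slow": "negative"}

def aspect_sentiment(sentence: str) -> dict:
    """Assign a polarity to each aspect mentioned in the sentence,
    using the nearest cue word that follows it (a naive heuristic)."""
    tokens = sentence.lower().rstrip(".").split()
    result = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            # take the first polarity cue appearing after this aspect
            for later in tokens[i + 1:]:
                if later in POLARITY_CUES:
                    result[tok] = POLARITY_CUES[later]
                    break
    return result
```

For "The food was great but the service was slow", the sketch returns a positive polarity for "food" and a negative one for "service" — one label per aspect, just as subjective stance detection gives one label per target.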

Conclusion

By now I hope that you have an initial understanding of the tasks of stance detection.

If you are interested in subjective stance detection and are planning to start your own customer analysis service, jump to our piece on this task.

We are planning to publish a new article about objective stance detection and fact checking.

In any case don’t forget to check out the references below.

Did you know that we use all these and other AI technologies in our app? See what you’re reading about now applied in action: try our Almeta News app. You can download it from Google Play: https://play.google.com/store/apps/details?id=io.almeta.almetanewsapp&hl=ar_AR

Further Reading

[1] A. Hanselowski et al., “A retrospective analysis of the fake news challenge stance detection task,” ArXiv Prepr. ArXiv180605180, 2018.

[2] B. Riedel, I. Augenstein, G. P. Spithourakis, and S. Riedel, “A simple but tough-to-beat baseline for the Fake News Challenge stance detection task,” ArXiv Prepr. ArXiv170703264, 2017.

[3] C. Conforti, M. T. Pilehvar, and N. Collier, “Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles,” in Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), 2018, pp. 40–49.

[4] P. Bourgonje, J. M. Schneider, and G. Rehm, “From clickbait to fake news detection: an approach based on detecting the stance of headlines to articles,” in Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism, 2017, pp. 84–89.

[5] R. Baly, M. Mohtarami, J. Glass, L. Màrquez, A. Moschitti, and P. Nakov, “Integrating stance detection and fact checking in a unified corpus,” ArXiv Prepr. ArXiv180408012, 2018.

[6] T. Elsayed et al., “CheckThat! at CLEF 2019: Automatic Identification and Verification of Claims,” in European Conference on Information Retrieval, 2019, pp. 309–315.

[7] S. Mohammad, S. Kiritchenko, P. Sobhani, X. Zhu, and C. Cherry, “Semeval-2016 task 6: Detecting stance in tweets,” in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016, pp. 31–41.

[8] I. Augenstein, A. Vlachos, and K. Bontcheva, “Usfd at semeval-2016 task 6: Any-target stance detection on twitter with autoencoders,” in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016, pp. 389–393.

[9] G. Zarrella and A. Marsh, “Mitre at semeval-2016 task 6: Transfer learning for stance detection,” ArXiv Prepr. ArXiv160603784, 2016.

[10] S. M. Mohammad, P. Sobhani, and S. Kiritchenko, “Stance and sentiment in tweets,” ACM Trans. Internet Technol. TOIT, vol. 17, no. 3, p. 26, 2017.

[11] S. Ruder, J. Glover, A. Mehrabani, and P. Ghaffari, “360 stance detection,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, 2018, pp. 31–35.

[12] M. Pontiki et al., “Semeval-2016 task 5: Aspect based sentiment analysis,” in Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), 2016, pp. 19–30.

[13] K. Schouten and F. Frasincar, “Survey on aspect-level sentiment analysis,” IEEE Trans. Knowl. Data Eng., vol. 28, no. 3, pp. 813–830, 2015.
