This article is part of our series on political bias detection. It will introduce you to the various aspects of our political bias detection system, and you can learn about:
- How can we predict the political orientation behind a piece of news?
- What is stance detection, and what are its different types?
- What is subjective stance detection? What is distant supervision? And why do they make a cute couple?
- What are ALSA and ELSA, and how can opinion mining save us from political bias?
- How can we implement an initial political bias detector from just sentiment analysis and a probability distribution? (Warning: cool visualizations.)
- How can we visualize a political bias metric?
Stance detection is an important NLP task concerned with evaluating a text's stance towards a specific predefined target.
Stance detection and stance analysis are broad terms covering many sub-tasks; the two most dominant types of stance detection are:
Objective Stance Detection
In this type, given a piece of text (usually a news article, but it can be shorter, such as a comment in a community thread) and a claim (usually a full sentence, such as a news headline), the task is to return one of the following tags:
- Disagree: the article disagrees with the given claim
- Agree: the article agrees with the given claim
- Discuss: the article is related to the claim yet does not take any position on the claim’s validity
- Unrelated: this piece of text is totally unrelated to the claim
This type is usually used for fact-checking claims, i.e. finding trustworthy resources that either agree or disagree with the claim. Furthermore, in this type the articles are usually objective, i.e. the article does not express an opinion regarding the claim; it simply states facts.
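To make the four labels concrete, here is a toy sketch of objective stance detection. It is not any of the systems cited in this article: the word-overlap threshold and the tiny cue lexicons are invented purely for illustration, while real systems learn these decisions from data.

```python
# Toy heuristic 4-way stance tagger (Agree / Disagree / Discuss / Unrelated).
# The threshold and cue words below are made up for demonstration only.

AGREE_CUES = {"confirms", "true", "indeed", "verified"}
DISAGREE_CUES = {"false", "denies", "hoax", "debunked"}

def tokenize(text):
    return set(text.lower().split())

def stance(article, claim, overlap_threshold=0.1):
    art, clm = tokenize(article), tokenize(claim)
    # Jaccard word overlap decides related vs. unrelated.
    overlap = len(art & clm) / len(art | clm)
    if overlap < overlap_threshold:
        return "unrelated"
    # Among related articles, crude polarity cues pick agree/disagree;
    # no cue at all falls back to "discuss".
    if art & AGREE_CUES:
        return "agree"
    if art & DISAGREE_CUES:
        return "disagree"
    return "discuss"
```

For example, `stance("the tax plan sparked debate among lawmakers", "new tax plan announced")` returns `"discuss"`: the texts overlap enough to be related, but no polarity cue fires.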
Emergent is an online system that captures viral claims and collects articles that are relevant to them, then verifies them (I am not quite sure whether the verification process is done automatically or not); it mostly relies on manual fact-checking sites such as Snopes.
See the papers from the FNC below, including one related to clickbait, and most importantly an unpublished paper in which the authors use both the FNC data and data from the less related SemEval 2018 Task 12.
Regarding Arabic: the most promising work was done through the “CheckThat!” labs at CLEF 2018 and 2019; here is the 2018 version paper and here is the corpus. The 2019 version is still an ongoing challenge, thus any work we do might be submitted as a paper; the 2019 overview paper is in the references. I really like the proposed pipeline as it is very suitable for our use case.
Subjective Stance Detection
In this type, we are given a shorter piece of text representing an opinion (usually a social media comment) and a target, which, in comparison with the previous type, is a single named entity instead of a full claim. The model should output one of three categories (Favor, Against, None). For example:
Target: legalization of abortion
Text: The pregnant are more than walking incubators, and have rights
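The Favor/Against/None framing can likewise be sketched as a toy heuristic. This is not a SemEval system: the target vocabulary and the sentiment lexicons below are invented for illustration, and real models learn stance jointly with the target rather than counting lexicon hits.

```python
# Toy 3-way subjective stance tagger (Favor / Against / None).
# The lexicons and the target-term check are made up for demonstration only.

POSITIVE = {"rights", "support", "love", "great", "freedom"}
NEGATIVE = {"ban", "wrong", "hate", "terrible", "oppose"}

def subjective_stance(text, target_terms):
    tokens = set(text.lower().replace(",", "").split())
    # A text that never touches the target's vocabulary gets "None".
    if not (tokens & target_terms):
        return "None"
    # Otherwise, a crude sentiment count decides Favor vs. Against.
    pos = len(tokens & POSITIVE)
    neg = len(tokens & NEGATIVE)
    if pos > neg:
        return "Favor"
    if neg > pos:
        return "Against"
    return "None"
```

On the example above, with hypothetical target terms `{"abortion", "pregnant", "incubators"}`, the text mentions the target and contains the positive cue "rights", so the sketch outputs `"Favor"`.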
SemEval 2016 Task 6 tackles this issue; here is the challenge paper, along with example submissions for this task and an amazing reference for understanding the task of stance detection in tweets and its relation to sentiment analysis. This challenge tackles the same task but on only one target, “the independence of Catalonia”. The same technology has also been applied to news articles rather than tweets; see this demo. We didn’t find any Arabic papers or datasets.
A more advanced version of this task is to first identify the targets and then generate the stance. This field of target identification is still in its infancy; examples are Subtask C of SemEval 2019 Task 6 and Subtask A of SemEval 2016 Task 5.
By now I hope that you have an initial understanding of the tasks of stance detection.
If you are interested in subjective stance detection and you are planning to start your customer analysis service then jump to our piece on this task.
We are planning to publish a new article about objective stance detection and fact checking.
In any case don’t forget to check out the references below.
Did you know that we use all this and other AI technologies in our app? See what you’re reading now applied in action. Try our Almeta News app. You can download it from Google Play: https://play.google.com/store/apps/details?id=io.almeta.almetanewsapp&hl=ar_AR
 A. Hanselowski et al., “A retrospective analysis of the fake news challenge stance detection task,” arXiv preprint arXiv:1806.05180, 2018.
 B. Riedel, I. Augenstein, G. P. Spithourakis, and S. Riedel, “A simple but tough-to-beat baseline for the Fake News Challenge stance detection task,” arXiv preprint arXiv:1707.03264, 2017.
 C. Conforti, M. T. Pilehvar, and N. Collier, “Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles,” in Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), 2018, pp. 40–49.
 P. Bourgonje, J. M. Schneider, and G. Rehm, “From clickbait to fake news detection: an approach based on detecting the stance of headlines to articles,” in Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism, 2017, pp. 84–89.
 R. Baly, M. Mohtarami, J. Glass, L. Màrquez, A. Moschitti, and P. Nakov, “Integrating stance detection and fact checking in a unified corpus,” arXiv preprint arXiv:1804.08012, 2018.
 T. Elsayed et al., “CheckThat! at CLEF 2019: Automatic Identification and Verification of Claims,” in European Conference on Information Retrieval, 2019, pp. 309–315.
 S. Mohammad, S. Kiritchenko, P. Sobhani, X. Zhu, and C. Cherry, “Semeval-2016 task 6: Detecting stance in tweets,” in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016, pp. 31–41.
 I. Augenstein, A. Vlachos, and K. Bontcheva, “Usfd at semeval-2016 task 6: Any-target stance detection on twitter with autoencoders,” in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016, pp. 389–393.
 G. Zarrella and A. Marsh, “MITRE at SemEval-2016 Task 6: Transfer learning for stance detection,” arXiv preprint arXiv:1606.03784, 2016.
 S. M. Mohammad, P. Sobhani, and S. Kiritchenko, “Stance and sentiment in tweets,” ACM Trans. Internet Technol. TOIT, vol. 17, no. 3, p. 26, 2017.
 S. Ruder, J. Glover, A. Mehrabani, and P. Ghaffari, “360 stance detection,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, 2018, pp. 31–35.
 M. Pontiki et al., “Semeval-2016 task 5: Aspect based sentiment analysis,” in Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), 2016, pp. 19–30.
 K. Schouten and F. Frasincar, “Survey on aspect-level sentiment analysis,” IEEE Trans. Knowl. Data Eng., vol. 28, no. 3, pp. 813–830, 2015.