[{"command":"openDialog","selector":"#drupal-modal","settings":null,"data":"\u003Cdiv id=\u0022republish_modal_form\u0022\u003E\u003Cform class=\u0022modal-form-example-modal-form ecl-form\u0022 data-drupal-selector=\u0022modal-form-example-modal-form\u0022 action=\u0022\/en\/article\/modal\/7355\u0022 method=\u0022post\u0022 id=\u0022modal-form-example-modal-form\u0022 accept-charset=\u0022UTF-8\u0022\u003E\u003Cp\u003EHorizon articles can be republished for free under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.\u003C\/p\u003E\n \u003Cp\u003EYou must give appropriate credit. We ask you to do this by:\u003Cbr \/\u003E\n 1) Using the original journalist\u0027s byline\u003Cbr \/\u003E\n 2) Linking back to our original story\u003Cbr \/\u003E\n 3) Using the following text in the footer: This article was originally published in \u003Ca href=\u0027#\u0027\u003EHorizon, the EU Research and Innovation magazine\u003C\/a\u003E\u003C\/p\u003E\n \u003Cp\u003ESee our full republication guidelines \u003Ca href=\u0027\/horizon-magazine\/republish-our-stories\u0027\u003Ehere\u003C\/a\u003E\u003C\/p\u003E\n \u003Cp\u003EHTML for this article, including the attribution and page view counter, is below:\u003C\/p\u003E\u003Cdiv class=\u0022js-form-item form-item js-form-type-textarea form-item-body-content js-form-item-body-content ecl-form-group ecl-form-group--text-area form-no-label ecl-u-mv-m\u0022\u003E\n \n\u003Cdiv\u003E\n \u003Ctextarea data-drupal-selector=\u0022edit-body-content\u0022 aria-describedby=\u0022edit-body-content--description\u0022 id=\u0022edit-body-content\u0022 name=\u0022body_content\u0022 rows=\u00225\u0022 cols=\u002260\u0022 class=\u0022form-textarea ecl-text-area\u0022\u003E\u003Ch2\u003EQ\u0026A: Why cultural nuance matters in the fight against online extreme speech \u003C\/h2\u003E\u003Cp\u003EFactcheckers who operate independently of large media corporations or social media companies can shape and use AI to go beyond keywords to help locate context-specific patterns, according to Prof. Udupa. This is because they are trained to pick up disinformation \u2014 and extreme speech is a very close cousin of that, she says.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EWhat motivated you to look into online abuse and extreme speech?\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EI was looking at online mediation of political cultures in India when I noticed the high prevalence of online abuse. Online abuse appears to be the language of today\u0027s politics.\u003C\/p\u003E\u003Cp\u003EOnline speech has become such an influential way of experiencing politics today \u2014 not just participating in democratic processes like elections \u2014\u0026nbsp;but also the way we lead our daily democratic life through digital communication. If these sorts of irreverent and vitriolic exchanges are so prominent online, then we need to see how we can bring nuance to understanding them.\u003C\/p\u003E\u003Cp\u003EIn some ways, online vitriol is presented as funny, but it could also lead to intimidation and shaming. We don\u0027t know exactly when the jokes stop, and the insults begin \u2026 when the insults stop and when the intimidation starts. To understand this slippery slope, it\u0027s really important to understand the context.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EHow effective are extreme speech pushback mechanisms? 
<p>Increasingly, companies and governments are trying to deploy AI systems to combat the scale and speed of extreme speech. From my research, it is apparent that extreme speech is heavily context-dependent, while the datasets AI algorithms are trained on are not.</p>

<p>Companies such as Facebook tend to look into instances of extreme speech when things get out of hand or when the context is extremely important, for instance during the US or Indian elections, because they involve huge numbers of people.</p>

<p>However, corporate AI systems still lack the linguistic competence to detect problematic speech around the world. In the northeastern Indian state of Assam, for example, Facebook's AI systems did not pick up an <a href="https://time.com/5712366/facebook-hate-speech-violence/" target="_blank" rel="noopener noreferrer">upsurge in rhetoric against religious and ethnic minorities</a> in 2019. In contrast, the company beefed up resources and tried to recruit people who speak the language to combat extreme speech in Myanmar, a country that grabbed international attention after the <a href="https://www.hrw.org/news/2017/09/25/crimes-against-humanity-burmese-security-forces-against-rohingya-muslim-population" target="_blank" rel="noopener noreferrer">army cracked down on Rohingya Muslims</a> in 2017, sending thousands fleeing across the border into Bangladesh.</p>

<p>The way social media companies are tackling online extreme speech is therefore fragmented, both in the datasets used to train their AI models and in implementing timely action during unfolding crises.</p>

<p>To address this unevenness, we need to create collaborative frameworks that don't just focus on English, Mandarin and Spanish, but include many different languages. We also have to go beyond a keyword-based approach and identify cultural and contextual markers.</p>

<p>The best way to do that is to mobilise and connect existing communities, like factcheckers, who can bring cultural nuance and contextual knowledge. They're trained to pick up disinformation, and extreme speech is a very close cousin of that.</p>
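<p>As a rough illustration of the difference between those two approaches, the sketch below contrasts a fixed keyword blocklist with a small classifier learned from factchecker-annotated examples. This is a minimal sketch under stated assumptions, not AI4Dignity's actual pipeline: the library choice (scikit-learn), the placeholder blocklist terms and the two toy annotations are hypothetical and used only for illustration.</p>

<pre><code>
# Minimal sketch: fixed keyword matching vs. a model learned from
# factchecker-annotated examples. All terms, labels and data below are
# illustrative placeholders, not the project's real datasets or code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Keyword-based approach: brittle, and easily defeated by coded language.
BLOCKLIST = {"slur_a", "slur_b"}  # hypothetical placeholder terms

def keyword_flag(post: str) -> bool:
    """Flag a post only if it contains a listed term verbatim."""
    return any(term in post.lower() for term in BLOCKLIST)

# Community-annotated approach: factcheckers label posts in their own
# languages, and a model learns patterns (here, character n-grams, which can
# survive wordplay and coded spellings) rather than matching a fixed list.
annotated_posts = [
    ("example of a derogatory post, labelled by a factchecker", "derogatory"),
    ("example of an ordinary post, labelled by a factchecker", "acceptable"),
]  # placeholder data; in practice the annotations come from factcheckers

texts, labels = zip(*annotated_posts)
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(keyword_flag("a post using coded spelling"))     # misses anything off-list
print(model.predict(["a post using coded spelling"]))  # judged from learned cues
</code></pre>

<p>Even a toy example makes the trade-off visible: the blocklist only catches what it already knows, while a learned model can be retrained as factcheckers contribute new annotations in new languages.</p>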
<blockquote>
<p>'What is seen as extreme speech in one particular country might not be the case in a different country.'</p>
<footer><cite>Professor Sahana Udupa, Ludwig Maximilian University of Munich, Germany</cite></footer>
</blockquote>

<p><strong>How are you tackling extreme speech in your project <a href="https://cordis.europa.eu/project/id/957442" target="_blank" rel="noopener noreferrer">AI4Dignity</a>?</strong></p>

<p>We are working on a community-based way to pick up problematic online extreme speech. The hope is to build a framework that can help factcheckers flag this content without disrupting their core activities of reporting disinformation.</p>

<p>In the coming months we are asking factcheckers to bring annotated datasets, and these will be the basis for training AI models. If everything goes well, in July we will bring factcheckers, academic researchers and AI developers together. Afterwards, we hope to develop a tool based on multiple algorithms and integrate it with at least one particular platform or browser.</p>

<p>The model should be able to pick up some expressions that are problematic. It might not reach top-notch accuracy at the moment, but it's a stepping stone. The very dynamism of this process means it has to be repeated.</p>

<p>And we could replicate this process on a grander scale, with a wider network of factcheckers, bringing in languages and dialects from more countries, if more funding comes our way.</p>

<p>The other aim of the project is also to see whether there are globally shared patterns of extreme speech. For example, criticism of legacy media has been well documented among different right-wing groups. But we want to actually investigate datasets and see if there are globally circulating tropes: whether the Trump supporter, for instance, is actually providing discursive resources for the Hindu nationalist back in India, or whether this anti-immigrant discourse is being picked up in Brazil, and so on.</p>

<p>Projects like ours will help create critical knowledge that might not have applicability the very next day but will have long-term societal benefit. We are trying to create pushback mechanisms against regressive anti-immigrant and xenophobic discourses.</p>

<p><strong>How stark are cultural differences in the expression of online extreme speech?</strong></p>

<p>The gut feeling is that there's a lot of variation, but we have also documented this in our research. It's very clear that some expressions are very culturally rooted, as are target groups.
For instance, it has been documented that people in northern Chile who are themselves marginalised try to peddle anti-immigrant discourse against people who come from places like Bolivia and Peru.</p>

<p>But when you look at a country like Denmark, their millionaires have supported far-right movements. So there's vast variation in who actually engages in extreme speech.</p>

<p>Complicating matters further, people who peddle derogatory and extreme speech engage in wordplay and adopt coded language to avoid detection.</p>

<p>For us, it's important to understand that cultural variation lies not just in the world of words, i.e. the kinds of expressions that people use, but also in the actors who engage in them and the political structures that foster vitriolic exchanges.</p>

<p><strong>Do you see patterns between extreme speech in homeland and diaspora communities?</strong></p>

<p>Homeland communities and diaspora communities are closely connected through internet channels, so what we see is a sort of shared discourse. Whether you're in favour of a particular political ideology or not, expressions and tropes circulate between the communities.</p>

<p>But there could still be cultural variation; I wouldn't rule it out. What is seen as extreme speech in one particular country might not be the case in a different country. For example, 'anti-national' could be a derogatory label in some countries and not so much in others. Labels themselves evolve within countries and regions.</p>

<p><strong>Online extreme speech is often linked to offline violence. What does your research show?</strong></p>

<p>Answering this question requires comprehensive, case-study-based fieldwork. Research of that kind has been undertaken in places like Kenya and South Africa, and it suggests that particular social media discourses could escalate conflict situations. There is also some data indicative of this association in North America and Europe. In India, I <a href="https://www.routledge.com/Media-as-Politics-in-South-Asia/Udupa-McDowell/p/book/9780367885113" target="_blank" rel="noopener noreferrer">documented</a> what was referred to as a 'social media riot' that arose from the circulation of a video on WhatsApp before a planned protest rally. This was a complex social event, since protesting Muslims were accused of being 'incited' by mashed-up videos that claimed to depict violence in Myanmar and Northeast India.</p>

<p>To actually pin down causality between online extreme speech and offline violence is very difficult. You can, however, clearly identify trends and correlations. In certain cases, there will be a peak in extreme speech expressions prior to a violent episode.</p>

<p><strong><em>This interview has been edited for length and clarity.</em></strong></p>

<p><em>The research in this article was funded by the EU's European Research Council.
If you liked this article, please consider sharing it on social media.</em></p>