[{"command":"openDialog","selector":"#drupal-modal","settings":null,"data":"\u003Cdiv id=\u0022republish_modal_form\u0022\u003E\u003Cform class=\u0022modal-form-example-modal-form ecl-form\u0022 data-drupal-selector=\u0022modal-form-example-modal-form\u0022 action=\u0022\/en\/article\/modal\/7094\u0022 method=\u0022post\u0022 id=\u0022modal-form-example-modal-form\u0022 accept-charset=\u0022UTF-8\u0022\u003E\u003Cp\u003EHorizon articles can be republished for free under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.\u003C\/p\u003E\n \u003Cp\u003EYou must give appropriate credit. We ask you to do this by:\u003Cbr \/\u003E\n 1) Using the original journalist\u0027s byline\u003Cbr \/\u003E\n 2) Linking back to our original story\u003Cbr \/\u003E\n 3) Using the following text in the footer: This article was originally published in \u003Ca href=\u0027#\u0027\u003EHorizon, the EU Research and Innovation magazine\u003C\/a\u003E\u003C\/p\u003E\n \u003Cp\u003ESee our full republication guidelines \u003Ca href=\u0027\/horizon-magazine\/republish-our-stories\u0027\u003Ehere\u003C\/a\u003E\u003C\/p\u003E\n \u003Cp\u003EHTML for this article, including the attribution and page view counter, is below:\u003C\/p\u003E\u003Cdiv class=\u0022js-form-item form-item js-form-type-textarea form-item-body-content js-form-item-body-content ecl-form-group ecl-form-group--text-area form-no-label ecl-u-mv-m\u0022\u003E\n \n\u003Cdiv\u003E\n \u003Ctextarea data-drupal-selector=\u0022edit-body-content\u0022 aria-describedby=\u0022edit-body-content--description\u0022 id=\u0022edit-body-content\u0022 name=\u0022body_content\u0022 rows=\u00225\u0022 cols=\u002260\u0022 class=\u0022form-textarea ecl-text-area\u0022\u003E\u003Ch2\u003EGetting AI ethics wrong could \u2018annihilate technical progress\u2019\u003C\/h2\u003E\u003Cp\u003E\u2018It\u0027s very difficult to be an AI researcher now and not be aware of the ethical implications these algorithms have,\u2019 said Professor Bernd Stahl, director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK.\u003C\/p\u003E\u003Cp\u003E\u2018We have to come to a better understanding of not just what these technologies can do, but how they will play out in society and the world at large.\u2019\u003C\/p\u003E\u003Cp\u003EHe leads a project called \u003Ca href=\u0022https:\/\/cordis.europa.eu\/project\/rcn\/217620\/factsheet\/en\u0022 target=\u0022_blank\u0022 rel=\u0022noopener noreferrer\u0022\u003ESHERPA\u003C\/a\u003E, which is attempting to wrestle with some of the ethical issues surrounding smart information systems that use machine learning, a form of AI, and other algorithms to analyse big data sets.\u003C\/p\u003E\u003Cp\u003EThe intelligent water gun was created with the aim of highlighting how biases in algorithms can lead to discrimination and unfair treatment. Built by an artist for SHERPA, the water gun can be programmed to select its targets.\u003C\/p\u003E\u003Cp\u003E\u2018Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,\u2019 said Prof. Stahl. \u2018The idea is to get people to think about what this sort of technology can do.\u2019\u003C\/p\u003E\u003Cp\u003EWhile squirting water at people might seem like harmless fun, the issues are anything but. 
While squirting water at people might seem like harmless fun, the issues are anything but. AI is already used to identify faces on social media, respond to questions on digital home assistants like Alexa and Siri, and suggest products for consumers when they are shopping online.

It is also being used to help make [judgements about criminals' risk of reoffending](https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/) or even to [identify those who might commit violent crimes](https://www.newscientist.com/article/2186512-exclusive-uk-police-wants-ai-to-stop-violent-crime-before-it-happens/). Insurers and [tax authorities](https://www.telegraph.co.uk/politics/2018/03/21/hmrc-will-use-robots-check-tax-returns/) are employing it to help detect fraud, banks have turned to AI to help process loan applications and it is even being [trialled at border checkpoints](https://ec.europa.eu/research/infocentre/article_en.cfm?artid=49726).

**Impacts**

Over the past year, Prof. Stahl and his colleagues have compiled 10 case studies in which they have empirically analysed the impacts of these technologies across a number of sectors, including the use of AI in smart cities and its use in insurance, education, healthcare, agriculture and government.

'There are some very high-profile things that cut across sectors, like privacy, data protection and cyber security,' said Prof. Stahl. 'AI is also creating new challenges for the right to work if algorithms can take people's jobs, or the right to free elections if it can be used to meddle in the democratic process as we saw with [Cambridge Analytica](https://www.bbc.com/news/technology-43465968).'

Perhaps one of the most contentious emerging uses of AI is in predictive policing, where algorithms are trained on historical sets of data to pick out patterns in offender behaviour and characteristics. These can then be used to predict areas, groups or even individuals that might be involved in crimes in the future. Similar technology is already being trialled in some parts of the US and the UK.

**Biases**

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and may instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.
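A toy simulation shows how this happens. The scenario and numbers below are invented for illustration and are not drawn from the project's case studies: two groups offend at exactly the same rate, but one is policed more heavily, so its offences are recorded more often, and an off-the-shelf classifier duly learns group membership as 'risk'.

```python
# Invented illustration (not SHERPA's data) of a model inheriting bias
# from its training set. True offending is identical across two groups,
# but group 1 is policed more heavily, so its offences are *recorded*
# three times as often.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
offended = rng.random(n) < 0.10      # same true rate in both groups

# Biased recording: offences by group 1 are caught far more often.
catch_rate = np.where(group == 1, 0.9, 0.3)
arrested = offended & (rng.random(n) < catch_rate)

model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
risk = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group 0: {risk[0]:.3f}")  # ~0.03
print(f"predicted risk, group 1: {risk[1]:.3f}")  # ~0.09, three times higher
```

The model does exactly what it was asked to do; the unfairness was already in the data it was given.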
'Transparency of these algorithms is also a problem,' said Prof. Stahl. 'These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened.' This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.
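One way to make 'black box' concrete: even a modest, off-the-shelf model is a mass of learned numerical thresholds rather than a rule anyone can read. The sketch below is a generic illustration, not anything from the project; it trains a random forest on synthetic data and counts the decision nodes that sit behind each of its predictions.

```python
# Generic illustration of model opacity (not project code): a modest
# trained ensemble encodes its 'reasoning' in thousands of thresholds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 20))
y = (X[:, 0] * X[:, 3] + rng.normal(scale=0.5, size=5_000)) > 0

forest = RandomForestClassifier(n_estimators=100).fit(X, y)
nodes = sum(tree.tree_.node_count for tree in forest.estimators_)
print(f"{nodes} decision nodes behind every single prediction")
# Typically tens of thousands: 'how exactly that happened' is buried here.
```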
The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and whether a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente in the Netherlands.

'Most people today don't understand the technology because it is very complex, opaque and fast moving,' he said. 'For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind.'

Prof. Brey is coordinator of the [SIENNA](https://cordis.europa.eu/project/rcn/210254/factsheet/en) project, which is developing recommendations and codes of conduct for a range of emerging technologies, including human genomics, human enhancement, AI and robotics.

**Mining**

'Information technology has, of course, already had a major impact on privacy through the internet and the mobile devices we use, but artificial intelligence is capable of combining different types of information and mining them in a way that reveals fundamentally new information and insights about people,' said Prof. Brey. 'It can do this in a very fast and efficient way.'

AI technology is opening the door to real-time analysis of people's behaviour and emotions, along with the ability to infer details about their mental state or their intentions.

'That's something that wasn't previously possible,' said Prof. Brey. 'Then what you do with this information raises new kinds of concerns about privacy.'

The SIENNA team are conducting workshops, consultations with experts and public opinion surveys that aim to identify the concerns of citizens in 11 countries. They are now preparing to draw up a set of recommendations that those working with AI and other technologies can turn into standards that will ensure ethical and human rights considerations are hardwired in at the design stage.

A wider public understanding of how the technology is likely to affect people could be crucial to AI's survival in the longer term, according to Prof. Stahl.

'If we don't get the ethics right, then people are going to refuse to use it and that will annihilate any technical progress,' he said.

*The research in this article was funded by the EU. If you liked this article, please consider sharing it on social media.*