White Paper: The Ethical Implications of AI


Throughout this paper, RE•WORK explores areas such as bias, privacy and security, moral machines, and decision making, as well as real-world examples of how AI can be used for good. Download the complimentary paper.

Ethics is what you apply to decisions to ensure you’re doing your best not to harm others. This means questioning each feature and design choice - are we doing this because it’s ‘cool’ or because it will help someone? (Andrew McStay, Bangor University)

Artificial Intelligence is already impacting our lives on a daily basis, from the way we interact with technology on a personal level to the operations of businesses across the globe. As research progresses, this level of automation will only increase. Whilst mass implementation of AI promises to improve healthcare, education, research, transport, sustainability and countless other industries, it’s essential that corners are not cut to achieve the end goal.

Ethical issues relate not only to ensuring that machines have the best interests of humans at the centre of their decisions, but also to how we arrive at these decisions and who is ultimately responsible. So how should we create rules?

  • The ACM’s code of ethics attempts to address the broader picture of computing professionalism and ethical development, with AI as a part of that. (Catherine Flick, De Montfort University)
  • The UK Lord’s Report has proposed a national and international "AI code" to ensure the country becomes "a world leader" in the industry's application of machine learning. (Lord’s Report: Artificial Intelligence Committee, 2018)
  • In 2016, the White House released a report, ‘Preparing for the Future of Artificial Intelligence’, highlighting existing and potential applications of AI, focusing on how AI can be used for good. (2016)
  • Accenture have created an ethical framework highlighting that AI must be created in the right environment, with ethics considered throughout the whole process with developers understanding the impact at every stage. ‘For the digital economy to flourish, trust needs to be baked into the system at every level’. (Accenture, 2017)
  • The EU Commission is aiming to ‘take the lead to shape the ethics of AI’ with three main goals: boosting "the EU's technological and industrial capacity and AI uptake across the economy"; ensuring "an appropriate ethical and legal framework" based on EU values; and preparing "for socio-economic changes". (EUobserver, 2018)

Tractica has forecast 51% growth in AI software, hardware and services from 2017 to 2018, an increase from $17.5bn to $26.4bn. Whilst promising, this rapid growth could be concerning if handled incorrectly. The idea of creating machines that are able to independently make decisions raises numerous ethical issues. If these issues are not considered from the outset, the results could be detrimental. (Tractica, 2018)

‘With AI technologies, ethics raises questions about dignity, privacy, bias, consent, transparency of machine decisions, predictability of AI behaviour, and many more. This then introduces countless questions, not least about automation, and filling human roles with algorithmic “entities.”’ (Andrew McStay, Bangor University)

When creating AI, even where the team behind the product approaches the process with good intentions, there are still many ways in which the final product could be unethical. Kate Crawford, Co-Founder of AI Now, explains that ‘often when we talk about ethics, we forget to talk about power. We’re seeing a lack of thinking about how real power asymmetries are affecting different communities’. If the team behind an algorithm is not from a diverse background, the AI is likely to carry unintentional biases. (Wired, 2017)

As the applications of AI increase, more and more people are creating these systems. Regardless of whether the creators have the ‘right’ ethical practices in place, bias comes under scrutiny time and time again.

Every single judgement we make is value laden. We are inherently tied to our own experiences and beliefs, and there’s no way to remove this from ourselves. This means that even if we believe our opinion to be neutral, 'nothing we do can ever be neutral as we’re incapable of eliminating our own bias, so that affects every piece of technology we make.’ (Yasemin J. Erden, St Mary’s University)

Ideally, intelligent systems and their algorithms should be objective, and it is a common misconception that, because they rely on mathematical computations, this is the case. However, the training data that a machine learns from is collected, selected and labelled by humans.

Bias is an unfair slant towards one or more groups of people. Algorithms are biased when built on biased data sets. For example, in the early days of speech recognition, models were built mainly on samples from white male speakers, meaning speech recognition did not work as well for women. (Cathy Pearl, Sense.ly)
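One common way this kind of disparity is surfaced is by breaking a model's accuracy down per demographic group rather than reporting a single overall figure. The sketch below illustrates that mechanic only; the function name and the result numbers are invented for illustration and are not from the paper or from any real speech-recognition system.

```python
# Hypothetical sketch: auditing a model's accuracy per group.
# All names and data here are invented to illustrate the idea.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, correct) pairs, where
    `correct` is True if the model handled that sample correctly."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    # Per-group accuracy exposes disparities an overall average hides.
    return {g: hits[g] / totals[g] for g in totals}

# Invented evaluation results for a model trained mostly on one
# group's voice samples: 90/100 correct vs 70/100 correct.
results = (
    [("male", True)] * 90 + [("male", False)] * 10
    + [("female", True)] * 70 + [("female", False)] * 30
)

rates = accuracy_by_group(results)
print(rates)  # {'male': 0.9, 'female': 0.7}
```

The overall accuracy here is 80%, which looks respectable; only the per-group breakdown reveals that one group is served markedly worse than the other.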

Download the complimentary white paper now and read more.


Contact Author

Yazmin How
RE•WORK
+44 2032891104
@teamrework