Artificial Intelligence - Ethics & Challenges

Artificial Intelligence (AI): a live WIRE

This week marks a turning point in global media history: AI news anchors. Chinese state-run press agency Xinhua this week launched its AI news anchor 'Qiu Hao' - wearing a tie and a pin-striped suit. The anchor can now work 'tirelessly', 24 hours a day, 365 days a year, feeding a continuous media cycle of 'new news'. The anchor was developed through machine learning to 'simulate the voice, facial movements, and gestures of real-life broadcasters' (Guardian, 2018). 

Ethical and moral questions surrounding AI

AI is defined as ‘an area of computer science that emphasises the creation of intelligent machines that work and react like humans’ (Techopedia, 2018). Activities these machines are designed for include speech recognition, problem solving, data pattern matching, and logical learning and planning. 

At the November 2018 Graduate Impact Cross-Current meetings in Berlin, Germany, eminent technologist and AI speaker Jeremy Peckham spoke on the moral and ethical implications of Artificial Intelligence. 

Understanding AI

Artificial Intelligence is often introduced as a topic of great hype. The future is here now; the world is changing beyond belief. And amidst the hype, a firm grip on common sense can be loosened. Yet the rapid rise of AI technology is indeed an alarming trend. Between 2012 and 2018, over $15bn USD was invested globally in AI firms, spread across 2,250 companies. Much of this activity is concentrated in California, USA - in what is often referred to as Silicon Valley - a state that has risen to the status of the 6th largest economy in the world. Silicon Valley is not alone, though: many other clusters have arisen, from Silicon Alley (New York) to Silicon Roundabout (London) and the Silicon Fen (Cambridge). 

The language accompanying this rise has also been 'humanised' - or anthropomorphised - to lend a 'human' element to the incredible capacity and efficacy of these computer systems. AI has largely moved away from the classic 'symbolic' and 'expert system' approaches of the 1980s towards mathematical models, mostly based on multilayer neural networks, with terms such as 'machine learning' and 'deep learning' used to describe the algorithms involved. AI advocates also speak of 'neural networks', 'capsule networks', and 'smart algorithms'. 


Capabilities and Challenges of AI

A fundamental reality challenging the rise of AI is the clue in the title: intelligence. Defining intelligence is an incredibly difficult task. Broadly, it is understood as 'achieving goals in a wide range of environments', though at present this is limited to ANI (Artificial Narrow Intelligence) - performing tasks such as pattern recognition, logical matching, data sweeps, and language processing. These are mostly stochastic processes, however, rather than 'intelligent' actions. Much debate surrounds this, and the existential questions that arise from narrowly defining 'intelligence' are significant. 

The three main contemporary fields of ANI are robotics (e.g. humanoids), simulated 'humanness' (e.g. sensors), and virtual reality (relying on haptic feedback). At present, ANI is incredibly effective at sorting and sweeping large data sets. However, quantity does not guarantee quality. 'Neural networks', for example, claim to mimic the complexity of the neuron connections in our brains - though they are still far off. If this form of AI recognises two eyes, a nose, and a mouth, it will correctly identify the subject as a face. However, if the mouth is on the forehead and the other features are misplaced, it will still identify the subject as a face. The work on 'capsule networks' attempts to overcome some of these challenges, sorting both for matching (demanding speed) and filtering (demanding accuracy). 
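The limitation described above can be caricatured in a few lines of Python. This is an illustrative sketch of pure feature-presence matching - not a real neural network - and all names and data are hypothetical:

```python
# Toy illustration of the criticism above: a classifier that responds
# only to which features are present, ignoring their spatial arrangement
# (the behaviour attributed to standard neural networks in the text).

def detect_features(image):
    """Return the set of facial features found, discarding position.

    `image` is a dict mapping feature name -> (x, y) position,
    standing in for the output of a feature detector.
    """
    return set(image.keys())

def is_face(image):
    # Classify as a face whenever all four features are present,
    # no matter where they appear in the image.
    return {"left_eye", "right_eye", "nose", "mouth"} <= detect_features(image)

normal_face = {"left_eye": (30, 40), "right_eye": (70, 40),
               "nose": (50, 60), "mouth": (50, 80)}
scrambled   = {"left_eye": (50, 80), "right_eye": (50, 60),
               "nose": (70, 40), "mouth": (30, 20)}  # mouth on the forehead

print(is_face(normal_face))  # True
print(is_face(scrambled))    # also True - the arrangement is ignored
```

Capsule networks, by contrast, aim to encode the pose and relative position of detected features, so a scrambled arrangement like the second example would not be accepted.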

Beyond ANI, two other forms of AI are important to consider. The first is AGI - Artificial General Intelligence - defined as the point where computer intelligence becomes as good as or better than human intelligence across the board. This theory is advanced by technologists such as Ray Kurzweil, whose 'Singularity' thesis purports that AGI will be achieved by the year 2045. The second is ASI - Artificial Super Intelligence - which posits that AI will become far more intelligent than humans in every way. Both pose significant ethical and moral challenges. 

The ethics of AI

The ethical questions raised by Artificial Intelligence are enormous - and largely sit in uncharted territory. Only very recently have commentators - and among those, few from the Christian tradition - started asking profound ethical questions. Whilst broad legislation has not yet substantively materialised, many public bodies (e.g. the European Commission) and professional bodies (e.g. the IEEE) are debating what standards should be applied and what legislation or regulation might be required. 

Four key strands of the ethical debate include: technological risks (hacking, bugs, biases), decision-making risks (who is in charge, what integrity is there), economic risks (automation of jobs, mass unemployment), and risks to human personhood (who is servant, who is master). Together, these create 'existential questions' of particular severity. Commentators fear the questions of meaning and purpose that may arise in a significant way as AI entrenchment continues. 

How to approach the questions above is itself a significant question. From a Christian perspective, we are called to 'understand the times' (1 Chronicles 12, Luke 14v28), and in our present era, and in times to come, AI may dominate some of this discourse. We would be wise to be well prepared. 

Jeremy Peckham outlined three ways in which people approach these ethical questions. The first is the deontological (or duty) lens - what rules and moral codes do we abide by? The second is the teleological (or purpose) lens - what outcome are we looking for, what good are we desiring? The third is the lens of virtue (or character) - what do we deem to be good character? These three areas help us sharpen not only our thinking, but also the questions we ask of AI technology. What rights do AI artefacts have? How are we to think through human personhood in view of the rise of AI? What is the ultimate good, or noble purpose, we believe mankind should pursue? 

Many who hold a naturalistic worldview would advocate a 'teleological' or 'utilitarian' view that could make it acceptable to kill one person so that a greater number survive (e.g. in deciding what rules an autonomous vehicle should follow). In such AI debates, the Christian worldview often contrasts starkly with that of materialists, utilitarians, naturalists, and those adhering to a narrative of scientism. Christians must be well equipped to defend the credibility of the Christian position, and to argue for its validity in the present era. 


A Christian Manifesto on AI

Jeremy suggested nine propositions for Christians to consider in regard to the rapid rise of AI: 

1.  The Creation Mandate - 'humans should always make final decisions and be in control of AI where principles of justice & righteousness are at stake' (cf. Isaiah 56v1) 

2.  Service of Humanity - 'AI should serve humans, in their very design, and should not be assigned personhood or moral agency' (cf. Romans 12v2) 

3.  AI is an Artefact - 'AI has no intrinsic ethical or moral rights, because AI is an artefact made and used by man' (Genesis 1v26) 

4.  AI to support the Creation Mandate - 'in design, AI should help us to love our neighbour, to do justice and righteousness' (Luke 10v27) 

5.  AI Disclosure - 'people should always know if they are interacting with a robot, a chatbot, an android, or an AI artefact' (Proverbs 24v28, 10v9, Psalm 52v2) 

6.  AI Designers to be Accountable - 'AI designers, manufacturers & users should always be held to account for the results, impacts, and consequences of the AI they develop' 

7.  Risks of AI - 'AI may dehumanise civilisation if we rely more on AI and allow it to rule' (Gal 5v6) 

8.  Idolatry of AI - 'Giving up responsibility to an AI machine diminishes our personhood, and this amounts to idolatry' 

9.  Super-intelligence - 'seeking to pursue and create super-intelligence is to seek to become God' (Daniel 4v28) 

About the speaker: Jeremy Peckham

Jeremy Peckham began his career as a government scientist at the UK Royal Aircraft Establishment and later moved to Logica, an international software and systems integration company. He founded his own speech recognition company in 1993 through a management buy-out from Logica and launched a successful public offering on the London Stock Exchange in 1996. Most of Jeremy's career was spent in the field of AI. He is now a successful serial entrepreneur, having invested in and helped to establish several high-tech companies over the last fifteen years, serving as founder, CEO, chairman, or non-executive director. Apart from his business activities, Jeremy devotes part of his time to Christian mission through a charitable foundation that he established in 1998. He and his wife run Africa Rural Trainers, a Bible training and sustainability programme for rural pastors in Kenya. He is a Fellow of The Royal Society of Arts and a first-class honours graduate in Applied Science.

References
For highlights of the talks presented by Jeremy Peckham, please visit:  https://vimeo.com/user20882891

For a recommended blog on AI, visit:  http://www.jubilee-centre.org/thinking-critically-about-ai-blog/

Guardian (2018) https://www.theguardian.com/world/2018/nov/09/worlds-first-ai-news-anchor-unveiled-in-china

Jubilee (2018) http://www.jubilee-centre.org/thinking-critically-about-ai/

Techopedia (2018) https://www.techopedia.com/definition/190/artificial-intelligence-ai

Report by Samuel Johns
