When machines have an IQ of 50,000, what happens to human values and ethics?
By Gerd Leonhard, Futurist and Author of Technology vs Humanity
The Partnership on AI has just announced some new members and a board of trustees, and Apple has finally joined Facebook, Google/DeepMind, IBM, Amazon and Microsoft. This development has prompted me to update my initial open letter (published via Wired.co.uk on October 26, 2016) and to clarify my key messages to the Partnership. (In the interest of full disclosure, please note that I have given keynote speeches and talks for almost all of the companies addressed in this open letter during the past 10 years.)
Dear board of trustees and member companies of the Partnership on AI to Benefit People and Society,
The ‘Partnership on AI to Benefit People and Society’ is a welcome change from the technology industry’s celebration of disruption and ‘digital transformation’. But will your initiative also foster a serious discussion about the ethics of the digital age, and about how far we should go in this coming convergence of (wo)man and machine?
Data is already the new oil, clearly, yet the explosive progress in AI, deep learning and cognitive computing is certain to vastly boost the power of those who hold all that data in their ‘global brains’ – to unimaginable proportions. But unlike with the giants of the fossil-fuel era such as BP or ExxonMobil, there is little oversight of what exactly you and your peers can and will do with this power, what rules you will need to follow once you have built that omnipotent AI-in-the-sky, embodied by bots and machines – and who makes those rules in the first place. Will this partnership translate into foresightful stewardship, and does it mean your companies are finally ready to accept responsibility for the consequences of these humanity-changing inventions? Will you strive for a better mix of precaution and pro-action, one that puts human flourishing first? Will you consider forgoing some serious profits because some magic new project might have too many negative (even if unintended) consequences?
In a not-too-far-away world where machines have an IQ of 50,000 and the Internet of Things encompasses 500 billion devices, what will happen to those social contracts, values and ethics that underpin human essentials such as self-determination, privacy and free will? Will significant human limitations such as ageing or even death soon be up for discussion as technology goes into warp-drive?
It seems clear to me that the question is no longer whether technology can do something, but why it should. Who gets to decide this? Who is ‘mission control for humanity’?
Multiple paradigm shifts are changing society at warp speed, and your organizations are in the eye of the storm caused by what I term the Megashifts: digitization, cognification, automation, disintermediation and virtualization, to name only the most prominent. These are exponential and combinatorial game changers that transform multiple domains simultaneously, and they may lead us to a very bright future (imagine defeating cancer or achieving abundant energy or water) – yet proceeding without a global framework of digital ethics could also create a special kind of hell.
Worryingly, digital ethics currently fares no better than corporate social responsibility (CSR) as far as the agenda of Silicon Valley and Big Tech is concerned. The default paradigm is still, as Bertolt Brecht put it, ‘dinner first, then morals’. This is clearly unsustainable.
Granted, ethics are culturally relative, but certain universals are self-evident, like having the ability to continue existing, or striving for happiness. And sure, profit and growth are critical elements in most civilizations, and societies such as the Roman Empire that lost their profit base quickly withered. But what if your next phase of evolution not only allowed for a digital ethics code – but required one? Might the next big corporate narrative be the mainstreaming of ethical digital behavior? Let’s not wait for intelligent machines with IQs of 50,000 before we get these ethical dilemmas sorted.
Here are four essential points I want to present for discussion:
1) We are at the pivot point of the exponential curve – things will soon become unimaginably different because of AI and ‘thinking machines’. This is a huge opportunity for your companies to embrace a new kind of stewardship for humanity: a more holistic approach to human flourishing, grounded in accepting your new AI-centric responsibilities. A fuzzy area, no doubt, but is that not where things usually begin?
2) Technology is not our purpose – it’s our toolkit, our method. Technology should not be what we seek but how we seek. Humans are toolmakers, not tool-made – and I believe we should keep it that way. So will your AI innovations cause us to abdicate our thinking and our humanity – or will they serve to truly advance human flourishing? How do you intend to ensure that?
3) We clearly need to embrace technology, but personally I believe we should not become it. What do you believe? Do you believe humanity is headed towards a total symbiosis with technology, i.e. that we will soon become incapable of existing without augmenting ourselves? Is the (wo)man–machine convergence inevitable? What is your position on the Singularity and so-called transhumanism?
4) Every great algorithm also needs a corresponding human metric – what I call androrithms: a balancing factor to protect and further that which makes us uniquely human. Maybe the question is no longer just what can be automated, but also what should not be automated or robotized.
The Partnership on AI is a very promising concept. Let’s now turn digital ethics from a fuzzy grey zone into a global bill of rights.
Sincerely, Gerd Leonhard
Find out more about my new book ‘Technology vs Humanity’, read the TVH cheat sheet, buy the book on Amazon (this link automatically takes you to your local Amazon site), or order it directly from my publisher (all formats; bulk orders at a discount).
My recent video on the Global Brain