
Why the Singularity is certain to happen in my own lifetime, and why it matters (Futurist/Humanist Gerd Leonhard)

I recently came to an important realisation: I will most likely see the so-called Singularity happen in my own lifetime. I’m 56, and I believe that this inflection point, at which computers, ‘thinking machines’ and AI become infinitely and recursively powerful, is at most 20-25 years away – and it might come as soon as 12-15 years from today.

We are at ‘4’ on the exponential curve, and yes, this matters a lot: doubling a small number such as 0.01 does not make much of a difference, while doubling 4-8-16-32-64 is another story altogether. Timing is essential, and the future will increasingly happen gradually, then suddenly.
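To make the doubling arithmetic concrete, here is a tiny Python sketch (my own illustration, not from the original post) that runs ten doublings from a start of 0.01 and from a start of 4: the same number of steps, wildly different outcomes.

```python
# Illustrative sketch (not from the original post): repeated doubling from a tiny
# base barely registers at first, then explodes once the base is no longer small.

def doublings(start, steps):
    """Return the sequence produced by doubling `start` the given number of times."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] * 2)
    return values

print(doublings(0.01, 10))  # ends at 10.24 -- ten doublings of a tiny number stay modest
print(doublings(4, 10))     # ends at 4096 -- the same ten doublings from 4 take off
```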

We will without a doubt encounter almost infinite machine intelligence in the near future. 

So what does ‘the singularity’ actually mean? Here’s a suitable definition, imho: “In maths or physics, the singularity is the point at which a function takes an infinite value because it’s incomprehensibly large. The technological singularity, as it is called, is the moment when artificial intelligence takes off into ‘artificial superintelligence’ and becomes exponentially more intelligent more quickly” (via Metro.co.uk).  In my new book Technology vs Humanity (which zooms in on exactly these topics, incidentally … and yes you are welcome to hurry and buy a copy :), I condense the singularity definition even further by stating that the Singularity is the point in time when ‘thinking machines’ become as powerful as human brains, “the moment when computers finally trump and then quickly surpass human brains in computing power”.

IA, Narrow AI and AGI.  Right now, most artificially intelligent machines work well for ‘narrow’ use cases, such as in mapping apps or for the very useful intelligent response options that debuted in Gmail a few months ago. Most current ‘AI’-related applications (such as intelligent assistants like IPSoft's Amelia) are really more like ‘intelligent assistance' (IA rather than AI) with fancy user interfaces, or hugely scalable, narrowly intelligent software paired with brute-force hardware that can yield truly astounding results, such as Google/DeepMind’s AlphaGo or IBM Watson Analytics. Yet these very useful machines are still narrow in the sense that they generally cannot transfer their learnings and abilities to other tasks (DeepMind’s AlphaGo only plays Go, not poker or even chess), and they certainly cannot apply their ‘intelligence’ to completely unrelated areas such as global warming, cancer treatments, running NATO air traffic or solving macro-economic issues. Human intelligence is quite the opposite, of course – all our learnings and experiences are somehow transferable, interrelated and interdependent; our intelligence isn’t narrow, it’s general. Furthermore, our prime mode of operation is not efficiency, it's inefficiency; it's not algorithms and data, it's what I call androrithms, i.e. feelings, emotions … non-data.

Are humans really computable? Let's consider these quotes about humans and intelligence: scientist and AI pioneer Marvin Minsky liked to say that “human minds are societies of minds; we run on ecosystems of thinking”. The Nobel Prize-winning psychologist Daniel Kahneman puts forth that “cognition is embodied, we think with the body not the brain”, and the Hungarian-British polymath Michael Polanyi popularised the paradox that “we (humans) know more than we can tell (and we cannot automate what we don't know)”. There clearly is something about us humans that makes us extremely difficult to compute – in fact, in my keynotes I state that ‘humanity isn’t computable’ (and yes, I am fully aware that this puts me on a collision course with some of my Silicon Valley futurist colleagues). I'd love your take on this :)

The lid is coming off. Here is the thing: practically everything that currently limits the velocity of this exponential curve – and hence the power of the machines – is about to be removed in the next two decades. The next 20 years will bring more changes than the past 300 years, and what I call HellVen challenges will explode exponentially as well: life could be amazing (if we govern technology wisely) or it could be utterly inhuman (if we empower technology endlessly, unwisely and unethically). We are at this junction today, and we need to realise that the capabilities of our current, narrow AI are just about to receive some serious boosts, speeding up the arrival of Artificial General Intelligence (AGI) and, some would argue, very soon after that, Artificial Super Intelligence (ASI).

Here are the key accelerators of AGI (and generally, the Singularity):

1) Hardware: quantum computing is making huge leaps and will soon no longer be ‘pie in the sky’. In 10 years, we are likely to see machines that are a million times as powerful as the average computer we use today, and they will consume a lot less power as well. Most importantly, since there will be no ‘hard AI’ or AGI without machines that can crunch algorithms many millions of times as fast as today’s fastest machines, this is a crucial prerequisite for reaching the Singularity. On top of this fast progress in both 3D and quantum computing, many recent advances in nanotechnology and material sciences are also making it very likely that we can, within a decade, mass-produce mobile computing devices (including robots and drones) with significantly fewer rare-earth elements, natural minerals and precious metals – this will lower prices dramatically and lead to a massive boost in the usage of all kinds of gadgets and connected devices around the globe.
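As a rough sanity check on that ‘a million times as powerful within 10 years’ claim, here is a small back-of-the-envelope Python sketch (my own, assuming nothing more than plain repeated doubling of compute): a million-fold gain needs about 20 doublings, which, squeezed into a decade, implies a doubling roughly every six months.

```python
import math

# Back-of-the-envelope sketch (my own assumption: compute simply doubles at a
# fixed interval). How many doublings yield a million-fold gain, and how short
# must the doubling interval be to fit them into 10 years?

target_gain = 1_000_000
doublings_needed = math.log2(target_gain)        # ~19.9 doublings
months_per_doubling = (10 * 12) / doublings_needed

print(f"Doublings needed for a {target_gain:,}x gain: {doublings_needed:.1f}")
print(f"Implied doubling interval over 10 years: {months_per_doubling:.1f} months")
# Roughly 20 doublings, i.e. one every ~6 months -- far faster than the classic
# ~24-month Moore's-law cadence questioned further down in this post.
```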

Read more: “Massive disruption is coming by quantum computing” (SingularityHub); “Quantum computing might be here sooner than you think” (Bloomberg); “Quantum computing is coming for your data” (Wired); “Quantum computing is becoming more accessible” (Scientific American); “What is Nanotechnology?”; “The rise of AI Is Forcing Google and Microsoft to Become Chipmakers” (Wired)

 

2) Networks: extremely fast, low-cost and massively powerful mobile networks (5G, LTE and beyond) will become the new normal in the next 7 years. A general boom in hyper-connectivity – also propelled by fast-growing IoT/IoE/sensor networks (what I like to call ‘smart everything') – will lead to a kind of ‘gigabit society’: always-on, always-connected, always-tracking and always-recording tech will be everywhere. This tsunami of connectivity and vastly exponential data will fuel a new meta-intelligence and a global brain of AIs at humanly incomprehensible speed and volume, in real time (watch this video!). Truly global hyper-connectivity, using new means of broadband access such as balloons and drones, will finally proliferate in the next decade.

I think we can reasonably assume that some 80% of the world will be connected to broadband internet within 10 years (albeit not all at the same high speed, everywhere, of course … inequality will still prove too hard to fix with tech). This is more than twice the number of users today, and it will be one of the main reasons behind the rise of China, India, Indonesia and Africa. ‘Offline’ will become a true luxury.

Read more: “Mobile broadband subscriptions are projected to double in five years” (Recode). “Alexa, Understand Me: Voice-based AI devices could become the primary way we interact with our machines” (TechnologyReview.com)

 

3) Power and batteries. Battery technology and innovation are making huge leaps as well. The amount of funding pouring into new battery technologies, storage and related tech is staggering (as is the VC money going into AI!). The car / AV / EV and mobility industry is clearly taking the lead here, but the end result is that thousands of well-funded startups (and of course all the major incumbents) are working very hard on making our batteries cheaper and longer-lasting, very soon. I fully expect that in roughly 5 years a low-cost EV will travel an easy 1,000+ miles before it needs to charge again, and within the same timeframe I expect my mobile ‘phone’ / device / bot / assistant / VR kit / implant to run for a full month before I need to plug it in again (helped, of course, by new wireless charging technologies that replenish power while a device is being used).

Read more about AVs, EVs and battery tech:  “Carmageddon is coming” (Wired)

Going way beyond Moore's law. I have a strong hunch that technological progress will no longer merely double every 2 years, but that in some sectors (such as AI) it will double every 12 months, every 6 months … maybe every 6 weeks. If we keep in mind that just 30 steps up the exponential curve, starting at ‘4’, will take us beyond a billion, i.e. straight into the sky, towards infinite power … we should probably be very excited, and very afraid as well. Who will be mission control for humanity? Are we ready for this?
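The arithmetic behind that ‘30 steps’ remark, as a small illustrative Python sketch (my own, assuming one clean doubling per step): 4 doubled 30 times is about 4.3 billion, and how soon those 30 doublings arrive depends entirely on the doubling interval.

```python
# Illustrative sketch (my own, assuming one clean doubling per step) of the
# "30 steps up the exponential curve, starting at 4" arithmetic.

start, steps = 4, 30
print(f"{start} doubled {steps} times = {start * 2 ** steps:,}")  # 4,294,967,296

# How long do 30 doublings take under the doubling intervals mentioned above?
for label, months in [("24 months", 24), ("12 months", 12),
                      ("6 months", 6), ("6 weeks", 1.5)]:
    print(f"Doubling every {label}: 30 doublings take ~{steps * months / 12:.1f} years")
```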

Soon the question will no longer be HOW technology can do something (or what it costs), but WHY it should, and WHO controls it.

Clearly, without strong global stewardship and a human-centric (not tech- or profit/growth-centric) ethical framework, we will soon be heading towards a new kind of arms race, and not just in AI but also in human genome manipulation and geo-engineering. And this is one arms race we are unlikely to survive as a species. Some potential answers are presented for discussion in my book – I'd love to hear your feedback!

Download the PDF: Why the Singularity is certain to happen in my own lifetime, and why it matters 

More about my new book Technology vs Humanity, and finally: the brand-new German edition

Some related images from my recent presentations

 

The timing of the future (Frank Diana)

How long before a robot takes your job? Here’s when AI experts think it will happen (WEF)

A short comment on Singularity

My own ethics and principles

Tags: newsletter, latest book