
Is AI a danger to humanity? Well… it's complicated!

In an interesting display of “Yes, No, It's complicated”, three Indian researchers reflect on whether AI is a danger to humanity.

In the YES-camp, the key argument revolves around AI's almost exclusive focus on one particular aspect of human beings: the calculative capacity. The result is (artificial) intelligences at the service of corporations with only one ambition: profit.

“The success of these machines only reinforces the success of a particular view of human beings: not their vulnerability and finitude (characteristics that have catalysed so much of great music, art and literature), but largely some calculative capacity.”

The NO-camp assumes that AI has no agency and that technology is not dangerous in itself. The fault lies with the humans who (mis)use the technology; technology will always remain under our control, so we can literally pull the plug whenever we want.

Technology inherently does not have agency. It gains agency through its interaction with us and the life we give it. Whether we use AI to augment ourselves, to create new species, or to destroy lives and what we’ve built is entirely in our hands — at least for now.

Obviously this is not a black-and-white discussion, hence the third chapter, “It's complicated”, which calls for a debate on the ethics of AI (and of technology in general). Corporations and regulators need to employ ethicists.

“Regulators across the world need to be working closely with these academics and citizens’ groups to put brakes on both the harmful uses and effects of AI.”

 

Gerd Leonhard has suggested the need for a Global Digital Ethics Council – find out more via the links below and, of course, in Gerd's book Technology vs. Humanity.

The Alternative

Digital Ethics Council Gerd Leonhard and Tim Renner

We need to talk about AI!

 

This is a guest post by TFA's curator, PeterVan.
