7 AI Traps to consider (via Freedom to Tinker)
The writer, Dr Annette Zimmermann, criticises some overly simplistic memes and one-liners about AI Ethics. The seven traps identified in the article are: (1) the reductionism trap, (2) the simplicity trap, (3) the relativism trap, (4) the value alignment trap, (5) the dichotomy trap, (6) the myopia trap, and (7) the rule of law trap.
Each of the traps is well described and supported by some excellent quotes:
“As philosopher Daniel Dennett argues in a recent piece in Wired, ‘AI in its current manifestations is parasitic on human intelligence. It quite indiscriminately gorges on whatever has been produced by human creators and extracts the patterns to be found there—including some of our most pernicious habits. These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals’.”
“As it turns out, the pursuit of AI Ethics—even in its ‘weak’ form—is subject to a range of possible pitfalls. Many of the current discussions on the ethical dimensions of AI systems do not actively include ethicists, nor do they include experts working in relevant adjacent disciplines, such as political and legal philosophers. Therefore, a number of inaccurate assumptions about the nature of ethics have permeated the public debate, which leads to several flawed assessments of why and how ethical reasoning is important for evaluating the larger social impact of AI.”
We won't detail all seven traps here – the article speaks for itself – but as a teaser, here is the Rule of Law Trap:
“Ethics is essentially the same as the rule of law. When we lack appropriate legal categories for the governance of AI, ethics is a good substitute. And when we do have sufficient legal frameworks, we don’t need to think about ethics.”
Really good stuff.
Guest post by The Futures Agency content curator Petervan