“At first glance, the goal seems simple enough: make an AI that behaves in an ethically responsible way. However, it is far more complicated than it initially seems, as a remarkable number of factors come into play. As Conitzer’s project outlines, ‘moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems.’”
The Evolution of AI: Can Morality be Programmed? – FLI – Future of Life Institute
https://futureoflife.org/2016/07/06/evolution-of-ai/