Markus Christen: There are countless different predictions. But what they all agree on is that AI will change our lives. What this actually means is much less clear. We have to be aware that the term artificial intelligence covers very different technologies and applications – from industrial production to chatbots. In this sense, the term is very vague. In the TA Swiss study, we mainly looked at automated decision-making using AI systems. These systems are used to process loan applications, to select job applicants and in self-driving vehicles, for example. The question is what impact it will have on our lives if AI helps us make decisions or makes them for us.
Christen: The trend toward data-driven decision-making is very widespread. It is based on the ideal of rational, objective and fair judgment, as opposed to the sometimes irrational and prejudiced decisions we humans tend to make. Whether purely rational decisions are also humane is debatable. If, for example, loan decisions were in future made solely by AI on the basis of rational criteria, that would narrow the range of possible outcomes, which could be a problem. Humans admittedly make wrong decisions, but that is not necessarily a bad thing for the system as a whole. As we know from science, taking the wrong path can sometimes unexpectedly lead us to the right destination.
Christen: To me, the fear that we will lose control and that machines will decide everything in future is exaggerated. Ultimately, AI systems are designed by us – they embody our design decisions. The systems cannot take control on their own, as they lack the required consciousness. What's more, unlike us, they do not have desires. We deploy AI according to our desires, because it can complete certain tasks better than we can. Finally, even in the most autonomous cases, AI needs to be reviewed constantly. We need to continuously test whether the system is actually doing what it was designed to do. This is a job that will always have to be performed by a human. A banker who relies on the recommendations of an AI to make mortgage decisions must ultimately know when they can trust the system and when they can't.
Christen: Transparency is paramount when it comes to artificial intelligence, which is why research in the field of explainable AI is essential. Incidentally, for some AI applications there should be a duty to state reasons. The state, for example, can impose rules and obligations on citizens, and it must be able to justify why and how it deploys AI for such sovereign tasks.
Christen: Let me give you an example: In 2016, US firm Northpointe's COMPAS algorithm made headlines. This AI system provides assessments of recidivism risk to US judges who have to decide on the early release of offenders. The investigative journalism organization ProPublica looked into the way the system works and concluded that its predictions were racially biased: African American defendants who did not go on to re-offend were almost twice as likely as white defendants to be wrongly classified as high risk, despite the fact that skin color was explicitly excluded as a criterion in the program. ProPublica therefore surmised that the developers had either been negligent when writing the algorithm or that they were implicitly racist.
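How can a system discriminate on the basis of a feature it was never given? A minimal Python sketch may help. This is purely illustrative and not the COMPAS model; it assumes NumPy and scikit-learn are available, and every variable name and number is invented. It shows how a proxy feature that correlates with a protected attribute can smuggle that attribute back into the predictions:

```python
# Illustrative sketch: a model can pick up a protected attribute it
# never sees, via a correlated proxy feature. All names and numbers
# here are invented for the example, not taken from COMPAS.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

group = rng.integers(0, 2, size=n)          # protected attribute, never a model input
proxy = group + rng.normal(0, 0.7, size=n)  # hypothetical feature correlated with group
skill = rng.normal(0, 1, size=n)            # a legitimately predictive feature

# Historical labels reflect different base rates across the groups
# (e.g. biased past data), so the proxy becomes predictive of the label.
p = 1 / (1 + np.exp(-(0.8 * skill + 1.0 * group - 0.5)))
y = rng.random(n) < p

X = np.column_stack([skill, proxy])         # note: `group` is excluded
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", scores[group == 0].mean().round(3))
print("mean predicted risk, group 1:", scores[group == 1].mean().round(3))
# The model never saw `group`, yet its risk scores differ between groups.
```

Real tabular data is full of such proxies, which is why simply deleting the protected column rarely removes the disparity.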
Christen: No, following ProPublica's revelations, researchers were able to show that the problem is inherent in the task itself. Fairness issues in such AI systems are mathematically inevitable, because different fairness criteria – and these need to be defined when programming an algorithm – can be mutually exclusive. If, for example, two groups re-offend at different underlying rates, no risk score can be well calibrated for both groups and at the same time produce equal error rates for both. AI-based decision-making systems therefore do not eliminate bias entirely: eradicating one form of unfairness automatically gives rise to other forms of unfairness.
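This impossibility result, worked out in the wake of the COMPAS debate, can be demonstrated numerically. The following Python sketch uses invented numbers and assumes only NumPy: it constructs risk scores that are calibrated by design and shows that unequal base rates then force unequal false positive rates:

```python
# Numerical sketch of the fairness trade-off: a calibrated score applied
# to groups with different base rates cannot also equalize error rates.
import numpy as np

rng = np.random.default_rng(7)

def fpr_for_group(base_rate, n=200_000, threshold=0.5):
    # Draw risk scores whose mean equals the group's base rate; sampling
    # the labels from the scores themselves makes the scores calibrated
    # by construction.
    scores = rng.beta(4 * base_rate, 4 * (1 - base_rate), size=n)
    y = rng.random(n) < scores
    flagged = scores >= threshold
    # False positive rate: flagged as high risk among those who in fact
    # do not re-offend (the true negatives).
    return flagged[~y].mean()

# The very same calibrated scoring rule, applied to two groups that
# differ only in their base rate of re-offending:
print("FPR at base rate 0.50:", round(float(fpr_for_group(0.50)), 3))
print("FPR at base rate 0.30:", round(float(fpr_for_group(0.30)), 3))
# The higher-base-rate group receives a markedly higher false positive
# rate: satisfying calibration forecloses equal error rates.
```

The conflict cannot be engineered away; someone has to decide which fairness criterion takes precedence.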
Christen: Yes. Philosophers have been grappling with the question of what is fair for centuries. As far back as Ancient Greece, Aristotle established that there are different forms of fairness – or justice. We have to recognize that AI doesn't free us from these problems. We can't evade the question of which type of fairness is relevant. Through a new research project that our team has launched with colleagues from other universities, we want to raise awareness of fairness issues in the software community. The project aims to develop tools that teach developers in a fun way that building intelligent algorithms is not only a technical challenge, but also a question of fairness.
Christen: Systems like this can perhaps make us aware of our own preconceptions and systematic errors. AI can't make decisions for us, but it can make recommendations by offering a sort of second opinion. Fundamentally, I believe that AI has the potential to make us wiser. Precisely because it works very differently from us, and because it can pool and process huge amounts of data that we don't have, it gives us the opportunity to hold a mirror up to ourselves. This is a positive thing, particularly when it comes to complex decisions. The crucial factor is that there must always be human-machine interaction. Machines that decide autonomously are not desirable.
Christen: Transparency and clear rules are crucial if we are to deal with artificial intelligence in a positive way. Deployed effectively, AI could become a valuable companion in our thinking that acts like a good friend who can suggest things we wouldn't have thought of ourselves. Together we can become smarter.