Trust in artificial intelligence

Three wrong questions about trust and AI

Published on 23.09.2020, Marisa Tschopp

No trust, no use: trust is often put forward as a critical success factor in the acceptance and use of new technologies. But it’s not that simple. Does the use and acceptance of new technologies really go hand in hand with trust? To deal with trust and AI more successfully, you first need one thing: better questions.

The use of ubiquitous smart technologies is increasing tensions between humans, machines and society. Passionate debates rage in which two extremes seem to hold sway: the-end-is-nigh Nostradamus followers and fervent tech evangelists.

Uncertainty and scepticism are growing, which inevitably puts the spotlight on trust: how can we convince consumers to trust us? Posed this way, the question is a clever diversionary manoeuvre that distracts from the weaknesses of a company’s own culture or the quality of its products. Use of the word trust is soaring in design guidelines, advertising, image campaigns and the codes of ethics of tech firms, banks and AI start-ups. Yet it mostly serves as a meaningless filler word intended to evoke positive connotations. This could also be called “trust-washing”.

It’s high time to clear away the myths, speak plainly and stop asking the wrong questions:

1. Do you [the user] trust AI?

The question of whether someone trusts AI, or to what extent, is meaningless on its own. Trust always has three dimensions: who trusts, who is trusted, and what the goal of that trust is. For example: I trust Amazon to deliver my order promptly. But I don’t trust Amazon to use my personal data ethically, nor do I trust that it won’t misuse that data for marketing purposes and analyse me using questionable “psychographic” methods.

A better question would be: do you trust this [AI-based product] to achieve objective X? 

2. How can we [the tech company] increase trust in AI?

Marketing and sales departments will be clamouring to work out how to control, influence or manipulate consumers so that trust in AI product X, and in turn the likelihood of adoption, increases. Here a clear demarcation and a change of focus are needed: trust and trustworthiness are fundamentally different concepts. Trust is a mindset of the consumer, whilst trustworthiness is a property of a product, a process or a company. Guidelines for working on these aspects are popping up en masse. But trust cannot be bought; it has to be earned by demonstrating that you are worthy of it.

A better question would be: how can we be trustworthy?

3. Should we [as a society] trust AI?

Never, as Joanna Bryson would say. AI-based programs are not a matter of trust. Software needs to be trustworthy, i.e. built in such a way that its developers can be held accountable. This means we need to know, and be able to verify, what a particular system is capable of and what it is not. Trust is irrelevant here, just as it is in bookkeeping. From an ethical perspective, the question is simply misplaced.

A better question would be: how can we better understand AI?

Countless psychological research groups are rightly working to decode the mystery of trust and technology: how does trust influence the way in which we rely on technology and use it? What role do other factors, such as an understanding of AI or a perceived sense of agency, play in user behaviour? Fatal accidents have been documented in which people placed too much or too little trust in technology: the well-known “death by GPS” phenomenon, for example, or the engineer who had an accident in a Tesla because he trusted the system completely to take him to his destination without any input. That trust led him to believe it was safe to play video games during the trip.

To sum up, we need a nuanced view of the situation and a transdisciplinary path that integrates science, practice, politics and other stakeholders. It’s high time we asked the right questions and discussed them together.

 

Unfortunately, Connecta cannot be held as planned. Marisa Tschopp would have been one of the 80 speakers. An alternative programme is available through Connecta TV, Doc and Talk – find out more at: www.swisspost.ch/connecta.

Marisa Tschopp

Marisa Tschopp is a researcher at scip AG, a cybersecurity and tech company in Zurich. She completed a master’s degree in business psychology at the Ludwig Maximilian University in Munich. She is active in research on Artificial Intelligence from a humanistic perspective, focusing on psychological and ethical aspects. She has given talks at TEDx events and has also represented Switzerland as an ambassador in the Women in AI (WAI) initiative.

