How can algorithms be trusted?

Approximately 5 minutes reading time

In an earlier blog post, we looked at the dangers of algorithms and the way algorithms are developing in the insurance sector. This time we are going to go deeper into the subject of trust in algorithms.

Neutralising trust

A lot of people make a big mental error when it comes to trust. We assume that the more we know about something, the more we can trust it. In fact, it works the other way round: if you know (almost) everything, you could argue that trust is no longer necessary.

If you trust something, you essentially believe in it without knowing whether it is true. In other words, when you trust something you stop looking for additional information, says researcher Esther Keymolen (article in Dutch). If you have absolute certainty that an action will have a certain effect, then trust no longer has a function. An easy way to illustrate this is buying a jar of peanut butter. You put the jar in your shopping basket, blindly trusting that the peanut butter is safe for consumption. In other words: you do your shopping trusting in a world full of uncertainty. In this case, your trust comes largely from your trust in the brand of peanut butter you are buying.

Rebuilding trust

This mechanism is in sharp contrast to the way we often try to build up and regain trust at a social level. Over the last 10 years, we have done everything we can to remove as much uncertainty as possible, and transparency has become the magic word in a lot of business plans. After all, the financial services sector was hit with a tsunami of new laws and regulations after the credit crisis of 2008, and for good reason. The goal: to regain public trust. The means: more transparency in the financial world. Transparency is also seen as vitally important in the insurance sector.

"A transparent pension system is unreliable"

Not everyone, though, agrees that transparency actually increases the level of trust. According to the cognitive neuroscientist Victor Lamme – famous for his book Free Will Doesn't Exist – the emphasis on transparency has actually been the worst idea of the last 20 years. His theory is that people are subconsciously seen as untrustworthy if they have to do a lot of explaining to convince somebody they can be trusted. It is just the way our brains are wired.

Lamme has pointed out on numerous occasions how transparency can have a negative impact on our pension system, and that regulatory authorities are actually damaging it by focusing on transparency. Not only does this focus decrease the level of interest in pensions, it also erodes the public backing and support for our pension system.

Simple algorithms

More transparency does not automatically equal more trust. The best thing to do is keep it simple. A major proponent of this view is Cathy O'Neil – a critic of data analysis and author of a book with a title that says it all: Weapons of Math Destruction – who now audits algorithms in order to simplify them. At the end of the day, people do not want to be overwhelmed by complicated stories about technological solutions; they want simplicity. It only makes sense to give people a look at the engine that drives the systems – the black box – if they are able to understand the way it works and form a meaningful opinion in simple terms.

Transparency for informed trust

However, even if you make things easy to understand, you still need to be totally transparent about how the system works. But that in-depth transparency only needs to be given to those who really want or need to know. It won't increase the level of trust in the general population, because they are not able (or willing) to interpret the information in a meaningful way. You buy milk on the basis of 'blind trust', and in practice you don't have much choice. In other cases, the situation is totally different – for example, for someone with a professional interest (or a public interest group). A scientist thinking about building on research carried out by others will (hopefully) want to know more about it first, so they can act on the basis of 'informed trust'.

We also need to make that distinction when it comes to trust in a smart society full of digital gadgets. It is easy for us to become overwhelmed with too much information – for example, when you have to read a licence agreement before you can download a new app, or a long privacy statement before you can use a website. It only makes us irritated, especially if we already distrust the source of that information: the established institutions of business, media, and government. A couple of years ago, the Edelman Trust Barometer revealed a trend that was a perfect 'sign of the times': peers are now as credible as experts (a trusted reference group with shared values, such as colleagues or friends who influence the way you behave), according to the researchers. Although this presents a new challenge, it also opens up the prospect of new ways of building trust for those who are not afraid to think outside the box.

The Wikipedia model

An inspiring example is the "Wikipedia model", where trust in the reliability of the information has been built up through peer review. The model is based on a decentralised approach, where a large group of unorganised individuals is collectively responsible for the reliability of the information. The layperson trusts on the basis of simplicity – blind trust in the brand promise of Wikipedia.

On the other hand, the model is also supremely transparent for anybody who is really interested – informed trust based on detailed knowledge of how the system actually works. In some cases, these concepts can also be applied to algorithms. Some organisations share their algorithms or general source code on development platforms like GitHub, so that other developers can review them, use them, and exchange ideas about them.

Application of algorithms by insurers

As soon as algorithms can be trusted, we can do great things with them, because the underlying data provides a solid basis for making correct and justifiable decisions. A great example is Formula 1, where the collection and analysis of data has become an essential part of the process. Nowadays, winning and losing depends not only on the individual skills of the driver, but also on the ability to make the right decisions on the basis of data.

Insurance companies can learn a lot from Formula 1. In both cases, there is an enormous quantity of available data, which is a fundamental part of being able to make reliable decisions on the basis of algorithms. Would you like to know more about how the insurance sector can learn from Formula 1? Then read our eBook, which you can download using the link below.

Read more about data and algorithms

Winning with a data-driven strategy