Blind trust in algorithms

Approximately 2 minutes reading time

Artificial Intelligence (AI) and machine learning are becoming an integral part of our society. These technologies are built on algorithms. Some of these algorithms are based on very straightforward mathematical formulas, but increasingly, the underlying workings are becoming more obscure. How can we monitor these ‘more obscure’ algorithms? And how do we know whether we can trust them? These questions are becoming increasingly relevant for insurers who apply algorithms. To avoid a loss of trust, it is essential that insurers anticipate the impact of this issue well in advance.

In 2018 a journalist filmed a group of tourists cycling through a tunnel in Amsterdam. Cars were speeding past them as they tried to stay close to the tunnel wall. It was a life-threatening situation, and the police had to step in to get them out. The explanation the tourists gave spoke volumes and was quite amusing: they were just following the GPS on their iPhone. If they got into trouble with the police because of it, they would simply sue Apple when they got home.

This anecdote confirms a few things:

  1. Algorithms are having more and more impact on our lives;
  2. People trust technology blindly;
  3. But this trust starts to evaporate as soon as something goes wrong or they are confronted with real danger.

The dangers of unreliable algorithms

One of the dangers of unreliable algorithms is that they draw the wrong conclusions from the available data. Nassim Nicholas Taleb – prominent thinker and author of the bestseller The Black Swan – put it this way: the more data you analyse, the more you start to see patterns in totally random coincidences. And that can sometimes put people in dangerous situations.

Taleb gives the example of a statistical correlation between the amount of time a person stays in hospital and their zodiac sign. Logically, of course, there is no causal relationship between the two. However, in a society permeated with data, there is a real danger that we start to draw conclusions from such correlations and build them into algorithms.
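
Taleb's point is easy to reproduce. The sketch below is not from the article; the patient numbers, distributions and 5% threshold are illustrative assumptions. It generates purely random hospital stays and zodiac signs, then tests every sign separately: with twelve comparisons, a 'significant' correlation will regularly appear by chance alone.

```python
# Minimal sketch (illustrative assumptions only): with enough random data and
# enough comparisons, "significant" correlations appear purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_patients = 500
zodiac_signs = rng.integers(0, 12, size=n_patients)          # 12 signs, assigned at random
length_of_stay = rng.exponential(scale=5.0, size=n_patients)  # days in hospital, pure noise

# Test each sign against the rest; at a 5% threshold we expect roughly
# one false positive for every 20 such tests.
for sign in range(12):
    in_sign = length_of_stay[zodiac_signs == sign]
    others = length_of_stay[zodiac_signs != sign]
    t_stat, p_value = stats.ttest_ind(in_sign, others, equal_var=False)
    if p_value < 0.05:
        print(f"Sign {sign}: 'significant' difference (p = {p_value:.3f}) "
              "even though the data are pure noise")
```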

Another danger is that algorithms can replicate human value judgements, because they are fed with data based on human behaviour. And then there is the additional risk of a lack of professional rigour in the way algorithms are developed, which can lead to unintentionally unreliable applications.
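
As a rough illustration of that first risk, the sketch below uses entirely hypothetical data and variable names. A simple model is trained on historical decisions that were systematically harsher on one group; the model then reproduces the same bias for otherwise identical applicants, even though nobody programmed the prejudice in.

```python
# Minimal sketch (hypothetical data): a model trained on historically biased
# decisions reproduces the bias, even though nobody programmed it in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

income = rng.normal(50, 15, n)    # a legitimate risk factor
group = rng.integers(0, 2, n)     # e.g. a postcode or demographic proxy

# Historical human decisions: mostly income-driven, but systematically
# harsher on group 1 regardless of income.
approved = (income + rng.normal(0, 5, n) - 10 * group) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The same applicant profile, differing only in group membership:
applicant = np.array([[50, 0], [50, 1]])
print(model.predict_proba(applicant)[:, 1])  # approval probability drops for group 1
```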

Algorithms in the insurance sector

The insurance sector is not a frontrunner in the development and use of algorithms, but insurers are definitely starting to make more use of them. The pricing of premiums, for example, is partly determined by algorithms. But can we really trust these algorithms, which affect all of our lives whether we know it or not?
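
To give a sense of what 'partly determined by algorithms' can mean in practice, here is a minimal, purely illustrative sketch of a claim-frequency model of the kind that can feed into premium pricing. The variables, figures and synthetic data are assumptions for the example, not taken from any insurer or from the eBook.

```python
# Minimal, purely illustrative sketch: a claim-frequency model of the kind
# that can feed into algorithmic premium pricing. All data here is synthetic.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
n = 10_000

driver_age = rng.integers(18, 80, n)
annual_km_thousands = rng.uniform(2, 40, n)

# Synthetic claim counts: younger drivers and higher mileage -> more claims.
expected_rate = np.exp(-2.0 - 0.02 * (driver_age - 18) + 0.03 * annual_km_thousands)
claims = rng.poisson(expected_rate)

X = np.column_stack([driver_age, annual_km_thousands])
model = PoissonRegressor(alpha=1e-4).fit(X, claims)

# The predicted claim frequency feeds straight into the premium for a new policy.
new_policy = np.array([[25.0, 20.0]])  # 25-year-old driving 20,000 km per year
print("expected claims per year:", model.predict(new_policy)[0])
```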

Find out by reading our eBook - Algorithms in the Insurance Sector.

Read more about this topic

eBook - Advancing use of algorithms poses a new trust issue for insurers