How to be human in the age of the machine

This post is about, and inspired by, two books on what happens in the age of the machine and big data, when algorithms wield control: ‘Weapons of Math Destruction’ by Cathy O’Neil and ‘Hello World: How to Be Human in the Age of the Machine’ by Hannah Fry.

Algorithms and artificial intelligence don’t just suggest what you listen to next on Spotify, what you watch next on Netflix or what you buy next on Amazon. They also decide whether your application is passed on to the next round, be it for college or for a job, whether someone in prison is eligible for parole, and whether you get the loan you need for the house you want to buy. In theory, decisions made by mathematical models should be fairer and less biased – if only! Both books highlight examples of the opposite, as well as the systematic loss of transparency and accountability. If a model decides whether you get a loan – good luck contesting its decision. Most likely, no one knows how the model arrived at it, so you cannot challenge it or argue with it either. Welcome to the age of the machine.

I totally recommend both books as eye-openers for how much further machine-led decisions have already crept into society than most of us are consciously aware of. To me, two basic properties of artificial intelligence are responsible for the problems we struggle with:

  1. Machine learning is entirely empirical. It can and will learn whatever is present in the data used to train the model, including biases – the small sketch after this list illustrates how such a bias gets baked in. It will not learn about things that are not present in the data – such as underrepresented groups (women, who make up half of the world’s population, are hardly a minority, but are nowhere near equally represented – Caroline Criado Perez’ ‘Invisible Women’ is next on my reading/discussion list).
  2. Machine learning models are usually black boxes, which makes the decision process both opaque and uncontestable. Trying to understand a decision from such a model post hoc can be a bit like trying to understand the brain, on which some of these models were in fact (very loosely!) modeled. Well, all I can say is that neuroscientists have been trying to understand the brain for more than 100 years, and while they have clearly made huge progress, ‘understanding the brain’ is still far away.
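
To make the first point concrete, here is a minimal sketch (my own, not from either book), using made-up ‘hiring’ data and scikit-learn: a model trained on historical decisions that penalized one group quietly learns to penalize that group itself. All names and numbers are purely illustrative.

```python
# A toy illustration (hypothetical data): a classifier trained on historically
# biased hiring decisions reproduces the bias, without ever being told to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)            # an applicant's "skill" score
group = rng.integers(0, 2, size=n)    # 1 = historically disadvantaged group

# Historical labels: hiring depended on skill, but group 1 was penalized.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The model assigns a clearly negative weight to the group flag:
# yesterday's bias is now baked into today's "objective" decision rule.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))

# Two equally skilled candidates from different groups get different odds.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1].round(2))
```

The model never sees the word ‘bias’; it simply fits the past, and the past was unfair. Dropping the group column doesn’t necessarily help either, since other features can act as proxies for it.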

These two basic features of artificial intelligence combine to create the conundrums that society is now grappling with. What we can say for certain is that the age of the machine hasn’t turned out to be fairer. Whether it ‘threatens democracy’, as Cathy O’Neil puts it – best to judge for yourself after reading her book.
