What does it mean for a machine’s decision to be “fair”? So far, the public debate has focused mostly on the issue of bias and discrimination. That is understandable: most people would expect machines to be less biased than humans (indeed, this is often given as the rationale for using them in processes such as recruitment), so it is right to pay attention to evidence that they can be biased, too.
But the word “fair” has a lot of interpretations, and “unbiased” is only one of them. I found myself on the receiving end of an automated decision recently which made me think about what it really means to feel that you have been treated justly, and how hard it might be to hold on to those principles in an increasingly automated world.
I have a personal Gmail account which I use for correspondence about a book project I am working on. I woke up one morning in November to discover that I could no longer access it. A message from Google said my access had been “restricted globally” because “it looks as though Gmail has been used to send unwanted content. Spamming is a violation of Google’s policies.” The note said the decision had been made by “automatic processing” and that if I thought it was a mistake, I could submit an appeal.
I had not sent any spam and couldn’t imagine why Google’s algorithm thought that I had. That made it hard to know what to write in the “appeal” text box, other than a panicked version of something like, “I didn’t do it (whatever it is)!” and, “Please help, I really need access to my email and my files”. (To my relief, I realised later that I hadn’t lost access to my drive.)
Two days later, I heard back: “After reviewing your appeal, your account’s access remains restricted for this service.” I wasn’t given any more information on what I had supposedly done or why the appeal had been rejected, but was told that “if you disagree with this decision, you can submit another appeal.” I tried again and was rejected again. I did this a few more times — curious, at this point, about how long this doom loop could continue. A glance at Reddit suggested other people had been through similar things. Eventually, I gave up. (Google declined to comment on the record.)
Among regulators, one popular answer to the question of how to make automated decisions more “fair” is to insist that people can request a human to review them. But how effective is this remedy? For one thing, humans are prone to “automation complacency” — a tendency to trust the machine too much. In the case of the UK’s Post Office scandal, for example, where sub-postmasters were wrongly accused of theft because of a faulty computer system called Horizon, a judge in 2019 concluded that people at the Post Office displayed “a simple institutional obstinacy or refusal to consider any possible alternatives to their view of Horizon”.
Ben Green, an expert on algorithmic fairness at the University of Michigan, says there can be practical problems in some organisations, too. “Often times the human overseers are on a tight schedule — they have many cases to review,” he told me. “A lot of the cases I’ve looked at are instances where the decision is based on some sort of statistical prediction,” he said, but “people are not very good at making those predictions, so why would they be good at evaluating them?”
Once my impotent rage about my email had simmered down, I found I had a certain amount of sympathy with Google. With so many customers, an automated system is the only practical way to detect breaches of its policies. And while it felt deeply unfair to have to plead my case without knowing what had triggered the system, or receiving any explanation of pitfalls to avoid in an appeal, I could also see that the more detail Google offered about the way the system worked, the easier it would be for bad actors to get around it.
But this is the point. In increasingly automated systems, the goal of procedural justice — that people feel the process has been fair to them — often comes into conflict with other goals, such as the need for efficiency, privacy or security. There is no easy way to make those trade-offs disappear.
As for my email account, when I decided to write about my experience for this column, I emailed Google’s press office with the details to see if I could discuss the issue. By the end of the day, my access to my email account had been restored. I was pleased, of course, but I don’t think many people would see that as particularly fair either.
sarah.oconnor@ft.com