Describing the horrors of communism under Stalin and others, Nobel laureate Aleksandr Solzhenitsyn wrote in his magnum opus, “The Gulag Archipelago,” that “the line dividing good and evil cuts through the heart of every human being.” Indeed, under the communist regime, citizens were removed from society before they could cause harm to it. This removal, which often entailed a trip to a labor camp from which many did not return, took place in a manner that deprived the accused of due process. In many cases, the mere suspicion, or even a hint, that an act against the regime might occur was enough to earn a one-way ticket, with little to no recourse. The underlying premise was that officials knew when someone might commit a transgression. In other words, law enforcement knew where that line lay in people’s hearts.

Akhil Bhardwaj is an Associate Professor of Strategy and Organization at the University of Bath, UK. He studies extreme events, which range from organizational disasters to radical innovation. (Image credit: Akhil Bhardwaj)

The U.K. government has decided to chase this chimera by investing in a program that seeks to preemptively identify who might commit murder. Specifically, the project uses government and police data to profile people and “predict” who has a high likelihood of committing murder. The program is currently in its research stage, but similar tools are already being used to inform probation decisions.

Such a program, which reduces individuals to data points, carries enormous risks that may well outweigh any gains. First, the output of such programs is not error-free, meaning it could wrongly implicate people. Second, we will never know whether a prediction was wrong, because there is no way to observe something that does not happen: was a murder prevented, or would it never have taken place at all? The question is unanswerable. Third, the program can be misused by opportunistic actors to justify targeting people, especially minorities; the ability to do so is baked into any bureaucracy.
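
To see how the first risk plays out, consider a rough back-of-the-envelope calculation. The numbers below are purely hypothetical (my own illustration, not figures from the U.K. program), but they show how even a screening tool with impressive-sounding accuracy flags overwhelmingly innocent people when the event it tries to predict is as rare as homicide.

```python
# Hypothetical illustration of the base-rate problem; none of these numbers
# come from the U.K. program or any real risk tool.

population = 1_000_000        # people screened (assumed)
base_rate = 10 / 100_000      # assumed rate of future homicide offenders
sensitivity = 0.90            # assumed: 90% of true future offenders are flagged
false_positive_rate = 0.05    # assumed: 5% of everyone else is wrongly flagged

true_offenders = population * base_rate          # 100 people
non_offenders = population - true_offenders      # 999,900 people

true_positives = sensitivity * true_offenders            # 90 flagged correctly
false_positives = false_positive_rate * non_offenders    # ~50,000 flagged wrongly

precision = true_positives / (true_positives + false_positives)
print(f"Total flagged:   {true_positives + false_positives:,.0f}")
print(f"Wrongly flagged: {false_positives:,.0f}")
print(f"Chance a flagged person is a genuine future offender: {precision:.2%}")
# With these assumptions, fewer than 2 in 1,000 flagged people would ever offend.
```

Making the tool more accurate helps less than one might expect, because the rarity of the event, not the cleverness of the algorithm, dominates the arithmetic.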

Consider: a bureaucratic state rests on its ability to reduce human beings to numbers. In doing so, it offers the advantages of efficiency and fairness — no one is supposed to get preferential treatment. Regardless of a person’s status or income, the DMV (DVLA in the U.K.) treats an application for a driver’s license, or its renewal, the same way. But mistakes happen, and navigating the labyrinth of bureaucratic procedures to rectify them is no easy task.

In the age of algorithms and artificial intelligence (AI), this problem of accountability and recourse in case of errors has become far more pressing.

The ‘accountability sink’

Mathematician Cathy O’Neil has documented cases of schoolteachers being wrongfully fired because of poor scores calculated by an AI algorithm. The algorithm, in turn, was fueled by what could easily be measured (e.g., test scores) rather than by the effectiveness of teaching (e.g., whether a poorly performing student improved significantly, or how much teachers helped students in ways that cannot be quantified). The algorithm also glossed over whether grade inflation had occurred in previous years. When the teachers questioned the authorities about the performance reviews that led to their dismissal, the explanation they received amounted to “the math told us to do so” — even after the authorities admitted that the underlying math was not 100% accurate.

As such, the use of algorithms creates what journalist Dan Davies calls an “accountability sink” — it strips accountability by ensuring that no one person or entity can be held responsible, and it prevents the person affected by a decision from being able to fix mistakes.

This creates a twofold problem: An algorithm’s estimates can be flawed, and the algorithm does not get corrected because no one is held accountable. No algorithm can be expected to be accurate all the time; in principle, it can be recalibrated with new data. But that is an idealistic view that does not even hold true in science: scientists can resist updating a theory or schema, especially when they are heavily invested in it. Similarly, and unsurprisingly, bureaucracies do not readily update their beliefs.
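
To make the calibration point concrete, here is a minimal sketch, with hypothetical numbers of my own, of what updating a risk estimate with observed outcomes would look like. Nothing like this has been published for the U.K. program; the point is simply that when no outcomes are ever fed back, as in an accountability sink, the original estimate never changes.

```python
# Minimal sketch of recalibration: the risk estimate is treated as pseudo-counts
# (a simple Beta-Bernoulli update) and revised only when observed outcomes arrive.
# All numbers are hypothetical.

def recalibrate(prior_events: int, prior_non_events: int,
                new_events: int, new_non_events: int) -> float:
    """Return the updated estimated rate after folding in newly observed outcomes."""
    total = prior_events + prior_non_events + new_events + new_non_events
    return (prior_events + new_events) / total

# The system starts out believing 30% of flagged people go on to offend.
print(recalibrate(3, 7, 0, 0))    # 0.30 -- no feedback, so the estimate is unchanged

# If outcomes were tracked, the estimate would move toward reality:
print(recalibrate(3, 7, 1, 89))   # 0.04 -- after 90 observed cases, mostly non-events

# In an accountability sink, no outcomes are ever recorded, so the first call is
# the only one that ever runs: the flawed 30% estimate persists indefinitely.
```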

Using an algorithm to try to predict who is at risk of committing murder is perplexing and unethical. Not only could it be inaccurate, but there is also no way to know whether the system was right. In other words, if a potential future murderer is preemptively arrested, “Minority Report”-style, how can we know whether that person might have decided on their own not to commit murder? The U.K. government has yet to clarify how it intends to use the program, other than stating that the research is being carried out for the purposes of “preventing and detecting unlawful acts.”

In Louisiana, an algorithm is used to predict whether an inmate will reoffend, and that prediction is used to make parole decisions. (Image credit: wellesenterprises/Getty Images)

We’re already seeing similar systems being used in the United States. In Louisiana, an algorithm called TIGER (short for “Targeted Interventions to Greater Enhance Re-entry”) predicts whether an inmate might commit a crime if released, and that prediction serves as a basis for parole decisions. Recently, a 70-year-old, nearly blind inmate was denied parole because TIGER predicted he had a high risk of reoffending.

In another case, which eventually went to the Wisconsin Supreme Court (State v. Loomis), an algorithm was used to guide sentencing. Challenges to the sentence — including a request for access to the algorithm to determine how it reached its recommendation — were denied on the grounds that the technology was proprietary. In essence, the opacity of the technology was compounded by its proprietary status in a way that potentially undermined due process.

Equally troubling, if not more so, the dataset underlying the U.K. program — initially dubbed the Homicide Prediction Project — consists of data on hundreds of thousands of people who never granted permission for their data to be used to train the system. Worse, the dataset — compiled using data from the Ministry of Justice, Greater Manchester Police, and the Police National Computer — contains personal data, including, but not limited to, information on addiction, mental health, disabilities, previous instances of self-harm, and whether a person had been the victim of a crime. Indicators such as gender and race are also included.

These variables naturally increase the likelihood of bias against ethnic minorities and other marginalized groups. So the algorithm’s predictions may simply reflect policing choices of the past — predictive AI algorithms rely on statistical induction, so they project past (troubling) patterns in the data into the future.
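
A toy simulation illustrates the mechanism. The setup is entirely hypothetical and of my own making: two areas have identical underlying offense rates, but one is policed far more heavily, so far more of its offenses end up in the database. A model that simply projects recorded rates forward then “learns” that the over-policed area is riskier.

```python
import random

random.seed(0)

# Hypothetical simulation, not real data: both areas have the SAME true offense
# rate, but recorded offenses depend on how heavily each area is policed.
TRUE_OFFENSE_RATE = 0.02
DETECTION_RATE = {"area_a": 0.9, "area_b": 0.2}   # assumed policing intensity

records = []                                      # what the database actually sees
for area, detection in DETECTION_RATE.items():
    for _ in range(10_000):
        offended = random.random() < TRUE_OFFENSE_RATE
        recorded = offended and random.random() < detection
        records.append((area, recorded))

# A naive "risk model" that simply projects past recorded rates into the future:
for area in DETECTION_RATE:
    outcomes = [recorded for a, recorded in records if a == area]
    print(f"{area}: predicted risk {sum(outcomes) / len(outcomes):.3f}")

# Typical output: area_a around 0.018, area_b around 0.004 -- the model rates
# area_a several times riskier even though the true rates are identical by design.
```

If those predictions were then used to decide where to police next, the gap would only widen with each pass.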

In addition, the data overrepresents Black offenders from affluent areas, as well as offenders of all ethnicities from deprived neighborhoods. Past studies show that AI algorithms that make predictions about behavior work less well for Black offenders than they do for other groups. Such findings do little to allay genuine fears that racial minorities and other vulnerable groups will be unfairly targeted.

In his book, Solzhenitsyn informed the Western world of the horrors of a bureaucratic state grinding down its citizens in service of an ideal, with little regard for the lived experience of human beings. The state was almost always wrong (especially on moral grounds), but, of course, there was no mea culpa. Those who were wronged were simply collateral damage to be forgotten.

Now, half a century later, it is rather strange that a democracy like the U.K. is revisiting a horrific and failed project from an authoritarian Communist country as a way of “protecting the public.” The public does need to be protected — not only from criminals but also from a “technopoly” that vastly overestimates the role of technology in building and maintaining a healthy society.
