I Took an Algorithm to Court in Sweden. The Algorithm Won

In 2020, the city of Gothenburg, Sweden, implemented an algorithm to allocate school placements for students. The stated aim was to make the admissions process more efficient and streamlined. Instead, the rollout caused chaos, revealing the dangers of relying on opaque automated systems in public services.

The Promise of Efficiency

Municipalities often face challenges in managing school admissions, particularly when it comes to geographical catchment areas. The algorithm was framed as a solution to these administrative headaches: a neutral, objective tool that would weigh travel distances, family preferences, and school capacity more efficiently than manual case handling.

The Unfolding Crisis

Despite these promises, the algorithm’s implementation caused significant problems. Hundreds of children were assigned to schools miles from their homes, across rivers and fjords and over major highways. Parents were left in disbelief as they learned that their children had been placed in schools in neighborhoods they had never visited and had no connection to.

Questions arose: Had anyone considered whether a 13-year-old could reasonably walk to these schools in winter? What criteria guided these decisions? The school administration offered little clarity, leaving parents frustrated and confused. This was not just a matter of individual dissatisfaction; it was a systemic failure.

Revelation of Flawed Instructions

Nearly a year later, city auditors confirmed what many families suspected: the algorithm had been given flawed instructions. It calculated distances “as the crow flies,” ignoring actual walking routes. In a city divided by a major river, this oversight left many children facing hour-long commutes. The law assumes that students can walk or cycle to school; for many of those affected, that was simply not feasible.
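
To make the flaw concrete, here is a minimal sketch in Python. It is entirely my own illustration, with made-up coordinates and a hypothetical routed distance, since the city’s actual code was never disclosed. It contrasts a straight-line (“as the crow flies”) measure with the walking distance a child faces when the direct line crosses a river.

```python
import math

def crow_flies_km(lat1, lon1, lat2, lon2):
    """Great-circle ("as the crow flies") distance via the haversine formula."""
    earth_radius_km = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

# Illustrative points on opposite banks of a river (not real addresses).
home = (57.700, 11.940)
school = (57.715, 11.955)

straight = crow_flies_km(*home, *school)
walking = 5.8  # hypothetical routed distance via the nearest bridge, in km

print(f"Straight line: {straight:.1f} km")  # ~1.9 km: looks "close" on paper
print(f"Walking route: {walking:.1f} km")   # the commute a child actually faces
# An allocator that ranks schools by the straight-line number will happily
# place a child across the river, because on paper the school is nearby.
```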

In response to the outcry from families, the city made improvements for the following school year. However, for approximately 700 children already affected by the faulty algorithm, the situation remained unchanged. They would spend their entire junior high years in schools that were deemed “wrong” for them.

The Limitations of Individual Appeals

The official response from the city was that individual appeals were sufficient to address the issue. However, this perspective missed a crucial point: algorithms do not merely make isolated decisions; they create systems of decisions. When a group of children is wrongly placed in schools, it displaces others, creating a cascading effect. As the errors multiply, the injustice becomes increasingly difficult to detect and contest.
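
The cascade is easy to demonstrate. Below is a deliberately tiny toy model, my own construction rather than anything from the city’s system: three students, three one-seat schools, and a single mis-measured distance.

```python
def allocate(distances, seats):
    """Greedily place each student at the nearest school with a free seat."""
    seats = dict(seats)
    placement = {}
    for student, dists in distances.items():
        for school in sorted(dists, key=dists.get):
            if seats[school] > 0:
                seats[school] -= 1
                placement[student] = school
                break
    return placement

seats = {"A": 1, "B": 1, "C": 1}

# s1's 0.5 km distance to A is a crow-flies artifact; the walk is really 4.0 km.
distances = {
    "s1": {"A": 0.5, "B": 1.0, "C": 3.0},
    "s2": {"A": 0.6, "B": 2.0, "C": 3.0},
    "s3": {"A": 2.0, "B": 0.4, "C": 3.0},
}

print(allocate(distances, seats))
# {'s1': 'A', 's2': 'B', 's3': 'C'}
# One bad input: s1 takes A's only seat, s2 is pushed to B, and s3, whose
# distances were measured correctly, ends up at faraway C. Correcting s1's
# placement alone cannot repair the chain it set off.
```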

Algorithmic Injustice: A Broader Context

The problems in Gothenburg are not unique to Sweden. Similar algorithmic injustices have been observed across Europe. For example, the UK’s Post Office scandal involved the Horizon IT system, which falsely accused hundreds of post office operators of theft, leading to wrongful prosecutions and severe personal consequences. In the Netherlands, a childcare benefits scandal saw thousands of parents wrongly flagged as fraudsters, resulting in significant financial and emotional distress for families.

In both cases, the automated systems operated behind a veil of technical complexity, and accountability lagged as harm deepened over time. The failures of these systems highlight the urgent need for scrutiny and transparency in algorithmic decision-making.

Taking Action: A Legal Challenge

As a researcher in technology and a former lawyer, I felt compelled to take action. My son was among those affected by the algorithm, and I knew that simply appealing his placement would not address the systemic error. Thus, I decided to sue the city, not just for my son’s individual placement but to challenge the legality of the entire decision-making system.

My requests for disclosure of the algorithm had gone unanswered, so I had to work from the outside. I conducted a detailed analysis of hundreds of placements, using addresses and school choices to reconstruct how the system operated, and presented this evidence in court.
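
The kind of test this involved can be sketched in a few lines of Python. Everything below is hypothetical, simplified data standing in for the real records; the idea is to check how often the observed placements match the hypothesis “each student was assigned to the nearest school in a straight line.”

```python
import math

def planar_km(a, b):
    """Approximate straight-line distance between (lat, lon) points, in km."""
    (lat1, lon1), (lat2, lon2) = a, b
    dlat = (lat2 - lat1) * 111.32
    dlon = (lon2 - lon1) * 111.32 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat, dlon)

schools = {"North": (57.72, 11.97), "South": (57.69, 11.95)}

# Hypothetical records: (home coordinates, school actually assigned).
placements = [
    ((57.710, 11.960), "North"),
    ((57.700, 11.940), "South"),
    ((57.705, 11.955), "North"),
]

hits = sum(
    min(schools, key=lambda s: planar_km(home, schools[s])) == assigned
    for home, assigned in placements
)
print(f"{hits}/{len(placements)} placements match the crow-flies hypothesis")
# Across hundreds of real records, a consistently high match rate for this
# hypothesis (and a poor one for walking-route distance) is circumstantial
# but systematic evidence of how the system computed distance.
```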

The Court’s Response

The city’s defense was astonishingly straightforward. They claimed that the decision-making system was merely a “support tool” and asserted that they had done nothing wrong. They provided no technical documentation, no code, and no explanation of their processes to support their claims.

To my surprise, the court placed the burden of proof on me. The judges held that it was my responsibility to demonstrate that the system was unlawful, and that my analysis of its decisions was insufficient because I could not produce direct evidence from the algorithm’s code. The case was dismissed. The implication was clear: without access to the black box, I could not prove its flaws.

Reflections on Accountability

This experience exposed a troubling reality: we know that algorithms can fail, yet the systems meant to address those failures are inadequate. Courts have the power to compel disclosure and scrutiny, but when confronted with opaque decision-making systems, they too often leave individuals carrying the burden of proving wrongdoing. That is a formidable barrier to accountability.

As we continue to integrate algorithms into public services, it is essential to ensure that these systems are transparent and accountable. The stakes are high, and the potential for harm is real. We must advocate for mechanisms that allow for scrutiny and correction when automated systems fail.
