A detailed investigation into Sweden’s Social Insurance Agency, which administers the country’s highly regarded social security system, reveals troubling flaws in the use of algorithms designed to detect fraud. This new report, the latest in the Suspicion Machines series, uncovers how the agency’s reliance on automated systems to flag potential fraudsters has led to discrimination against vulnerable groups, including women, migrants, low-income earners, and those without university degrees.
The investigation, conducted by Lighthouse Reports in collaboration with Svenska Dagbladet, spent more than three years bringing to light the hidden workings of the agency's fraud-detection algorithms. Despite numerous attempts to access crucial data, including dozens of freedom-of-information requests and multiple court cases, the team was repeatedly stonewalled. Only through sustained persistence did the investigators obtain an unpublished dataset from 2017 covering more than 6,000 individuals flagged by the algorithm for investigation. The dataset revealed not only the names of those targeted but also key demographic information, shedding light on the significant biases embedded in the system.
Working with academic experts, the team conducted a series of statistical fairness tests on the algorithm, and the results were stark. The analysis found that women, migrants, people with lower income levels, and those without a university education were disproportionately affected by the algorithm’s decisions. These groups were more likely to be wrongly flagged as potential fraudsters, exposing them to unnecessary and often humiliating investigations, along with the suspension of their benefits. For many, this created a cycle of undue stress, financial hardship, and reputational damage.
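To make the idea of a statistical fairness test concrete, the sketch below shows one simple check of this kind: comparing the share of wrongly flagged people across demographic groups and testing whether the differences are statistically significant. The file name, the columns (`group`, `wrongly_flagged`), and the choice of test are illustrative assumptions, not the actual methodology or data used by the investigation.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical input: one row per flagged person, with a demographic 'group'
# label and a boolean 'wrongly_flagged' (True if the investigation found no
# fraud). Both the file name and the schema are assumptions for illustration.
df = pd.read_csv("flagged_cases_2017.csv")

# Share of flagged people in each group whose flag turned out to be wrong.
rates = df.groupby("group")["wrongly_flagged"].mean()
print(rates)

# Disparity ratio between the most- and least-affected groups; values far
# above 1 indicate that the algorithm's errors fall unevenly across groups.
print("disparity ratio:", rates.max() / rates.min())

# Chi-squared test of independence between group membership and being
# wrongly flagged, as one basic statistical check of that disparity.
table = pd.crosstab(df["group"], df["wrongly_flagged"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```

In practice, fairness audits of this sort compare several such metrics (flag rates, wrongful-flag rates, investigation outcomes) across groups; the snippet above illustrates only the general shape of such a comparison.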
The findings raise serious questions about the use of algorithms in public administration, particularly in welfare systems where the consequences of mistakes are severe. Sweden’s social security system, typically seen as a model for other nations, has relied heavily on automation in an effort to streamline processes and prevent fraud. However, this investigation exposes a critical flaw: the technology, which was meant to protect against abuse, has instead perpetuated systemic inequalities, undermining the very values of fairness and justice that the system is built on.
This report underscores the broader issue of algorithmic accountability, particularly in the context of state-driven surveillance and welfare systems. While algorithms are often touted as objective and impartial, this case illustrates how they can replicate and even exacerbate biases present in society. The findings challenge the assumption that technology is inherently neutral and emphasize the need for greater transparency, oversight, and fairness in the use of such systems.
As debates around the role of algorithms in governance continue to unfold, this investigation serves as a crucial reminder of the human cost of handing oversight of social security systems to opaque technology. Sweden's experience with its Social Insurance Agency should prompt broader discussions about how welfare systems around the world can evolve to ensure that technological solutions support, rather than disadvantage, the most vulnerable members of society.
References:
- Svenska Dagbladet – Swedish daily newspaper and co-partner in the investigation, providing additional context and coverage of the findings.
- Lighthouse Reports – Investigative journalism platform focused on revealing hidden systemic abuses.
- Suspicion Machines – Ongoing series investigating the use of algorithms in state-run welfare programs and other public services.
- The Guardian – Previous coverage of algorithmic bias and the role of technology in public policy.