Swedish authorities urged to discontinue AI welfare system

Sweden’s algorithmically powered welfare system is disproportionately targeting marginalised groups in Swedish society for benefit fraud investigations, and must be immediately discontinued, Amnesty International has said.

An investigation published by Lighthouse Reports and Svenska Dagbladet (SvD) on 27 November 2024 found that the machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, disproportionately flags certain groups for further investigation over social benefits fraud, including women, individuals with “foreign” backgrounds, low-income earners and people without university degrees.

Based on an analysis of aggregate outcome data from fraud investigations into cases flagged by the algorithm, the investigation also found the system was largely ineffective at identifying men and wealthy people who had actually committed some kind of social security fraud.

To detect social benefits fraud, the ML-powered system – introduced by Försäkringskassan in 2013 – assigns risk scores to social security applicants; an application whose risk score exceeds a set threshold is automatically flagged for investigation.

Those with the highest risk scores are referred to the agency’s “control” department, which takes on cases where there is suspicion of criminal intent, while those with lower scores are referred to case workers, where they are investigated without the presumption of criminal intent.
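Försäkringskassan has not disclosed how its model works, but the threshold-and-routing flow described above can be sketched in Python. Everything here – the thresholds, the score range and the field names – is a hypothetical illustration, not the agency’s actual implementation.

```python
# Hypothetical sketch of the scoring-and-routing flow described in the
# article. The thresholds and names are invented for illustration;
# Forsakringskassan has not disclosed its actual implementation.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    risk_score: float  # output of the (undisclosed) ML model, assumed 0.0-1.0

CONTROL_THRESHOLD = 0.9     # hypothetical: suspicion of criminal intent
CASEWORKER_THRESHOLD = 0.6  # hypothetical: routine check, no presumed intent

def route(app: Application) -> str:
    """Route an application: the highest scores go to the 'control'
    department, lower but still elevated scores go to case workers,
    and everything else proceeds without investigation."""
    if app.risk_score >= CONTROL_THRESHOLD:
        return "control_department"
    if app.risk_score >= CASEWORKER_THRESHOLD:
        return "case_worker_review"
    return "no_investigation"

print(route(Application("A-001", 0.95)))  # control_department
print(route(Application("A-002", 0.72)))  # case_worker_review
```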

Once a case is flagged, fraud investigators have the power to trawl through the person’s social media accounts, obtain data from institutions such as schools and banks, and even interview the individual’s neighbours as part of their investigations. Those incorrectly flagged by the social security system have complained that they then face delays and legal hurdles in accessing their welfare entitlement.

“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan, senior investigative researcher at Amnesty Tech. “One of the main issues with AI [artificial intelligence] systems being deployed by social security agencies is that they can aggravate pre-existing inequalities and discrimination. Once an individual is flagged, they’re treated with suspicion from the start. This can be extremely dehumanising. This is a clear example of people’s right to social security, equality and non-discrimination, and privacy being violated by a system that is clearly biased.”

Testing against fairness metrics

Using the aggregate data – which was only available because Sweden’s Inspectorate for Social Security (ISF) had previously requested the same data – SvD and Lighthouse Reports were able to test the algorithmic system against six standard statistical fairness metrics, including demographic parity, predictive parity and false positive rates.
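Metrics like these can be computed directly from aggregate outcome data. The Python sketch below uses invented counts purely for illustration – the real figures are in the data originally requested by the ISF – and shows how unequal flag rates (demographic parity), unequal precision among flagged cases (predictive parity) and unequal false positive rates would surface.

```python
# Sketch of a fairness check over aggregate outcomes, per group. All
# counts are invented for illustration; the real figures come from the
# aggregate data originally requested by Sweden's ISF.
groups = {
    # group -> applications flagged, fraud confirmed among flagged, total applications
    "group_a": {"flagged": 900, "confirmed_fraud": 90, "total": 10_000},
    "group_b": {"flagged": 300, "confirmed_fraud": 60, "total": 10_000},
}

for name, g in groups.items():
    flag_rate = g["flagged"] / g["total"]            # demographic parity compares these
    precision = g["confirmed_fraud"] / g["flagged"]  # predictive parity compares these
    false_positives = g["flagged"] - g["confirmed_fraud"]
    # Approximate false positive rate: flagged-but-cleared applicants over
    # all non-fraudulent applicants (assumes undetected fraud among the
    # unflagged is negligible, a simplification).
    fpr = false_positives / (g["total"] - g["confirmed_fraud"])
    print(f"{name}: flag rate {flag_rate:.1%}, precision {precision:.1%}, FPR {fpr:.1%}")
```

Under demographic parity the flag rates would match across groups; under predictive parity the precision would. In this invented example, group_a is flagged three times as often yet with half the precision – the pattern the investigation reported for marginalised groups.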

They noted that while the findings confirmed the Swedish system is disproportionately targeting already marginalised groups in Swedish society, Försäkringskassan has not been fully transparent about the inner workings of the system, having rejected a number of freedom of information (FOI) requests submitted by the investigators.

They added that when they presented their analysis to Anders Viseth, head of analytics at Försäkringskassan, he did not dispute it, arguing instead that it identified no problem.

“The selections we make, we do not consider them to be a disadvantage,” he said. “We look at individual cases and assess them based on the likelihood of error and those who are selected receive a fair trial. These models have proven to be among the most accurate we have. And we have to use our resources in a cost-effective way. At the same time, we do not discriminate against anyone, but we follow the discrimination law.”

Computer Weekly contacted Försäkringskassan about the investigation and Amnesty’s subsequent call for the system to be discontinued.

“Försäkringskassan bears a significant responsibility to prevent criminal activities targeting the Swedish social security system,” said a spokesperson for the agency. “This machine learning-based system is one of several tools used to safeguard Swedish taxpayers’ money.

“Importantly, the system operates in full compliance with Swedish law. It is worth noting that the system does not flag individuals but rather specific applications. Furthermore, being flagged does not automatically lead to an investigation. And if an applicant is entitled to benefits, they will receive them regardless of whether their application was flagged. We understand the interest in transparency; however, revealing the specifics of how the system operates could enable individuals to bypass detection. This position has been upheld by the Administrative Court of Appeal (Stockholms Kammarrätt, case no. 7804-23).”

Nolan said if use of the system continues, then Sweden may be sleepwalking into a scandal similar to the one in the Netherlands, where tax authorities used algorithms to falsely accuse tens of thousands of parents and caregivers from mostly low-income families of fraud, which also disproportionately harmed people from ethnic minority backgrounds.

“Given the opaque response from the Swedish authorities, not allowing us to understand the inner workings of the system, and the vague framing of the social scoring ban under the AI Act, it is difficult to determine where this specific system would fall under the AI Act’s risk-based classification of AI systems,” he said. “However, there is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.” 

Under the AI Act – which came into force on 1 August 2024 – the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to assess human rights risks and guarantee mitigation measures are in place before use. Systems considered tools for social scoring are prohibited outright.

Sweden’s ISF had previously found in 2018 that the algorithm used by Försäkringskassan “in its current design does not meet equal treatment”, although the agency pushed back at the time, arguing the analysis was flawed and based on dubious grounds.

A data protection officer who previously worked for Försäkringskassan also warned in 2020 that the system’s operation violated the EU’s General Data Protection Regulation (GDPR), because the authority had no legal basis for profiling people.

On 13 November, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.

Source: ComputerWeekly.com, 29 November 2024