Amsterdam Abandons AI Welfare Fraud Detection After Bias Issues

Amsterdam has scrapped its ‘Smart Check’ AI pilot program for detecting welfare fraud, conceding that the system failed to achieve fairness and in some cases amplified existing biases. The project was intended to improve efficiency and remove human prejudice from the city’s social benefits system, but it ultimately could not deliver equitable outcomes. Despite extensive consultations, bias testing, and other safeguards, the model repeatedly produced discriminatory results.

The system’s biases also shifted during development: it initially flagged migrants and men disproportionately, then, after the training data was reweighted, disproportionately flagged Dutch nationals and women. City officials concluded that the AI could not guarantee fair evaluations of welfare applicants and terminated the program.

The outcome raises serious questions about the feasibility of using algorithms for high-stakes decisions affecting vulnerable populations. While the city is reviewing the lessons learned, it currently has no plans to use AI in future welfare application assessments. The investigation behind these findings was a collaborative effort by MIT Technology Review, Lighthouse Reports, and Trouw.