Amsterdam's Smart Check and the elusive quest for algorithmic fairness

Amsterdam's Smart Check pilot exposes the challenges of building fair, unbiased artificial intelligence for welfare systems, and why community input matters more than ever.

Amsterdam recently piloted an ambitious project called Smart Check, aiming to introduce a predictive algorithm that could detect welfare fraud effectively, fairly, and without bias. The municipality carefully followed best practices for responsible algorithm deployment: inviting stakeholder feedback, stress-testing for bias, and consulting external experts. Despite these efforts, the city fell short of its aspirations, exposing the persistent and complex nature of ensuring fairness in automated decision-making, especially within social welfare contexts.

The stakes of deploying AI in programs that shape human lives are heightened by diverging governmental attitudes toward ethics and regulation. In stark contrast to Amsterdam's approach, the United States is currently pulling back from national AI oversight and accountability, as evidenced by the rescinding of executive orders and fresh legislative attempts to limit local regulation. This shifts the focus onto the social and philosophical consequences of algorithmic systems, which technical safeguards and engineering alone cannot fully anticipate or resolve.

Feedback from advocates for welfare recipients in Amsterdam, such as the Participation Council and the Welfare Union, highlighted deep mistrust and practical concerns. These groups rejected Smart Check outright, fearing discrimination and unjust scrutiny, particularly given how rare welfare fraud actually is. Although city officials tweaked the system's parameters in response to concerns, for instance excluding age to avoid discriminatory outcomes, they ignored the core call to halt the project altogether. This tension underscores the 'wicked problem' of building technology for inherently political and moral questions without genuine consensus or public mandate.
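The kind of stress-testing and parameter tweaking described above can be made concrete with a small sketch. The snippet below is purely hypothetical: the features, data, and thresholds are invented for illustration and bear no relation to Smart Check's actual model. It trains a simple fraud classifier twice, once with age as an input and once without, and compares how often each age bracket would be flagged for investigation.

```python
# Hypothetical sketch of a bias stress-test for a fraud-flagging model.
# All feature names, data, and thresholds are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant records (hypothetical features).
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "household_size": rng.integers(1, 6, n),
    "benefit_months": rng.integers(1, 120, n),
})
# Fraud is rare; simulate a ~2% base rate, uncorrelated with any feature.
df["fraud"] = rng.random(n) < 0.02

def flag_rates_by_age(features: list[str]) -> pd.Series:
    """Fit a classifier on the given features and return the share of
    applicants flagged for investigation, split by age bracket."""
    model = LogisticRegression(max_iter=1_000)
    model.fit(df[features], df["fraud"])
    # Flag applicants whose predicted fraud probability is in the top 5%.
    scores = model.predict_proba(df[features])[:, 1]
    flagged = scores >= np.quantile(scores, 0.95)
    brackets = pd.cut(df["age"], bins=[17, 30, 50, 80],
                      labels=["18-30", "31-50", "51-80"])
    return pd.Series(flagged).groupby(brackets, observed=True).mean()

# Stress-test: does including age skew who gets flagged?
print("With age:\n", flag_rates_by_age(["age", "household_size", "benefit_months"]))
print("Without age:\n", flag_rates_by_age(["household_size", "benefit_months"]))
```

As a general caveat, dropping a protected attribute does not by itself guarantee fair outcomes: other features can act as proxies for it, which is one reason technical fixes of this kind rarely settle the underlying dispute.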

Experts, including contributors to high-level AI ethics efforts in the United States, argue that genuine engagement with affected communities must happen early and meaningfully, not as a box-checking exercise but as a way to reframe what the technology is supposed to accomplish in the first place. Suggestions such as designing algorithms to proactively help eligible individuals access social benefits, rather than to root out rare abuses, flip the narrative entirely. Ultimately, even robust intentions and responsible design can produce flawed systems if the foundational societal questions are ignored. The Amsterdam experiment stands as a humbling reminder that fairness in AI is not merely a technical challenge but a layered, evolving social negotiation.
