Amsterdam's Smart Check and the elusive quest for algorithmic fairness

Amsterdam's Smart Check pilot exposes the challenges of building fair, unbiased Artificial Intelligence for welfare systems, and why community input matters more than ever.

Amsterdam recently piloted an ambitious project called Smart Check, aiming to introduce a predictive algorithm that could detect welfare fraud in an effective, fair, and unbiased way. The municipality carefully followed best practices for responsible algorithm deployment, inviting stakeholder feedback, stress-testing for bias, and consulting external experts. Despite these efforts, the city fell short of its aspirations, exposing the persistent and complex nature of ensuring fairness in automated decision-making, especially within social welfare contexts.

The stakes in deploying Artificial Intelligence for programs impacting human lives become even more pronounced amid diverging governmental attitudes toward ethics and regulation. In stark contrast to Amsterdam's approach, the United States is currently pulling back from national oversight and accountability for Artificial Intelligence, evidenced by the rescinding of executive orders and fresh legislative attempts to limit local regulation. This shifts the focus onto the social and philosophical consequences of algorithmic systems, which mere technical safeguards and engineering cannot fully anticipate or resolve.

Feedback from welfare recipients' advocates in Amsterdam, such as the Participation Council and the Welfare Union, highlighted deep mistrust and practical concerns. These groups rejected Smart Check, fearing discrimination and unjust scrutiny, particularly given how rare welfare fraud actually is. Although city officials tweaked the system's parameters in response to these concerns, for example by excluding age to avoid discriminatory outcomes, they did not heed calls to halt the project altogether. This tension underscores the 'wicked problem' of building technology for inherently political and moral questions without genuine consensus or public mandate.
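For readers unfamiliar with what "stress-testing for bias" can mean in practice, the sketch below shows one common check: comparing how often a model flags applicants in different demographic groups. It is an illustrative assumption, not a description of Amsterdam's actual method; the group labels, example records, and the 0.8 parity threshold are invented for the example.

```python
# Illustrative bias stress-test (hypothetical; not the Smart Check implementation).
# Compares the share of applicants flagged for investigation across groups.
from collections import defaultdict

def flag_rate_by_group(records, group_key="group"):
    """Return the fraction of flagged applicants within each demographic group."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        flagged[record[group_key]] += int(record["flagged"])
    return {group: flagged[group] / totals[group] for group in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group flag rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Invented example data: each record carries a group label and the model's decision.
records = [
    {"group": "under_30", "flagged": 1},
    {"group": "under_30", "flagged": 0},
    {"group": "30_and_over", "flagged": 0},
    {"group": "30_and_over", "flagged": 1},
]

rates = flag_rate_by_group(records)
print(rates)                   # e.g. {'under_30': 0.5, '30_and_over': 0.5}
print(disparity_ratio(rates))  # values well below a threshold such as 0.8 suggest disparate impact
```

A fuller audit would also compare error rates, such as false-positive rates on labeled outcomes, since being wrongly singled out for investigation is exactly the unjust scrutiny these groups feared.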

Experts, including some who have contributed to high-level Artificial Intelligence ethics work in the United States, argue that genuine engagement with affected communities must occur early and meaningfully, not as a box-checking exercise but as a way to reframe what the technology is supposed to accomplish in the first place. Suggestions such as designing algorithms that proactively help eligible individuals access social benefits, rather than rooting out rare abuses, flip the narrative entirely. Ultimately, even good intentions and responsible design can produce flawed systems if the foundational societal questions are ignored. The Amsterdam experiment stands as a humbling reminder that fairness in Artificial Intelligence is not merely a technical challenge but a layered, evolving social negotiation.

Impact Score: 73

AMD teases Ryzen AI PRO 400 desktop APU for AM5

AMD has quietly revealed its Ryzen AI PRO 400 desktop APU design during a Lenovo Tech World presentation, signaling a shift away from legacy desktop APU branding. The socketed AM5 part is built on 4 nm ‘Gorgon Point’ silicon and targets next-generation Artificial Intelligence-enhanced desktops.

Inside the new biology of large language models

Researchers at OpenAI, Anthropic, and Google DeepMind are dissecting large language models with techniques borrowed from biology and neuroscience to understand their strange inner workings and risks. Their early findings reveal city-size systems with fragmented “personalities,” emergent misbehavior, and new ways to monitor and constrain what these models do.

Why meaningful technology still matters

A decade of mundane apps and business model tweaks fueled skepticism about the tech industry, but quieter advances in fields like quantum computing and gene editing suggest technology can still tackle profound global problems.
