Amsterdam's Smart Check and the elusive quest for algorithmic fairness

Amsterdam's Smart Check pilot exposes the challenges of building fair, unbiased Artificial Intelligence for welfare systems—and why community input matters more than ever.

Amsterdam recently piloted an ambitious project called Smart Check, aiming to introduce a predictive algorithm that could detect welfare fraud in an effective, fair, and unbiased way. The municipality carefully followed best practices for responsible algorithm deployment: inviting stakeholder feedback, stress-testing for bias, and consulting external experts. Despite these efforts, the city fell short of its aspirations, exposing the persistent and complex nature of ensuring fairness in automated decision-making, especially within social welfare contexts.

The stakes of deploying Artificial Intelligence in programs that affect human lives become even more pronounced amid diverging governmental attitudes toward ethics and regulation. In stark contrast to Amsterdam's approach, the United States is currently pulling back from national oversight and accountability for Artificial Intelligence, as evidenced by the rescinding of executive orders and fresh legislative attempts to limit local regulation. This shifts the focus onto the social and philosophical consequences of algorithmic systems, which technical safeguards and engineering alone cannot fully anticipate or resolve.

Feedback from welfare recipients' advocates in Amsterdam, such as the Participation Council and the Welfare Union, highlighted deep mistrust and practical concerns. These groups rejected Smart Check, fearing discrimination and unjust scrutiny, particularly given how rare welfare fraud actually is. Although city officials tweaked the system's parameters in response to concerns—such as excluding age to avoid discriminatory outcomes—they ignored core calls to halt the project altogether. This tension underscores the 'wicked problem' of building technology for inherently political and moral questions without genuine consensus or public mandate.

Experts, including those who contributed to high-level Artificial Intelligence ethics in the United States, argue that genuine engagement with affected communities must occur early and meaningfully—not as a box-checking exercise, but as a way to reframe what technology is even supposed to accomplish. Suggestions such as designing algorithms to proactively help eligible individuals access social benefits rather than root out rare abuses flip the narrative entirely. Ultimately, even robust intentions and responsible design can result in flawed systems if the foundational societal questions are ignored. The Amsterdam experiment stands as a humbling reminder that fairness in Artificial Intelligence is not merely a technical challenge but a layered, evolving social negotiation.

Impact Score: 73

UK delays Artificial Intelligence copyright reform

The UK government has postponed immediate copyright reform for Artificial Intelligence, leaving developers, creatives, and rightsholders to operate under existing law. Licensing, transparency, digital replicas, and future litigation are now set to shape the next phase of policy.

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.

OpenAI expands cyber model access through trusted program

OpenAI has introduced GPT-5.4-Cyber as a restricted model for cybersecurity professionals, widening access through its Trusted Access for Cyber program. The release highlights both the defensive value and misuse risks of more capable Artificial Intelligence tools in security work.

Chinese tech firms and Fei-Fei Li push world models forward

Chinese tech companies and Fei-Fei Li's World Labs are accelerating work on world models, a field focused on helping Artificial Intelligence learn from and interact with physical reality. Alibaba's new Happy Oyster system targets real-time virtual world creation with more continuous user control.
