Sunday links: high-risk artificial intelligence, small language models and artificial intelligence certifications

A weekly round-up covering concerns about high-risk Artificial Intelligence, the growing interest in small language models, and new certification and jobs initiatives from major companies.

This weekly round-up highlights developments and debates across the Artificial Intelligence landscape. A new safety assessment labeled Google Gemini as high risk for kids and teens, and the piece notes that OpenAI is facing lawsuits alleging user deaths. The author argues that the controls and guardrails currently applied to large language models after training remain weak, and that embedding stronger ethical constraints earlier in model development is a hypothesis under discussion that still requires much more work.

The newsletter also covers technical strategy. NVIDIA published an argument that small language models are key to scalable agentic systems, a position the author largely agrees with. The writer suggests that small, task-specific models may be more effective than one-size-fits-all large models and speculates that broader availability of compact models could expand the market for hardware and services. Separately, Exa announced a Series B round to build a search engine for Artificial Intelligence agents; the funding amount is not stated in the article. Exa focuses on semantic indexing to serve LLM agents more effectively, and the author reports using Exa to accelerate research and analysis.

On legal and business fronts, Anthropic agreed to a settlement with authors for an amount the article does not state. The settlement relates to the use of material the company did not own, and it does not require Anthropic to untrain or delete its trained models. The item notes that courts have not prohibited training on copyrighted books in the United States, and that Anthropic recently raised capital in an amount that is also not stated. OpenAI also announced an Artificial Intelligence jobs platform and an Artificial Intelligence certifications program, neither of which is live yet. The author is skeptical about the value of broad certifications given rapid technological change and the diversity of tools and workflows.

Finally, the newsletter reviews the US Google antitrust ruling by Judge Amit Mehta. The judge found that Google abused its monopoly by paying for default placement, but imposed remedies that many consider light. Google may continue to pay for placement, though not on an exclusive basis, and the company must share its search index at least twice. The author suggests potential beneficiaries could include OpenAI and Meta. Wishing you a great weekend.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a rowhammer attack against GDDR6-based NVIDIA GPUs that can bypass IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.
