Sunday links: high risk artificial intelligence, small language models and artificial intelligence certifications

A weekly round-up covering concerns about high-risk Artificial Intelligence, the growing interest in small language models, and new certification and jobs initiatives from major companies.

This weekly round-up highlights developments and debates across the Artificial Intelligence landscape. A new safety assessment labeled Google Gemini as high risk for kids and teens, and the piece notes that OpenAI is facing lawsuits alleging user deaths. The author argues that the controls and guardrails currently applied after training large language models remain weak, and that embedding stronger ethical constraints earlier in model development, while a hypothesis under discussion, still requires much more work.

The newsletter also covers technical strategy. NVIDIA published an argument that small language models are key to scalable agentic systems, a position the author largely agrees with. The writer suggests small, task-specific models may be more effective than one-size-fits-all large models and speculates that broader availability of compact models could expand the market for hardware and services. Separately, Exa announced a Series B round, of an undisclosed amount, to build a search engine for Artificial Intelligence agents. Exa focuses on semantic indexing to serve LLM agents more effectively, and the author reports using Exa to accelerate research and analysis.

On legal and business fronts, Anthropic agreed to a settlement with authors for an undisclosed amount. The settlement relates to the company's use of material it did not own and does not require Anthropic to untrain or delete its trained models. The item notes that courts have not prohibited training on copyrighted books in the United States and that Anthropic recently raised an undisclosed amount of capital. OpenAI also announced an Artificial Intelligence jobs platform and an Artificial Intelligence certifications program, neither of which is live yet. The author is skeptical of the value of broad certifications given rapid technological change and the diversity of tools and workflows.

Finally, the newsletter reviews the US Google antitrust ruling by Judge Amit Mehta. The judge found Google abused its monopoly by paying for default placement but imposed remedies that many consider light: Google may continue to pay for placement, though not exclusively, and must share its search index at least twice. The author suggests potential beneficiaries could include OpenAI and Meta. Wishing you a great weekend.

Why basic science deserves our boldest investment

The transistor story shows how curiosity-driven basic science, supported by long-term funding, enabled the information age and today's Artificial Intelligence technologies. Current federal and university funding pressures risk undermining the next wave of breakthroughs.

35 innovators under 35 for 2025

MIT Technology Review presents its 35 innovators under 35 for 2025, profiling young scientists, inventors, and entrepreneurs tackling climate change, disease, and core scientific challenges.

Our favorite Artificial Intelligence tools

A concise list of go-to Artificial Intelligence tools for video production, covering editing, audio cleanup, voiceovers, and short-form clips to save time and streamline workflows.

Top Artificial Intelligence tools for content creators

This article surveys the top Artificial Intelligence tools for content creators, comparing specialist apps and unified platforms. It highlights Zemith as an all-in-one hub alongside tools for design, audio, video, and voice.
