Microsoft uses Artificial Intelligence to expose biosecurity gap; Apple pulls ICEBlock app

Microsoft researchers say Artificial Intelligence exposed a previously unknown way to bypass DNA screening safeguards, while Apple removed an app that crowdsourced ICE sightings after a request from the US attorney general.

Today’s edition highlights a Microsoft research claim that Artificial Intelligence can be used to uncover a “zero day” weakness in biosecurity systems. Those safeguards, designed to screen DNA sequence orders and block potentially dangerous genetic material linked to toxins or pathogens, can reportedly be sidestepped through a method the team says defenders had not previously recognized. The finding underscores growing concerns that generative tools can accelerate both beneficial and harmful biological design, and can probe and exploit gaps in oversight mechanisms.

Another focal story centers on Apple removing ICEBlock, an app for reporting sightings of US Immigration and Customs Enforcement officers, from the App Store after a request by the US attorney general. Apple attributed the decision to safety risks. Observers noted a parallel to the company’s 2019 removal of a Hong Kong mapping app over public safety concerns. The app’s developer criticized the takedown, saying, “Capitulating to an authoritarian regime is never the right move.”

Elsewhere, scrutiny of Artificial Intelligence products continues. OpenAI’s parental controls were reportedly easy to circumvent, and automated alerts about teens’ risky conversations took hours to arrive. Venture investors, meanwhile, have poured a record amount into Artificial Intelligence startups this year, even as warnings of a fragile market bubble grow louder. Policy and public health developments also feature: the US federal vaccination schedule is still awaiting sign-off for updated Covid shots, leaving many people unable to get them.

Energy, platforms, and geopolitics rounded out the news cycle. The US Department of Energy canceled additional clean energy projects, cutting hundreds of previously announced awards. TikTok recommended pornography to children’s accounts despite restricted mode. China launched a new skilled worker visa in the wake of tightened US H-1B rules, drawing local backlash. Flights were grounded in Germany after multiple drone sightings amid NATO concerns about suspected Russian activity. In media, YouTube’s creator economy is increasingly challenging Hollywood’s status quo, and anti-robocall tools continue to improve, with call screening emerging as an effective first layer of defense.

One more feature interrogates creativity in the age of generative systems. While today’s tools can rapidly automate many artistic tasks, critics worry they promote passive consumption of low-quality machine-generated output. Researchers and artists are exploring co-creativity approaches that keep humans in the loop, aiming to build Artificial Intelligence systems that amplify human originality rather than replace it.

Impact Score: 68

OpenAI faces criticism over scattershot strategy and mounting costs

A critical essay argues OpenAI is drifting from a coherent plan, leaning on leaks about new products while subsisting on ChatGPT subscriptions and spending heavily. It portrays the company as a conventional Artificial Intelligence startup wrestling with losses, a weak API business, and underwhelming upgrades.
