California delays its Artificial Intelligence Transparency Act and passes new content laws

California enacted AB 853, pushing the Artificial Intelligence Transparency Act’s start date to August 2, 2026, and adding new disclosure and detection duties for generative Artificial Intelligence providers, large online platforms, and capture device makers. Platforms must check distributed content for standards-compliant source data and show latent disclosures beginning in 2027, and capture devices must offer similar disclosure options beginning in 2028.

On October 13, the governor of California signed AB 853, amending the compliance date of the California Artificial Intelligence Transparency Act and adding new obligations for large online platforms. The amendment moves the act’s effective date from January 1, 2026, to August 2, 2026. It also requires covered providers that create, code, or produce a generative Artificial Intelligence system to make available a free Artificial Intelligence detection tool that helps users determine whether media content was created or altered by the provider’s generative Artificial Intelligence system and that outputs any system source data detected in the content.

Beginning January 1, 2027, large online platforms face additional responsibilities. The law defines these platforms as public-facing social media, file-sharing, mass messaging platforms, or stand-alone search engines that distribute content to users who did not create it and that exceeded 2 million unique monthly users during the preceding 12 months. These platforms must detect whether source data, compliant with widely adopted standards, is embedded in or attached to the content they distribute. They must then provide users with a latent disclosure that indicates whether the content was generated or altered by a generative Artificial Intelligence system.

Starting January 1, 2028, capture device manufacturers must offer users the option to include a latent disclosure in content captured by devices first produced for sale in California on or after that date. The required disclosure must communicate the manufacturer’s name, the device name and version, and the time and date when the content was created or altered. Together, the delayed timeline and new requirements establish a phased approach that begins with generation-layer detection tools from providers, extends to platform-level source data checks and user disclosures, and reaches device-level options for embedding standardized disclosures in newly produced hardware.

Nvidia denies report on Groq chip plans for China

Nvidia says a report that it is preparing Groq inferencing chips for shipment to China is “totally false,” even as interest in H200 sales to the country remains strong. The dispute highlights how closely watched Nvidia’s China strategy has become across training and inferencing hardware.

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.

Governance risk highlights from Infosecurity Magazine

Governance and risk coverage centers on regulation, compliance, cybersecurity policy, and the growing role of Artificial Intelligence in enterprise security. Recent headlines point to pressure on critical infrastructure, standards updates, insider threat guidance, and concerns over guardrails for large language models.

Vals publishes public enterprise language model benchmarks

Vals lists a broad set of public enterprise benchmarks spanning law, finance, healthcare, math, education, academics, coding, and beta agent tasks. The index highlights which models currently lead specific enterprise-focused evaluations and how widely each benchmark has been tested.

MIT method spots overconfident Artificial Intelligence models

MIT researchers developed a way to detect when large language models are confidently wrong by comparing their answers with outputs from similar models. The combined uncertainty measure outperformed standard techniques across a range of tasks and may help reduce unreliable responses.
