March 2026 roundup on privacy, security and artificial intelligence regulation

State, federal, and international regulators are accelerating privacy, child safety, and artificial intelligence oversight, with new enforcement actions, lawsuits, and delayed European Union implementation timelines reshaping compliance expectations.

U.S. states continued to expand privacy and Artificial Intelligence regulation in early 2026, led by California’s proposed whistleblower protections under AB 2021, which would create an award program and anti-retaliation provisions for individuals reporting California Consumer Privacy Act violations. A separate California bill, Senate Bill 923, would significantly broaden deletion rights by requiring businesses to erase data obtained from brokers and other third parties while tightening accessibility requirements for consumer privacy requests. California Senate Bill 574, which passed the Senate unanimously, would impose explicit Artificial Intelligence guardrails on lawyers and arbitrators, including a duty to prevent confidential data from entering public Artificial Intelligence systems, mandatory verification of content generated by Artificial Intelligence, and a ban on arbitrators delegating decision-making to Artificial Intelligence. Other states moved on youth and comprehensive privacy: Connecticut’s attorney general detailed 2025 enforcement under the Connecticut Data Privacy Act, Maine advanced an online data privacy act modeled on Maryland’s law, Virginia prepared to enforce social media time limits for users under 16 beginning January 1, 2026, and Oklahoma’s Senate Bill 546 moved toward enactment as a Virginia-style comprehensive privacy statute effective January 1, 2027.

At the federal level, regulators focused on children’s privacy, Artificial Intelligence safety, and cross-border data risks. The Federal Trade Commission issued a policy statement that it will not bring Children’s Online Privacy Protection Rule enforcement actions against general-audience and mixed-audience services that use children’s personal information solely for age verification, provided strict limitations on use, retention, disclosure, and security are met, and signaled a formal review of the rule. The National Institute of Standards and Technology launched an Artificial Intelligence Agent Standards Initiative to promote secure, interoperable autonomous agents, organized around industry-led standards, open-source protocols, and research into security and identity. Forty state attorneys general urged Congress to pass the Senate version of the Kids Online Safety Act, criticizing the House bill’s broader preemption language and lack of a “duty of care” requirement. In the courts, the Fifth Circuit held in Bradford v. Sovereign Pest Control of TX, Inc. that oral consent satisfies “prior express consent” for prerecorded telemarketing calls under the Telephone Consumer Protection Act, a ruling that departs from long-standing Federal Communications Commission guidance. New litigation targeted Artificial Intelligence hiring tools, as plaintiffs accused Eightfold AI of operating a consumer reporting agency without Fair Credit Reporting Act compliance by aggregating data from more than 1.5 billion global data points to score candidates.

Enforcement actions highlighted the growing risk exposure for consumer platforms and data-intensive businesses. The California attorney general reached a settlement with the Walt Disney Company after finding systemic failures to honor California Consumer Privacy Act opt-outs across devices, services, and embedded third-party ad tech, and emphasized that businesses cannot force consumers to opt out device by device or service by service. The California Privacy Protection Agency secured a $1.1 million settlement with youth sports media firm 2080 Media, Inc. over tracking technologies, inadequate opt-out mechanisms, and outdated privacy notices affecting approximately 1,400 California schools. National security concerns surfaced in a proposed class action alleging Lenovo (United States) Inc. routed data from approximately 55 tracking technologies on its website to a Chinese parent in violation of the Department of Justice’s Bulk Data Transfer Rule, while Florida’s attorney general formed the CHINA Prevention Unit to scrutinize entities with foreign adversary ties. Additional actions included Federal Trade Commission warning letters to 13 data brokers under the Protecting Americans’ Data from Foreign Adversaries Act of 2024, child safety lawsuits against Snap and Roblox, and a Texas settlement requiring Samsung to obtain express consent before using automatic content recognition on smart TVs.

Internationally, implementation of the European Union Artificial Intelligence Act faltered as the European Commission missed the February 2, 2026 deadline to publish guidelines clarifying which systems qualify as high risk, compounding earlier delays in technical standards and codes of practice and prompting a Digital Omnibus proposal to postpone high-risk obligations beyond August 2026. The European Data Protection Board and European Data Protection Supervisor jointly warned that proposed changes to the definition of “personal data” in the same package would narrow protections and increase legal uncertainty, even as they supported measures such as extending data breach notification deadlines from 72 to 96 hours and enabling machine-readable consent signals to mitigate cookie fatigue. The Court of Justice of the European Union clarified that data gathered through direct observation, such as body-worn cameras, is treated as “collected from the data subject” for purposes of Article 13 transparency obligations, confirming that layered notices such as signage plus website details can satisfy disclosure duties. The United States also strengthened trade-based data flow commitments, announcing reciprocal trade agreements with Argentina and Bangladesh that facilitate adequacy findings and commit the parties to “permit the free transfer of data across trusted borders.”

Impact Score: 62

How generative artificial intelligence is reshaping talent pipelines

Generative artificial intelligence is expected to erase millions of entry-level roles while opening access to higher-skilled jobs, forcing employers to redesign career paths, training, and hiring. Leaders will need stronger social skills as technical work is increasingly mediated by artificial intelligence tools.

Nvidia and Meta plan millions of additional artificial intelligence GPUs

Nvidia and Meta are reportedly planning to expand their use of graphics processors for artificial intelligence workloads by ordering millions of additional chips, a shift that could reshape the traditional server CPU market. The move highlights growing competition for Intel and AMD in data center infrastructure as demand for accelerated computing surges.

Leap 71’s Noyron model targets automated engineering design

Leap 71’s Noyron system serves as a foundational computational model that encodes expert engineering knowledge, physics, and manufacturing rules to automatically generate and evaluate designs. It underpins specialized models for rockets, electromagnetic systems, and heat exchangers while continuously improving through feedback from real-world use.
