California’s automated decisionmaking technology regulations establish a new framework governing how businesses deploy tools powered by Artificial Intelligence to make, or assist in making, consequential decisions about individuals. The rules regulate systems that can significantly affect people in areas such as employment, housing, credit, education, insurance, and access to essential services. By focusing on Automated Decisionmaking Technology, the regulations seek to increase transparency, accountability, and oversight around the design, deployment, and impact of these systems.
The regulations generally apply to for-profit businesses doing business in California that meet defined thresholds and rely on Artificial Intelligence-driven tools for automated or semi-automated decision processes. Covered entities must first determine whether their technologies fall within the definition of Automated Decisionmaking Technology and then assess whether their use cases trigger the regulatory obligations. The criteria look at both the scale of a business’s operations and the nature of the decisions its tools support, emphasizing situations where automated outputs materially influence outcomes for individuals.
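To make that screening step concrete, the sketch below shows one way a compliance team might record an initial applicability checklist in code. It is a minimal illustration under assumed field names and decision categories drawn from the list above; it is not the regulatory test itself, and any real determination would need to follow the definitions in the rule text.

```python
from dataclasses import dataclass

# Decision areas described above as "consequential"; the exact regulatory
# categories should be confirmed against the final rule text.
CONSEQUENTIAL_AREAS = {
    "employment", "housing", "credit", "education",
    "insurance", "essential_services",
}


@dataclass
class UseCaseScreening:
    """One automated or semi-automated decision process under review.

    Field names are illustrative; they mirror the screening questions
    described above rather than any official form.
    """
    name: str
    meets_business_thresholds: bool      # e.g. revenue or data-volume tests
    uses_admt: bool                      # tool meets the ADMT definition
    decision_area: str                   # e.g. "employment", "credit"
    materially_influences_outcome: bool  # output drives or heavily shapes the decision


def likely_in_scope(case: UseCaseScreening) -> bool:
    """First-pass screen: flag use cases that warrant full legal review."""
    return (
        case.meets_business_thresholds
        and case.uses_admt
        and case.decision_area in CONSEQUENTIAL_AREAS
        and case.materially_influences_outcome
    )


if __name__ == "__main__":
    resume_screener = UseCaseScreening(
        name="resume ranking model",
        meets_business_thresholds=True,
        uses_admt=True,
        decision_area="employment",
        materially_influences_outcome=True,
    )
    status = "review further" if likely_in_scope(resume_screener) else "likely out of scope"
    print(f"{resume_screener.name}: {status}")
```

Recording the screen this way keeps the determination auditable: each use case carries its own documented answers, which can later feed an inventory of automated decision flows or an impact assessment.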
Businesses that fall within the scope of the automated decisionmaking regulations face new compliance obligations that can include conducting impact assessments, implementing risk management and governance protocols, and providing disclosures or notices to affected individuals. Companies may also be required to evaluate data inputs, monitor model performance, and document safeguards designed to reduce discriminatory or harmful outcomes. Organizations relying on Artificial Intelligence-driven tools in California are expected to review their current practices, map their automated decision flows, and prepare governance documentation to demonstrate compliance as enforcement and regulatory expectations evolve.
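As one illustration of the monitoring step, the sketch below computes selection rates by group and flags large disparities using the widely cited four-fifths heuristic. The metric, the 0.8 threshold, and the group labels are illustrative assumptions; the regulations do not prescribe a particular fairness test, so any real monitoring program should be designed with counsel and documented in the governance materials described above.

```python
from collections import defaultdict

FOUR_FIFTHS = 0.8  # common adverse-impact heuristic; used here only for illustration


def selection_rates(outcomes):
    """Compute the share of positive outcomes per group.

    outcomes: iterable of (group_label, selected: bool) pairs.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}


def disparity_flags(outcomes, threshold=FOUR_FIFTHS):
    """Flag groups whose selection rate falls below threshold * the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}


if __name__ == "__main__":
    sample = (
        [("A", True)] * 50 + [("A", False)] * 50 +   # group A selected 50% of the time
        [("B", True)] * 30 + [("B", False)] * 70     # group B selected 30% of the time
    )
    print(selection_rates(sample))   # {'A': 0.5, 'B': 0.3}
    print(disparity_flags(sample))   # {'A': False, 'B': True}
```

Running a check like this on a recurring schedule, and retaining the outputs alongside the impact assessment, is one way to show ongoing oversight of model performance rather than a one-time review.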
