How Artificial Intelligence is reshaping financial services oversight

Financial services regulators are largely treating Artificial Intelligence as another technology governed by existing rules rather than building new securities-specific frameworks. History suggests that clearer expectations will emerge through examinations, enforcement, and supervisory guidance.

Artificial Intelligence tools, including generative Artificial Intelligence, machine learning, and large language models, are becoming more common across financial services, creating new compliance and operational questions for firms that buy, deploy, or oversee these systems. Securities regulators in the US have not yet built a dedicated rulebook for most Artificial Intelligence uses, and instead continue to emphasize that existing regulations are technology neutral and still apply. That posture mirrors the way regulators previously handled electronic trading, cloud computing, robo-advisers, alternative trading platforms, and electronic communications with customers.

In 2023, the SEC proposed a new rule addressing Artificial Intelligence-induced conflicts of interest. The proposal has since been withdrawn after drawing mixed reviews from several SEC commissioners and industry commenters. Even without new securities-specific rules, regulators are signaling that supervision of Artificial Intelligence is intensifying. In March 2025, in SEC v. Rimar Capital USA, the SEC alleged that the respondents raised funds through false promises about the firm’s use of Artificial Intelligence for automated trading. In 2024, one FINRA action specifically cited Artificial Intelligence, involving a broker-dealer’s implementation of a flawed machine learning program designed to assist in its compliance with anti-money laundering (AML) requirements. In its 2026 annual report, FINRA emphasized a focus on Artificial Intelligence testing and monitoring.

State governments are moving faster than securities regulators on targeted lawmaking. California, Texas, and Colorado have passed comprehensive Artificial Intelligence legislation, while other states have proposed or adopted narrower measures focused on consumer privacy, deceptive media, fair use of protected works, and disclosure requirements when consumers interact with Artificial Intelligence. Those rules are not tailored specifically to financial services, but they can still affect firms operating across multiple jurisdictions, particularly where consumer rights and data handling are involved.

Past regulatory experience suggests firms should expect practical standards to emerge through examinations and enforcement rather than through immediate new rulemaking. Regulators historically used that path when firms shifted from postal mail to email and other electronic communications. In the early 2000s, regulatory actions against multiple firms clarified expectations around supervision, recordkeeping, and preservation of electronic communications after firms used inconsistent and risky storage methods. The same pattern is likely to shape Artificial Intelligence oversight, especially in areas where marketing claims, supervisory gaps, data use, and human review create risk.

Firms are being pushed toward a more proactive compliance model. Recommended steps include defining what counts as Artificial Intelligence inside the organization, setting clear rules on where employees can and cannot use such tools, training staff on authorized use and escalation procedures, restricting access to disallowed systems, documenting ongoing risk assessments, and testing for unsupervised or unapproved activity. Oversight of third-party vendors, data access, confidentiality, and cross-border legal obligations is also becoming central. Over time, firms may face pressure not only to control Artificial Intelligence risk but also to show that failing to adopt effective Artificial Intelligence tools does not leave them less capable of meeting customer expectations or regulatory demands.
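The governance steps above can be sketched in code. The following is a minimal, hypothetical illustration (all names, departments, and tool labels are invented for this example, not drawn from any real firm or regulatory requirement): a per-department allow-list of approved Artificial Intelligence tools, with every invocation logged so that unapproved activity can be surfaced for escalation review.

```python
"""Hypothetical sketch of an internal AI-tool governance check.

Assumptions: the firm maintains an allow-list of approved AI tools per
department, and every attempted tool use is logged for later audit.
"""

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical allow-list: department -> approved tool names.
APPROVED_TOOLS = {
    "research": {"internal-llm", "summarizer-v2"},
    "trading": set(),  # no generative tools approved for trading desks
}


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, user: str, dept: str, tool: str, allowed: bool) -> None:
        # Timestamped entries support the "documenting ongoing risk
        # assessments" and "testing for unapproved activity" steps.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user, "dept": dept, "tool": tool, "allowed": allowed,
        })


def check_tool_use(user: str, dept: str, tool: str, log: AuditLog) -> bool:
    """Return True if the tool is on the department's allow-list."""
    allowed = tool in APPROVED_TOOLS.get(dept, set())
    log.record(user, dept, tool, allowed)
    return allowed


log = AuditLog()
check_tool_use("analyst1", "research", "internal-llm", log)   # approved
check_tool_use("trader1", "trading", "external-chatbot", log)  # blocked

# Unapproved attempts stay in the audit trail for escalation review.
flagged = [e for e in log.entries if not e["allowed"]]
```

In practice the allow-list, escalation workflow, and retention rules would live in the firm's compliance systems; the point of the sketch is only that approval decisions and their audit trail are produced by the same check, so documentation of unapproved activity is automatic rather than reconstructed after the fact.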

Impact Score: 52

Nvidia faces gamer backlash over Artificial Intelligence shift

Nvidia is facing growing frustration from gamers as memory supply is steered toward data center chips and DLSS 5 becomes more central to game performance. The dispute highlights how far the company’s priorities have shifted toward enterprise Artificial Intelligence.

Executives see limited Artificial Intelligence productivity gains so far

Corporate enthusiasm around Artificial Intelligence has yet to translate into broad gains in employment or productivity, reviving comparisons to the long lag between early computing breakthroughs and measurable economic impact. Recent surveys and studies show mixed results, with strong expectations for future benefits but little consensus on present gains.

Nvidia skips a new GeForce generation as Artificial Intelligence chips dominate

Nvidia is set to go a year without a new GeForce GPU generation for the first time since the 1990s as memory shortages and higher margins in Artificial Intelligence hardware reshape the market. AMD and Intel are also struggling to capitalize because the same supply constraints are hitting gaming products across the industry.

Where GPU debt starts to break

Stress in GPU-backed infrastructure financing is emerging around deals that lack the structural protections seen in the strongest transactions. Oracle, the Abilene Stargate project, and older CoreWeave debt illustrate different ways residual risk can surface when contracts, collateral, and counterparties fall short.

SK hynix starts mass production of 192 GB SOCAMM2

SK hynix has begun mass production of the 192 GB SOCAMM2, a next-generation memory module standard built on 1c nm LPDDR5X low-power DRAM. The module is positioned as a primary memory solution for next-generation Artificial Intelligence servers.
