Brokerages should monitor not only their own use of AI, but also how third-party service providers and scammers are employing it.
That’s according to the “2026 FINRA Annual Regulatory Oversight Report” issued Tuesday by the Financial Industry Regulatory Authority, the self-regulatory organization for the broker-dealer industry. The yearly report gives brokers insight into how recent compliance trends are likely to shape the regulator’s enforcement priorities in the coming year.
Although FINRA has discussed AI in previous reports, the 2026 edition is the first to devote a special section to risks stemming from artificial intelligence, machine learning and similar technologies. The report from FINRA, which oversees roughly 3,300 firms and 624,000 registered representatives, also covered perennial regulatory topics such as cybersecurity, cryptocurrency, money laundering and elder fraud.
This year’s findings and recommendations were published about a month earlier than usual. In a podcast FINRA released Tuesday to discuss the report, agency leaders said the early release came in response to many firms saying they rely heavily on the oversight report to set their regulatory priorities.
Ornella Bergeron, FINRA’s acting head of member supervision, said she and others at FINRA heard the industry “loud and clear.”
“They wanted the report out sooner so that … they can have the information in it and leverage it as part of their compliance planning for 2026,” she said, “especially since we have new areas in the report like GenAI.”
Novel uses for a new technology
Ever since the release of OpenAI’s ChatGPT chatbot in late 2022 showed that AI could produce reams of convincing text almost instantly, wealth managers have been scrambling to use the technology to improve advisor productivity. FINRA’s oversight report lists 14 reasons brokerages are now turning to AI and similar technologies.
They include taking notes of discussions with clients, detecting patterns in market research and other data, automating administrative procedures, drafting official documents and producing models predicting how financial markets are likely to perform.
Bryan Smith, FINRA’s acting head of strategic intelligence, said on the podcast that many firms seem to struggle with knowing who may have compliance responsibilities arising from AI.
Now that FINRA has identified 14 specific uses, “it really starts to put together a structure to think: Oh, here’s how I can think about this,” Smith said. “These are areas I should be concerned with, whereas other areas may be a different group within that firm.”
Among other risks, FINRA calls on brokers to be aware of two common pitfalls with AI: hallucinations (seemingly convincing but ultimately misleading answers to users’ questions) and biases (a tendency to favor certain responses over other — perhaps equally legitimate — ones). FINRA’s warnings also extend to AI “agents,” or systems set up to undertake certain tasks with no or very little human intervention. The regulator says firms must make sure any AI agents they’ve adopted don’t go beyond their intended scope and take actions they were never meant to.
Concerns over third-party service providers and scammers
But brokerages’ compliance concerns don’t end within their own four walls.
FINRA says firms also need to be aware of, and wary about, how any outside firms they’ve contracted with for certain services are using AI. In particular, they need to confirm that third parties are taking steps to ensure any private client information entered into AI systems is not shared with the general public.
FINRA also warns about the many ways scammers are now using AI to defraud investors. These include employing image-generating software to produce fake images or videos of real clients and automating attempts to defeat firms’ cybersecurity defenses.
In FINRA’s podcast, Head of Market Oversight Feral Talib said regulators are using AI to detect anomalies in trading activity that may indicate attempts to manipulate the markets. He said FINRA and scammers are now locked in a “digital arms race” to stay ahead of each other with innovations in AI and other cutting-edge tech.
“As bad actors become more sophisticated, control systems have to get more sophisticated and keep up with it as well,” Talib said.
Regulating AI with existing rules rather than adopting new ones
This isn’t the first time regulators have paid special attention to AI. In 2023, the Securities and Exchange Commission put forward a rule that would have made investment advisors responsible for eliminating or “neutralizing” any conflict of interest that may arise from their use of AI and similar technologies. But that rule was abandoned this year under the deregulatory push ushered in by President Donald Trump’s administration.
Rather than impose new requirements, FINRA explains to firms how existing rules apply to AI. Again and again, FINRA calls for the adoption of clear policies laying out allowable uses of AI, monitoring to make sure employees stay within those well-defined limits, and testing to verify the AI systems themselves aren’t producing bogus results or exceeding their remits.
Brian Robling, a consultant at the compliance firm SEC³, said regulators are right to avoid adopting new AI restrictions that could ultimately stifle innovation.
“The first phase is to put firms on notice they are accountable under existing rules,” he said. “The second is to find patterns of abuse and identify those. And as those patterns emerge, there may be a common thread where you can see there may be a need for new rules.”
New data privacy requirements under Regulation S-P
FINRA’s report meanwhile takes note of various new regulatory requirements for broker-dealers. The Securities and Exchange Commission, for instance, adopted a slew of changes in 2024 giving firms additional obligations for handling private client data under a privacy rule known as Regulation S-P.
For investment advisors, brokers and other affected firms, the biggest change is a requirement calling on them to alert clients to any data breach that threatens to expose their personal information. The revisions give firms 30 days to sound the alarm whenever a security lapse creates “a reasonably likely risk of substantial harm or inconvenience to an individual identified with the information.”
Reg S-P further extends that reporting requirement to any third-party service provider that firms might contract with for various tasks. Investment advisors, brokers and other institutions are required to draw up contracts giving outside vendors no more than 72 hours to report data breaches.
For investment advisors with $1.5 billion or more in assets under management, broker-dealers with $500,000 or more in total capital, and other “large institutions,” the deadline for complying with the Regulation S-P revisions was Dec. 3. Smaller firms have until June 3 next year.
Lapses in anti-money laundering and obtaining ‘trusted contact persons’
On the anti-money laundering front, FINRA noted many firms are failing to take required steps, such as requiring clients to furnish IDs proving they are who they say they are before allowing them to open accounts. FINRA said money launderers and other scammers continue to use fake identification documents, often drawn up with the help of AI, to open new accounts or take over existing ones from legitimate clients.
FINRA head of enforcement Bill St. Louis said on the podcast that there is nothing new in the anti-money laundering requirements. Yet regulators are still seeing many of the same violations.
“We’re seeing firms who have failed to maintain written supervisory procedures reasonably designed to detect and report suspicious activity,” he said. “We’re seeing issues around inadequate customer due diligence.”
FINRA also noted failures with its rule requiring firms to try to list for each client a “trusted contact person” who can be reached when suspicious activity is detected in an account. The rule is most often cited in attempts to prevent elderly investors from falling prey to scams, but FINRA’s oversight report reminds firms it applies to clients of all ages.
FINRA suggested various steps firms can take to encourage more clients to name a trusted contact. For instance, documents used to open new accounts should ask, “Who is your trusted contact person?” rather than the more easily dismissed, “Do you want to name a trusted contact person?”
Failures to explain investment recommendations under Reg BI
The oversight report also notes that firms sometimes fail to explain the reasoning behind their investment recommendations. The SEC conduct rule known as Regulation Best Interest requires brokers to always do what’s best for clients and disclose conflicts of interest.
FINRA said firms have been known to recommend investors move money out of one type of account into another without considering whether the change will cost more or whether less expensive options are available. It also calls out brokerages for recommending clients put money into often-risky alternative investments like cryptocurrency or private equity and credit without properly taking into account clients’ investing goals and the heavy fees they may incur.
FINRA Forward and listening to the industry
FINRA noted that many of its compliance suggestions this year were drawn up in response to a wide-ranging attempt to modernize its rules through an initiative called FINRA Forward. Among other things, FINRA has been soliciting feedback from its member firms on changes it might make to become a more effective regulator.
It has also put forward specific rule proposals. Earlier this year, for instance, FINRA proposed loosening brokers’ current requirements for reporting outside business activities like weekend bartending or Uber driving gigs held in addition to a main job.
Bergeron said it’s important for firms to stay in touch with regulators not only to talk about improving rules but also to alert them to new investor risks. She said most firms have adopted a “conservative and measured approach” to novel technologies like AI, “especially when it comes to customer-facing interactions.”
She added, “I also want to encourage firms to continue having those conversations with their risk-monitoring teams as gen AI issues arise or as they’re planning to do more in this space.”