Morningstar is worried the Securities and Exchange Commission’s proposed rule on conflicts of interest related to AI and predictive data analytics is too expansive, and is suggesting the regulator revisit Regulation Best Interest instead of creating new regulations.
In a letter released Tuesday (the final day for submitting public comment on the proposal to the SEC), Morningstar Head of Government Affairs Aron Szapiro and Associate Director of Policy Research Jasmin Sethi detailed concerns about how many tech and industry practices the proposal would cover.
In an interview with WealthManagement.com, Szapiro said the rule would be “very complicated and expensive to comply with” (while acknowledging that’s a common critique of many of the commission’s proposals).
“It puts a lot of scrutiny on products and processes that are designed right now to help reps avoid conflicts,” he said. “The commission talks about this being technology-neutral, but it’s kind of hard to see that that’s the case.”
In July, the SEC proposed new rules for firms using tech tools (particularly AI) that “optimize for, predict, guide, forecast or direct investment-related behaviors or outcomes,” mandating that firms determine whether conflicts inherent in using such software would place their interests ahead of those of investors. If so, “firms would be required to eliminate, or neutralize” those conflicts.
The proposed rule has already been subject to pushback from industry-associated lobbyists, advocates and legislators urging the SEC to withdraw it.
Representatives of the Financial Services Institute, the U.S. Chamber of Commerce and the National Association of Insurance and Financial Advisors co-signed a letter in opposition. Numerous Republican members of Congress argued in their own letter the rule as written was “misguided, unnecessarily broad, and threatens to harm both investors and our capital markets.”
The rule’s requirement that conflicts be eliminated or neutralized extends mandates on reps beyond Reg BI’s reliance on disclosure or mitigation (though Szapiro said it was still unclear exactly what the commission meant by the term ‘neutralized’).
But Micah Hauptman, the director of investor protection at the Consumer Federation of America, argued the proposal “correctly acknowledges” that tech-driven conflicts are too complex, and evolve too quickly, for most ordinary investors to understand and protect themselves against.
“There’s a significant likelihood of widespread investor harm,” he said. “And disclosure would be ineffective at addressing these concerns.”
The array of tech tools covered under the rule includes behavioral prompts and other game-like features that can “exploit psychological biases or tendencies,” according to Hauptman.
And he worried the harm can multiply quickly, because unlike a one-on-one interaction with a rep, broker/dealers can apply the tools (and any accompanying conflicts) to their entire user base instantaneously. The consequences can be severe, with Hauptman recalling “horrific instances” of investors who were decimated financially by trading in risky securities.
While many industry participants stress that Reg BI already covers these conflicts (and one FINRA executive previously warned brokers that AI-generated conflicts may be covered), Hauptman questioned how far Reg BI’s protections would extend.
Hauptman argued Reg BI applies only when there is a recommendation; if a b/d can successfully make the case that it’s using advanced tech to sway investor behavior without providing an actual recommendation, the SEC’s rules governing b/d conduct would not apply.
“That could create an incentive where b/ds use prompts to urge investors to take particular actions, but their activity and the use of the technology doesn’t rise to the level of making a formal recommendation,” he said.
Even when Reg BI and the Advisers Act do apply (to b/ds and investment advisors, respectively), Hauptman said relying on disclosing and mitigating conflicts would not suffice. Under Reg BI, he said, firm-level conflicts can be addressed solely through disclosure, including when an investor deals with a firm rather than a human representative (i.e., through an app like Robinhood).
But the combination of complex conflicts and sophisticated technology made it “unreasonable” to expect ordinary investors to understand those conflicts well enough to make informed decisions, which made regulators’ reliance on disclosure inappropriate, Hauptman said.
“If we really boil all this down, disclosure is never going to effectively counteract highly sophisticated technology that’s trying to manipulate human psychology,” he argued.
Szapiro acknowledged there may be a place for a narrower rule focused specifically on AI-related risks, and agreed Reg BI could be tailored further and strengthened by clarifying when conflicts do and do not apply.
But he and Sethi worried the SEC’s approach in the rule left little distinction between AI- or gamification-related tech and something like an algorithm or an Excel spreadsheet. That, they argued, underscored the need for a risk-based approach, a facet they felt was lacking in the rule as it stands.
“An Excel formula is a different risk than a chatbot that’s constantly learning,” Sethi said. “Compliance should be commensurate with risk.”
Sethi reasoned that the expansive reach of the new rule may stem partly from the SEC’s disappointment with firms that choose to disclose conflicts without mitigating them. But that outcome followed from the SEC’s own choices when crafting Reg BI, she stressed.
“The solution is you should have been more prescriptive on when you wanted people to mitigate,” she said. “You gave people a choice, disclose or mitigate, and they chose.”