Lawmakers Warn of AI Risks in Financial Services
Britain’s financial regulators are under pressure to take stronger measures to mitigate the risks posed by artificial intelligence (AI) in the financial sector. A cross-party group of lawmakers has called on the Financial Conduct Authority (FCA) and the Bank of England (BoE) to implement AI-specific stress tests, underscoring the urgency of preparing for potential disruptions caused by automated systems.
The Treasury Committee released a report urging regulators to abandon their current “wait and see” approach. The report recommends proactive intervention, highlighting that the rapid integration of AI into financial services presents both significant opportunities and serious risks.
Growing Use of AI Across UK Financial Firms
According to the report, approximately 75% of UK financial institutions currently use AI technologies, with applications ranging from insurance claims processing to credit assessments. While these innovations enhance efficiency and accuracy, they also introduce complex challenges that regulators must address.
Committee Chair Meg Hillier expressed concern over the readiness of the financial system to manage a major AI-related incident. “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying,” she stated.
Agentic AI Poses New Challenges
The FCA has previously acknowledged the risks associated with agentic AI—a form of AI capable of making decisions and executing actions without human intervention. Unlike generative AI, agentic systems can autonomously influence market dynamics, raising the stakes for consumers and financial stability alike.
Hillier emphasized that the growing sophistication of AI systems could significantly impact consumers. “If something has gone wrong in the system, that could have a very big impact on the consumer,” she noted.
Call for Clear Guidelines and Stress Testing
The Treasury Committee is urging the FCA to publish comprehensive guidance by the end of 2026. This guidance should clarify how existing consumer protection laws apply to AI-driven services and define the responsibilities of senior managers overseeing these technologies.
Additionally, the report recommends the implementation of AI-specific stress tests. These would help financial firms prepare for potential market shocks triggered by AI systems, safeguarding both consumers and the broader economy.
Risks to Consumers and Market Stability
While acknowledging the benefits of AI, the report also outlines several key risks. These include opaque decision-making in credit evaluations, exclusion of vulnerable consumers through algorithmic personalization, increased potential for fraud, and the spread of unregulated financial advice via AI chatbots.
Experts contributing to the report also voiced concerns about financial stability. They warned that AI-driven trading could amplify herding behavior in markets, potentially leading to systemic crises. Additionally, the heavy reliance on a small group of U.S.-based tech firms for AI and cloud infrastructure presents a concentration risk that could have global ramifications.
Regulator Responses and Future Plans
An FCA spokesperson welcomed the attention on AI, stating that the regulator would review the Treasury Committee’s findings. The FCA has previously indicated that it prefers a flexible regulatory approach, given the rapid pace of technological innovation. The Bank of England declined to comment on the report.
In a separate development, the UK Treasury appointed two new “AI Champions” to guide responsible AI adoption in financial services. Harriet Rees, Chief Information Officer at Starling Bank, and Rohit Dhawan of Lloyds Banking Group will play advisory roles in supporting the government’s AI strategy within the financial sector.
Balancing Innovation and Regulation
As AI continues to evolve, striking the right balance between encouraging innovation and ensuring consumer protection remains a challenge. The Treasury Committee’s report highlights the need for a more robust regulatory framework that keeps pace with technological advancements without stifling progress.
With AI becoming increasingly embedded in financial operations, lawmakers insist that regulators must act now to prevent future crises. Proactive measures like stress testing and clearer regulatory guidance are seen as essential steps to ensure that the benefits of AI do not come at the expense of financial stability and consumer trust.
