FCA warns finance firms over AI fraud

Financial services firms in the region must be extra-vigilant against the threat of AI causing disruption in “ways and at a scale not seen before”, says the UK’s top financial regulator.

The Financial Conduct Authority has warned it will take action against AI-based fraud.

Chief Executive Nikhil Rathi said that there are risks of “cyber fraud, cyber-attacks and identity fraud increasing in scale and sophistication and effectiveness” as artificial intelligence (AI) becomes more widespread.

Prime Minister Rishi Sunak is hoping to make the UK a centre for the regulation of AI. The FCA’s work on AI is part of a broader effort to work out how to regulate the big tech sector as it increasingly offers financial products.

Rathi has warned that AI technology will increase risks for financial firms in particular. Senior managers at those firms will be “ultimately accountable for the activities of the firm”, including decisions taken by AI, he said.

“As AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously,” he said. “We will take a robust line on this – full support for beneficial innovation alongside proportionate protections.”

Rathi used the example of a recent “deepfake” video of the personal finance campaigner Martin Lewis supposedly selling speculative investments.

Lewis said the video was “terrifying” and called for regulators to force big technology companies to take action to stop similar scams.

The FCA recently published its feedback statement on Big Tech in Financial Services, saying: “We have announced a call for further input on the role of Big Tech firms as gatekeepers of data and the implications of the ensuing data-sharing asymmetry between Big Tech firms and financial services firms.

“We are also considering the risks that Big Tech may pose to operational resilience in payments, retail services and financial infrastructure. And we are mindful of the risk that Big Tech could pose in manipulating consumer behavioural biases.

“Partnerships with Big Tech can offer opportunities – particularly by increasing competition for customers and stimulating innovation – but we need to test further whether the entrenched power of Big Tech could also introduce significant risks to market functioning.

“What does it mean for competition if Big Tech firms have access to unique and comprehensive data sets such as browsing data, biometrics and social media?

“Coupled with anonymised financial transaction data, over time this could result in a longitudinal data set that no financial services firm could rival – one covering many countries and demographics.

“Separately, with so many financial services firms using Critical Third Parties – indeed, as of 2020, nearly two thirds of UK firms used the same few cloud service providers – we must be clear where responsibility lies when things go wrong. Principally this will be with the outsourcing firm, but we want to mitigate the potential systemic impact that could be triggered by a Critical Third Party.

“Together with the Bank of England and PRA, we will therefore be regulating these Critical Third Parties, setting standards for the services – including AI services – they provide to the UK financial sector. That also means making sure they meet those standards and ensuring resilience.”
