Make boards responsible for AI failures, banking regulator suggests

Boards and senior managers in financial organizations could be made directly responsible for institutional risks created by artificial intelligence under a new consultation published this week by Singapore’s financial regulatory authority.

Although the principle of executive responsibility for AI risk already features in regulatory regimes elsewhere, notably the EU’s AI Act, this appears to be the first time that guidelines have spelled out those obligations in such detail.

By intervening now, the Monetary Authority of Singapore (MAS) has an opportunity to make clear the responsibility of boards in advance of the technology becoming more deeply embedded in the sector.

It’s a timely intervention: Singapore’s financial sector is currently in the grip of the same boom in AI investment affecting institutions across the globe. Prominent in this are three of the city-state’s biggest institutions, DBS Bank, Oversea-Chinese Banking Corporation (OCBC), and United Overseas Bank (UOB), which have all recently announced plans to retrain their combined Singapore-based workforce of 35,000 to use AI.

DBS earlier announced that it was cutting 4,000 roles from its 41,000-strong global workforce as it channels more day-to-day functions to AI. Both initiatives underline the city-state’s growing economic dependence on the technology.

With this in mind, the MAS consultation document said, “the Guidelines aim to establish a set of expectations that are generally applicable across the financial sector and may be applied in a proportionate manner across FIs of different sizes and risk profiles.”

The board of directors as AI expert

The document sets out the responsibilities of boards in detail, including ensuring that the board has “an adequate understanding of AI to provide effective oversight and challenge.”

It won’t be enough for boards simply to rubber-stamp AI implementation: Under the proposed regime, they will be expected to assess the risk of every aspect of an implementation and agree on which individuals or board-appointed committees will be responsible for overseeing specific elements of that risk.

One anxiety is that AI will introduce new and poorly understood categories of risk. This could lead to unexpected behavior causing service disruptions, failures to spot financial crime, different kinds of undetected bias, and reputational risks caused by chatbots offering incorrect information to customers.

These dangers could be amplified by greater use of generative AI, which remains unpredictable and hard to test in advance of rollout, MAS said. Here the risk level steps up a gear, taking in data poisoning, prompt injection, use of data without consent, legal and IP exposure, and outages in underlying AI services.

The risk of using AI to assess risk

“Poor performance of AI models used for risk assessments could lead to substantial financial losses, unexpected behaviours in AI systems could disrupt critical operations, and inappropriate outputs from customer-facing AI systems could result in harm or financial loss to customers,” said MAS.

But this was only the start, MAS said: “The use of newer technologies such as AI agents, which may be granted greater autonomy and access to tools, could further amplify these risks.”

Addressing increasingly complex AI risks will demand sustained effort from boards, both to identify danger points and to establish durable long-term oversight.

“Lots of things can go wrong when the entire banking system is agentic AI-driven and constantly learning and evolving. The risk can be immeasurable. Global regulators are under-estimating the implications of such complex system in totality,” commented MK Tong, CEO of IT consultancy Sotatek.

However, given Singapore’s influence as an innovator in banking and technology standards, the MAS guidelines, once finalized, could become a de facto global standard, Tong said.

“Singapore’s unique ‘proportionate, principles-based, yet comprehensive’ model offers a compelling alternative to the EU’s legislative-heavy AI Act and the US’s fragmented, enforcement-led approach,” said Tong.

The MAS guidelines are part of a wider effort to raise standards of governance and security around AI. Last month, Singapore announced its Guidelines and Companion Guide for Securing AI Systems, which are designed to safeguard the technology from a range of widely publicized threats and are built on the principle of secure by design.

Nevertheless, regulation and good practice alone are unlikely to be enough. In July, SecurityScorecard reported that 91% of the largest companies it assessed had earned a coveted A-grade rating for cybersecurity, despite every one of them suffering a supply chain breach in the past year.

Source: cio.com