Are boards doing enough to prevent the risks from AI?

The recent announcement by the Prime Minister regarding the establishment of an Artificial Intelligence Safety Institute marks a significant stride towards understanding and mitigating the potential risks associated with AI advancements. Additionally, the allocation of funds for a supercomputer and quantum computers available to businesses is a welcome boost to the UK economy.

However, UK corporate boards must act swiftly to equip themselves for the challenges that lie ahead. Effective governance for AI is paramount. The main goal should be to foster AI adoption characterized by consistency, transparency, accountability, and openness. As with any new technology, it is a balancing act between evolving too fast and becoming stagnant; good governance will help boards walk this tightrope.

Are boards doing enough?

In The Chartered Governance Institute UK &amp; Ireland’s (CGIUKI) latest Bellwether report, a mere 13% of FTSE 350 companies had implemented, or begun to implement, policies and procedures for the ethical use of AI. Astonishingly, two-thirds of FTSE 350 boards had yet to broach this critical topic, and a quarter indicated they saw no immediate need to do so.

Peter Swabey, Policy &amp; Research Director at CGIUKI, emphasizes: “As AI rapidly evolves, boards will need to be agile and adaptable in their approach to managing AI risks and opportunities. Boards which do not grasp this could fall behind their competitors or damage their reputation through the lack of a coherent approach.

Boards will need to develop a governance framework for AI that sets out clear roles and responsibilities, as well as policies and procedures for managing AI risks and opportunities. This framework should be regularly reviewed and updated to reflect changes in the business and the AI landscape.”

Too much reporting?

New company reporting requirements must be carefully monitored by the government to ensure they remain proportionate and balanced. The current approach, in which multiple regulators develop their own processes for overseeing AI, risks diverting valuable time and resources away from strategic discussion and decision-making at board level. With over 80% of FTSE 350 companies already grappling with increasing reporting demands, a balanced approach is imperative.

Essential Considerations

As a minimum, boards should consider the following:

Transparency and Accountability: AI systems must be transparent and accountable, allowing internal decision-makers, stakeholders, and regulators to understand how they influence company decisions.

Bias Recognition and Mitigation: Companies must be vigilant about potential biases within AI systems. Establishing checks and reviews can help audit and mitigate this risk, ensuring fairness and non-discrimination.

Data Governance and Privacy: Policies and procedures for data governance should be regularly reviewed to safeguard the privacy and security of collected and processed data. This will provide assurance that data is utilized in a responsible and ethical manner.

The UK is poised to seize the opportunities presented by artificial intelligence, but there is ground to cover compared with industry leaders such as the USA and China. The Prime Minister's announcements provide a valuable boost, but the onus is on corporate boards to prepare now. By putting the necessary arrangements in place, boards can not only harness the potential of this evolving technology but also overcome the challenges that may arise from its use.

See our full press release for more information, alongside our AI hub for further resources on this burgeoning technology.