The wonders of AI

In this article Maria Brookes discusses the applications, concerns and opportunities that surround AI and big data.

In June this year, the BBC reported that the missing edges of Rembrandt’s 1642 painting, The Night Watch, had been ‘restored’ virtually, using artificial intelligence (AI).

Rembrandt’s original canvas had been trimmed in 1715 to fit between doors at Amsterdam’s City Hall. Since then, the painting has been missing some 60cm from the left, 22cm from the top, 12cm from the bottom and 7cm from the right. Now, however, the Rijksmuseum in Amsterdam has harnessed the power of modern technology, training AI to recreate the missing parts of the painting using a high-resolution scan of the original and a painted copy by Gerrit Lundens, on display in London’s National Gallery. A masterpiece restored to its original glory – what’s not to like?

The idea of using machines to do things that would previously have been done by humans, or might even have been considered impossible, does not sit well with everyone. Concerns have been raised on a number of fronts – the human element, yes, but also machine bias, privacy and the ethics of computers using data sets to make predictions or steer consumers in a certain way.

Fears of mass surveillance à la ‘1984’ and dystopian regimes harvesting people’s data to control them might previously have seemed the stuff of science fiction. Yet advances in technology have made sci-fi novels like ‘The Three-Body Problem’ by Chinese writer Liu Cixin – which depicts the fate of civilisations as being almost entirely dependent on winning races to scientific milestones – seem more like works of ‘heightened realism’ than pure imagination.

The arms race to AI supremacy is already well underway. As AI can be used to develop cyber weapons and control fleets of drones for surveillance or attack, its development is increasingly seen as a national security concern. Vladimir Putin claimed in 2017 that whoever becomes the leader in AI will become the ruler of the world, and that claim doesn’t seem entirely far-fetched. China and the US are currently battling it out to be the global frontrunners, with China devoting billions of dollars to its stated ambition of becoming the global leader in AI research by 2030.

The potential harm that unregulated AI might unleash is a genuine concern that the EU is keen to address. The EU recently published its approach to AI, which ‘centres on excellence and trust, aiming to boost research and industrial capacity and ensure fundamental rights.’ The EU points to the benefits of AI in terms of improvements in industry and day-to-day life, such as using AI to help treat diseases, reduce pollution and minimise the environmental impact of farming. At the same time, it highlights the need for rules that safeguard people’s safety and fundamental rights, as well as the functioning of markets and the public sector.

It is clear that AI will have an enormous impact on the way people live and work in the future. Indeed, it is having an effect already, with AI helping to predict the geographical spread of COVID-19, as well as helping to diagnose the infection and develop the first vaccines and drugs against the virus.

Accenture’s ‘Business Futures 2021’ report reveals that 85% of C-suite executives surveyed believe that increased scientific capability is critical to the future competitiveness of organisations. Just as leading companies in recent years have become tech companies, leading companies are now becoming scientific companies, using science to tackle some of the world’s fundamental challenges. The possibilities are truly endless, and with such opportunity comes great responsibility.

At this year’s annual conference of The Chartered Governance Institute UK & Ireland, ‘Governance 2021’, we will be discussing how organisations can safeguard themselves from the reputational and financial risks associated with AI, and looking at how they might harness the opportunities AI and big data can deliver. Join Ivana Bartoletti, Technical Director – Privacy and Digital Ethics, Deloitte, and Visiting Policy Fellow, Oxford Internet Institute; Ansgar Koene, Global AI Ethics and Regulatory Leader, EY, Senior Research Fellow, University of Nottingham, and Director, EMLS RI Ltd; and Joao Barreiro, Chief Privacy Officer (Global), BeiGene, on 5 July at 14.00.

Maria Brookes, Media Relations Manager, The Chartered Governance Institute UK & Ireland
