Who will regulate the future of Artificial Intelligence?

Rachel Rowson

AI and its applications do not recognise geographical borders, so we need to have a global conversation about how to ensure that these technologies are applied for good and to anticipate unintended consequences of AI.

Described by Google’s CEO as “more profound than electricity or fire” and by Elon Musk as having “vastly more risk than North Korea”, it is no surprise that there is a complex mix of hope and fear surrounding artificial intelligence (AI).

At the end of last week, Google released a set of seven ethical principles governing how it will use AI. The principles were trailed over the weekend, and the coverage set the tone for the controversy that has followed in media reporting, with headlines including ‘Google tries being slightly less evil’ and ‘Google renounces AI weapons; will still work with military’.

Regardless of how sensible the content of the seven principles may be, from a communications perspective, this is controversial simply because it has been produced by Google. In the networked age, who you are is as important as what you do.

Google has been one of the companies facing the big tech backlash, because of a lack of transparency over its ethics board and, more recently, because of a controversial AI project with the Pentagon (which is not going to be renewed).

Extraordinarily, it is currently only technology companies like Google that are directly tackling the ethical vacuum surrounding AI. Google CEO Sundar Pichai wrote in a blog post accompanying the principles: “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

Some governments have been slow to embrace AI, possibly because of trepidation about what it might mean for the future of society as we know it, but many are now entering the accelerating global ‘arms race’ to lead in the technology.

The UK is proud of its leadership status in AI (ranked fourth behind the US, China and Israel). We already have a thriving hub of cutting-edge AI research, with organisations like the Alan Turing Institute, DeepMind and BenevolentAI all setting up shop in London. In addition, the Government published an Artificial Intelligence Sector Deal last year, worth around £1 billion.

Other countries are keen to get in on the action. In a speech at the end of March, French President Emmanuel Macron pledged €1.5 billion (£1.3 billion) over the next five years and set out a strategy to make France a global AI leader. Microsoft and Fujitsu both timed announcements about investments in France to coincide with the speech. Last week, South Korea announced a £1.5 billion bid to seize the lead in AI technology by 2022.

But for all the money that governments are investing in AI leadership, they are not yet prioritising regulation. The Industrial Strategy announced a new Centre for Data Ethics and Innovation, a “world-first advisory body” that will advise the UK Government on the “ethical, safe and innovative uses of data, including AI”. However, this will not be a regulatory body.

This is why organisations like Google are having to take forward the debate about the ethics of AI themselves. AI and its applications do not recognise geographical borders, so we need to have a global conversation about how to ensure that these technologies are applied for good and to anticipate unintended consequences of AI. It seems that governments will only regulate at the point of crisis, when it can no longer be avoided and when it may be too late to safeguard society from malicious use of the technology.

Multinationals are left at the helm, navigating the moral maze of AI ethics. It is therefore perhaps heartening that Alphabet (Google’s parent company) has changed its tone on AI in recent months, moving away from talking up its huge ambition for the technology towards a more measured approach.

In his annual letter to Alphabet shareholders in April, co-founder Sergey Brin wrote: “While I am optimistic about the potential to bring technology to bear on the greatest problems in the world, we are on a path that we must tread with deep responsibility, care, and humility.”

I too am optimistic about the potential of AI, but we need to use the international debate sparked by the publication of the Google principles to make sure that appropriate regulation is in place to harness the power of AI before it is too late.