When the UK AI industry gets together later this month at AI & Big Data Expo, we expect to hear big things about its growing influence on key issues such as cybercrime, process automation, retail, healthcare and customer experience – all areas in which we have clients actively working. However, amid the anticipation of new and exciting innovation, we hope to find ethical AI woven throughout all aspects of the conference agenda.

This is a topic that is gaining momentum, but with AI still something of a catch-all term, many industries have not yet moved on to this crucial next stage of the debate.

To provide some context, Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, is the author and co-author of two best-selling books[1] on one theme: the fourth industrial revolution (4IR).

The first three revolutions all transformed production in some way. The first combined water and steam power to mechanise production; the second harnessed electricity to enable mass production; and the third mobilised electronics, computers and IT to automate production.

We have, for a handful of years, been living through the fourth revolution – and it’s one where a step-change in production isn’t the single outcome. This revolution is really quite different. Professor Schwab wrote that it “is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.”

It is impacting almost every industry in every country – developed or otherwise – and creating change at a speed humanity has never before experienced.

The 4IR brings opportunities alongside risks that need to be proactively managed, and artificial intelligence (AI) is increasingly the glue that enables various technologies, devices, applications and systems to deliver the opportunities we are only just starting to realise. At the most fundamental level, AI is already showing signs of improving our world – from people’s lives to wildlife and our climate. It’s truly democratic in its potential and applications, and they are vast. PwC research estimates that AI could add $15.7 trillion to the global economy by 2030, with the UK expected to gain an additional £232bn by the same year.

PwC breaks down the scope of AI into four key areas:

  • Automated intelligence: Automating manual, routine tasks;
  • Assisted intelligence: Performing tasks faster and better;
  • Augmented intelligence: Helping people to make better decisions; and
  • Autonomous intelligence: Automating decision-making processes without people.

Across these four aspects of AI, commentary centres on ensuring ‘safe AI’ – where key stakeholders are collectively responsible for ensuring it is developed and implemented in line with general human values.

One of the key risks concerns social ethics. It’s a broad issue that is, encouragingly, gaining traction and visibility – in international discussions in the US and in an upcoming bias review by the UK’s Centre for Data Ethics & Innovation. Businesses, investors, policy-makers and the public all share responsibility here. We must collectively learn from the recent scandals that stick in the mind, such as Cambridge Analytica – something author Bernard Marr has explored in a recent article for Forbes.

However, much like big data in the last decade, a lot of the talk around AI in B2B applications is about what it “could do” – it’s largely hypothetical and all about the possibilities. And demand for those possibilities is high: as recently as December last year, $322 million was raised for B2B startups in just one week. Time will tell, but we now need to move on to ‘how’ AI should be adopted – and I think responsibility plays a key role in this.

[1] The Fourth Industrial Revolution (January 2017) and Shaping the Future of the Fourth Industrial Revolution: A guide to building a better world (November 2018)
