With Davos 2019 taking a closer look at Globalization 4.0, the ‘Global Markets Forum: The future is now’ blog series attempts to untangle some of the challenges it raises. In his blog, Carl Ward of Accenture shares his views on responsible AI and explores how we can implement it ethically to realise its full potential for society.
- Artificial intelligence (AI) exerts a profound impact on our society. As that influence will only increase, AI needs to be implemented responsibly.
- The best way to ensure the responsible implementation of AI is to create an industry of safety around AI, supporting our workforce and considering the special role of government.
- Industries and governments need to work together to maximise the benefits of AI while protecting people and society from potential harm.
AI promises to deliver significant benefits to society – yet it also has the potential to do serious harm. How do we make sure that AI is implemented with humanity in mind?
AI has developed considerably in the last several years – delivering impressive results that range from language translation to autonomous driving. This change has been driven by two major shifts: the increase in compute capability that enables deep learning, and the increase in data available for training.
Future impact of AI
We expect AI to continue to have a major impact; its capabilities will increase as techniques and technologies evolve. Many forecasts predict that, over the coming decades, AI will progressively take over tasks currently performed by humans.
At some point, we will reach Artificial General Intelligence (AGI) – the point at which AI can operate equivalently to a human. Realistically, we are not yet ready for the disruption this will bring to society.
There are challenges with AI, and it has already created some undesirable outcomes.
When we use data to drive AI learning, we naturally sustain existing bias, which is unacceptable. We know that AI cannot explain its decisions – it does not currently work on causal models or rules that would provide a level of explainability. And AI does not implicitly have an ethical framework to govern its decisions; it will essentially do what it has been trained to do.
All of these challenges have led to real issues – racially biased prosecutions driven by AI; the bot that learned racism and sexism from humans; and the algorithms that became aggressive through reinforcement learning. As AI becomes more embedded in our society, the threat of harm will increase.
Implementing responsible AI
The potential for both benefit and harm from artificial intelligence is obviously significant – so what can we do to protect ourselves from harm and realise its full potential?
Firstly, we can learn from history: new technologies have always created new risks. The arrival of railways, for example, brought the risk of rail accidents, and it took many years for current practices and standards to evolve. Some were based on careful thought and analysis; some, unfortunately, resulted from accidents.
We will need to create an industry of safety around artificial intelligence – safety that includes physical protection, as well as the protection of our welfare and of society against bias. Our challenge will be evolving this quickly enough while the technology races ahead.
Watch: Accenture – AI and the role of government
How will AI affect the workforce?
Secondly, we need to broaden the debate with the community on the challenge of workforce change.
I believe that AI will create more opportunities than it eliminates in the medium term, yet these new opportunities will look nothing like the roles that are replaced. The promise of AI is fabulous – until you are the person whose job is replaced and you can no longer support yourself or your family.
As a community, we cannot ignore the significant impact on individuals that may result from AI. We need to work through how we support people going through changes – including re-skilling, a focus on lifelong learning, and de-stigmatising career transitions.
Read more of Carl Ward’s views on AI and technology
The role of government in responsible AI
Thirdly, we should recognise the need for governance and the role of government. Government is expected to keep us safe and protected – the essential purpose of government is to protect its citizens.
I believe there is a special role that government will play in making sure that AI is implemented responsibly, and helping citizens that are impacted by the reality of changing workforce needs.
The responsible answer would include some form of regulatory control, with industry and government working together to maximise the benefits of AI while at the same time protecting people and society from potential harm.
The balance between regulation and industry self-governance will depend on policies and politics within a country – but there is no doubt that we need to start this debate.
Discover more about the impact of AI on society and the public sector
Get more Davos 2019 coverage
From 22-25 January, join the GMF as our Reuters editors bring you unparalleled live coverage, exclusive Live Chats and the latest developments from the World Economic Forum.
See our full schedule of this year’s guest chats.
Watch: The Future is Now Series – The Global Markets Forum at Davos 2019
Eikon users will also be able to access exclusive coverage and top stories from Reuters, as well as streaming video content, in our [DAVOS] App.