With the ‘future of work’ increasingly becoming our present reality, many organisations are falling into the trap of placing too much trust in technology. When it comes to risk management, it is important that we remember the value of humanity. There are some things robots and machines can’t do – humans must always be ready and willing to pick up the slack.
As we all know, the nature of risk has changed in recent years. Not only is our conception of it much broader than it was 20 years ago, but we understand and accept that it exists at all levels of our organisations. Naturally, we have adapted our risk management practices accordingly.
I think most would agree when I say that technology has been a pivotal catalyst in this transformation. AI and other tools have increased our access to operational data and accelerated our analytical capacity. As a result, we have a much larger pool of information from which to assess risk, along with a stronger ability to draw insightful conclusions from it.
However, as is becoming the trend with new technologies, organisations seem to be over-reliant on machine learning when it comes to risk. Technology is not meant to be a substitute – it is an addition designed to increase efficiency and effectiveness. If your organisation relies entirely on technology in its risk management practices, you may be exposing it to damage it cannot foresee.
I think this notion is best reflected in a quote from James Dunn at KPMG: “To a large extent, the insights still come from humans – as do the risks.” What Dunn is getting at is that human beings are involved in risk in two ways that technology can neither replicate nor compensate for: human beings create risk, and human beings draw conclusions about risk.
Creating Risk
Firstly, technology cannot stop risks from emerging or coming to fruition. Risk stems primarily from human conduct. No matter what technological safeguards you have in place, they will be undermined if your organisation has a culture in which employees operate without regard for the consequences of their behaviour.
Take the recent Royal Commission as an example. The big banks and other financial institutions are well equipped for risk management and assessment. The problem was that they failed to properly account for behavioural risks; the conduct of their employees was flying under the radar.
The best way to remedy human-induced risk is not technology but organisational culture. PwC’s November edition of Audit and Risks Insights notes that an open, collaborative culture that encourages employees to speak up when they think something is not right is the best way to stamp out misconduct before it spirals out of control.
In essence, human-to-human interaction is pivotal to your organisation’s management of this risk. This is something technology cannot account for.
Analysing Risk
“Machines do analytics. Human beings do analysis.”
That’s how Josh Sullivan, Head of Data Science at Booz Allen, sums it up, and I think it encapsulates the point perfectly.
While machines and AI can collate vast amounts of data and find trends, analysis is more than this. It involves imagination and cognition. Picture a connect-the-dots puzzle: AI gives you an intricately detailed set of dots, but it takes a human being to actually pick up the pen and connect them.
We can try to recreate the intuitive interpretation of risk data in AI, but there is always a chance that a fresh set of human eyes will bring something new to light.
Concluding Remarks
It is understandable that much technological innovation is being directed towards risk management. It is, undoubtedly, an important area – organisations live and die by their ability to determine the risks they are exposed to and how those risks threaten to undermine them.
However, executives cannot forget the power of their people. Technology can never be a complete substitute for humanity. At their core, organisations remain people-driven, and this is especially true of something like risk.
There is only so much AI can do. Human beings will always be capable of what AI is not.