Artificial intelligence: Essential ingredient but no panacea
Michael McCabe says that although artificial intelligence brings real benefits in the security and resilience fields, it is unlikely that the human factor will become obsolete
Algorithms that monitor incidents globally provide much content, but lack the tactical perspective. Such platforms cannot distinguish subtle intricacies of language, different place names given by different local or ethnic groups or even plain spelling mistakes
(123rf / Bluebay)
Artificial Intelligence, or AI, is bringing human-scale intelligence to everyday technologies and, in the coming years, will have an ever-increasing influence within the security and crisis management environment.
One example of this is advanced video analytics software, which gives security analysts a technological edge that no surveillance camera alone can provide. By analysing video footage in real time and minimising human error, intelligent video analytics brings real advantages to those charged with security or resilience.
However, having studied human behaviour and operated in differing cultures and environments globally, I find it difficult to agree with claims that AI algorithms will consistently and accurately predict human behaviour and psychology, or that AI will soon replace skilled security analysts.
Society is made up of human beings who, by their very nature, are irrational, forming a global tapestry of differing cultures, each with its own unique interpretation of, and reaction to, events. With increasing globalisation and the growth of distinct ethnic or religious communities within a society, it becomes even more difficult to predict how an individual in that society may react.
A simple example of AI's failure to predict human behaviour is the stock market. No algorithm has yet been able to predict accurately the gyrations of the world economy, especially when factors such as market manipulation, natural disasters or government fiscal intervention come into play.
Although such tools complement and greatly leverage the analyst's ability to visualise a situation and to receive timely, accurate data, an experienced security expert will always be needed to provide oversight of a situation and to assess what an algorithm or software system cannot. How will a software system predict a multi-ethnic refugee crowd's reaction to a food drop in the Congo, complex tribal warfare over cattle in Sudan, or the innovative smuggling techniques adopted by narcotics gangs in the Netherlands?
Much of today’s terrorism and violence is driven by raw emotion rooted in hatred and irrational fear. The willingness to kill oneself and others for the sake of a religious cause is something that humans, let alone AI algorithms, find hard to understand or predict. The fact that human beings are intrinsically different from, and in many instances culturally alien to, one another makes the need for ‘boots on the ground’ intelligence and analysis all the more vital. If an armchair expert based in London cannot get it right, how is an algorithm programmed by like-minded individuals expected to do any better?
Furthermore, algorithms that monitor incidents globally provide much content, but lack the tactical perspective. Such platforms cannot distinguish subtle intricacies of language, the different place names given by different local or ethnic groups, or even plain spelling mistakes. As a result, an incident can be reported in the wrong location, or even the wrong country. Algorithms cannot provide information on interpersonal relationships and networks of interest, nor differentiate between disinformation and genuine reporting. A knowledgeable human analyst can discern clues and identify red flags that AI would simply not compute.
A complementary fusion of technology and human skill is still the best course of action for any operation with a stake in risk mitigation and security (123rf)
In terms of crisis management, there is a myriad of factors to assess when dealing with a specific situation. Because emotion can override logic, once a crisis reaches the point of spreading irrational fear it becomes extremely difficult to manage, let alone predict. This was evident during the recent Ebola and Zika outbreaks. An effective crisis management team learns to react quickly to differing situations and to use its expertise, experience and, in many cases, intuition to overcome a crisis. This is a unique set of skills that comes only through reality-based training, knowledge sharing and familiarity.
Furthermore, for every measure there is a counter-measure. What is to stop hostile groups from hacking or overriding a software system, commanding the AI to turn on its original owners, whether by opening the gates of a sensitive establishment, redirecting a drone carrying aid, or shutting a vehicle down?
This is not a theoretical question. We have already seen cyber-attacks on nuclear power stations in Europe this year, and a US drone was reportedly overridden and landed in Iran in 2011. Humans are, by their very nature, creative and adaptive in the face of problems. While computers and the machines they control are technological breakthroughs, they are not, for the time being, able to anticipate accurately human innovation or the capacity for creativity.
Human decisions are ultimately shaped by nurture, narrative, vision and environment. Although the continued development of AI will bring improvements and advantages to crisis management and security in the coming years, it is important not to underestimate or denigrate the human factor. A complementary fusion of technology and human skill is still the best course of action for any operation with a stake in risk mitigation and security.
Michael McCabe is the Chief Executive Officer of Intelligence Fusion. He has extensive international experience in security consultancy and crisis management. This article appears in issue CRJ 11:4.