Within the domain of technology, we are often caught by surprise. A technological innovation captures our attention or provides such incredible value that it is rapidly adopted, becoming an essential component of your business or greatly impacting it by influencing the values and behaviors of your customers.
For example, while the science fiction authors of decades past might have predicted a pocket computer, none could have accurately forecasted the impact that mobile phones and data networks would have on modern business and society, or the rush to adopt “mobile first” and “BYOD” strategies in the enterprise.
Rarely can we look at any one technology and know in advance that it will be incredibly impactful. Artificial Intelligence is such a technology, and that recognition affords us an opportunity to develop and deploy it securely. This is a call to action to do just that.
The emerging opportunity for the business use of AI is not one in which sentient technologies replace your board of directors. The opportunity space is much more practical: Machine Learning and narrow AI automate business processes to help us understand and act on our data with tighter OODA Loops, resulting in greater opportunity and appropriately managed risk.
For a thorough review of the history of AI and its business implications, please review the OODA Research Reports entitled A Decision Maker’s Guide to Artificial Intelligence and Artificial Intelligence for Business Advantage.
AI is a technological tsunami that is coming for your business whether you are ready or not. The return on investment is too great to pass up, and ignoring its potential would cede too much advantage to your competitors.
Even in these early days of AI emergence, Gartner reports that enterprise spending on AI already exceeds $1.4 trillion and has grown 270% over the past four years. Large companies like Shell have adopted what they call an “AI First” strategy, in which they address essential business opportunities by first looking at how AI and Machine Learning can be applied.
In a global survey of 2,000 participants across ten industry sectors, eight business functions, and businesses of varying sizes, McKinsey discovered that:
AI will either define or destroy many business models in the coming years, affording a level of automation, data processing, and algorithmic learning unprecedented in the history of technology and business.
Another challenge that many enterprises face with AI deployments is addressing issues of ethics, explainability, and compliance. Having a robust AI security strategy is a precursor that positions the enterprise to address these critical issues.
“History does not so much repeat as echo.” – Lois McMaster Bujold
A decade ago, I hosted Dr. Steve Lukasik at my home for a barbecue with other colleagues. For those unfamiliar with Dr. Lukasik, he was the Director of DARPA when the original Internet project was established and funded, making him the earliest financial backer of the Internet. As he humbly inscribed on my “Early Map of the Internet” poster: “It wasn’t my brilliance, but I did sign the checks.” In a conversation regarding cybersecurity, he offered a perspective that few of us have been in a position to reflect upon. “Had we known the Internet would be so important, that it would be the hub of global communications, finance, and so many other infrastructures, we would have thought about security at the onset,” he noted.
Too often, security is an afterthought in our technology development planning, and we must confront the costly and less effective reality of attempting to secure a technology post-deployment. Addressing security concerns at a late stage greatly diminishes the risk mitigation strategies available to your team and weakens your overall security posture.
Given that we know AI will be impactful in the enterprise, modern organizations would do well to develop an approach for securing their AI assets now.
Here are the four core components you should include in your AI security initiatives.
Artificial intelligence technologies rely on a robust array of infrastructure components for successful operation. As with other enterprise technology deployments, it is important that the infrastructure used to operate your AI platform is secure to prevent unauthorized intrusion.
For enterprise technologists and cybersecurity experts, this challenge closely mirrors traditional risk management approaches, including host system/server hardening, use of encryption for data at rest and in transit, and red teaming to expose and mitigate vulnerabilities and reduce the attack surface of the platform.
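For an AI platform, those controls apply directly to the model artifacts and data stores at its core. Below is a minimal sketch, assuming a Python pipeline; the file names and key handling are illustrative (in production, keys belong in a KMS or HSM), showing encryption at rest paired with an integrity check so a tampered model is never loaded.

```python
# Minimal sketch: protect a model artifact at rest with symmetric
# encryption plus a SHA-256 integrity digest. Paths and key handling
# are illustrative stand-ins, not a production design.
import hashlib
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

MODEL_PATH = Path("model.pkl")           # hypothetical model artifact
ENCRYPTED_PATH = Path("model.pkl.enc")

def encrypt_artifact(key: bytes) -> str:
    """Encrypt the model file and return a digest of the plaintext."""
    plaintext = MODEL_PATH.read_bytes()
    digest = hashlib.sha256(plaintext).hexdigest()
    ENCRYPTED_PATH.write_bytes(Fernet(key).encrypt(plaintext))
    return digest

def load_artifact(key: bytes, expected_digest: str) -> bytes:
    """Decrypt the model and refuse to load it if its digest has drifted."""
    plaintext = Fernet(key).decrypt(ENCRYPTED_PATH.read_bytes())
    if hashlib.sha256(plaintext).hexdigest() != expected_digest:
        raise RuntimeError("Model artifact failed integrity check")
    return plaintext

if __name__ == "__main__":
    MODEL_PATH.write_bytes(b"demo-model-bytes")  # stand-in for a real model
    key = Fernet.generate_key()                  # in practice, fetch from a KMS
    digest = encrypt_artifact(key)
    print(f"Loaded {len(load_artifact(key, digest))} bytes, integrity verified")
```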
Most modern AI and Machine Learning implementations are prediction machines and business process automation platforms, with algorithms and data science as their core operational component and technological differentiator. These algorithms are subject not only to adversarial targeting through direct compromise, but also to adversarial modeling designed to create unexpected outcomes, and, without proper validation and integrity checking, they can develop bias.
The OODA Research Report on When AI Goes Wrong provides multiple examples of this issue.
Securing your algorithms requires an approach that blends data science and traditional red teaming, examining the AI’s technical implementation against the deployment’s desired business logic and outcomes. It means stress testing your algorithms to ensure viable future performance and thinking through novel ways in which they can be influenced, manipulated, or directly targeted.
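As a minimal sketch of what such a stress test can look like, the example below uses a public scikit-learn dataset and a logistic regression model as stand-ins for your own data and pipeline, and measures how quickly accuracy decays as inputs are perturbed. The noise scales are illustrative assumptions.

```python
# Minimal sketch of algorithmic stress testing: measure how a trained
# model's accuracy degrades as its inputs are perturbed.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for your production model: any estimator with a score() method.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_scale in (0.0, 0.01, 0.05, 0.1):
    # Scale the noise by each feature's observed spread so the perturbation
    # is meaningful across features measured in very different units.
    noise = rng.normal(0.0, noise_scale, X_test.shape) * X_test.std(axis=0)
    accuracy = model.score(X_test + noise, y_test)
    print(f"noise scale {noise_scale:>4}: accuracy {accuracy:.3f}")
```

A model whose accuracy collapses under small perturbations is brittle, and brittleness is exactly what an adversary will probe for; results like these should be routed to your red team for deeper adversarial testing.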
“One of the most important security issues around Artificial Intelligence is protecting data used to train algorithms. We can use the best practices of good digital system hygiene to protect this type of training data but the protection of training data must be a priority at the beginning of the design of an algorithm” – Congressman Will Hurd
Data is the fuel from which our AI platforms derive their value. It is important that the enterprise ask questions about the integrity of the data being utilized. For example, you should ask:
Does your data accurately represent the business problem you are trying to solve? For example, are you making any assumptions based upon missing data?
Has the data been compromised by an external attacker?
Can the data be compromised or changed, whether maliciously or accidentally, by an internal or external actor?
Do you have a strategy for change control or iterative improvements/modifications to the data that allows for algorithmic reversion in the event of an identified issue? Some Machine Learning platforms add variables and manipulate data in ways that are not decipherable by computer scientists. If you encounter an issue, do you have a reversion strategy, or do you have to restart the learning from day zero? (A minimal sketch of such a reversion checkpoint follows this list.)
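One way to make reversion possible is to version every training dataset with a content hash before it feeds a learning run. The sketch below is a minimal illustration, assuming training data lives in local files; the directory layout and manifest format are hypothetical, and a production system would use a dedicated data versioning platform with access controls.

```python
# Minimal sketch: record a content hash and manifest for each training-data
# version so a known-good dataset can be restored if tampering or drift is
# detected later.
import hashlib
import json
import shutil
from pathlib import Path

SNAPSHOT_DIR = Path("data_versions")  # illustrative layout

def snapshot(data_file: Path, version: str) -> str:
    """Copy the dataset into a versioned snapshot and record its hash."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(data_file.read_bytes()).hexdigest()
    dest = SNAPSHOT_DIR / f"{version}_{data_file.name}"
    shutil.copy2(data_file, dest)
    manifest = SNAPSHOT_DIR / "manifest.json"
    entries = json.loads(manifest.read_text()) if manifest.exists() else {}
    entries[version] = {"file": dest.name, "sha256": digest}
    manifest.write_text(json.dumps(entries, indent=2))
    return digest

def revert(version: str, target: Path) -> None:
    """Restore a prior data version, verifying it against its recorded hash."""
    entries = json.loads((SNAPSHOT_DIR / "manifest.json").read_text())
    entry = entries[version]
    src = SNAPSHOT_DIR / entry["file"]
    if hashlib.sha256(src.read_bytes()).hexdigest() != entry["sha256"]:
        raise RuntimeError(f"Snapshot {version} failed integrity check")
    shutil.copy2(src, target)

if __name__ == "__main__":
    data = Path("training_data.csv")
    data.write_text("feature,label\n1.0,0\n")  # stand-in for real data
    snapshot(data, "v1")
    revert("v1", data)  # roll back to a known-good copy on demand
```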
Many AI platforms rely on external data sources and modern enterprises must ensure that these external data dependencies do not create a risk to your business outcomes. The potential for external data disruption could manifest itself in two primary ways.
Adversarial ML algorithms could target your own algorithms to drive them towards a desired outcome. For example, a feeder algorithm designed to tag objects in photos might intentionally misrepresent those objects to “trick” your AI into making false assumptions or taking the wrong action based upon the deceptive data.
Influence campaigns by sophisticated human attackers will attempt to pollute public data signals to influence algorithmic actions. For example, imagine a public stock trading algorithm that incorporates sentiment analysis from social media being targeted with an influence campaign similar to those we have seen targeting public elections in the United States and elsewhere.
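A first line of defense is to screen external signals for statistically abnormal shifts before they reach a decision-making algorithm. The following is a minimal sketch, assuming a numeric sentiment feed; the baseline values and z-score threshold are illustrative assumptions, not a production detector.

```python
# Minimal sketch: flag sentiment readings that deviate sharply from their
# rolling baseline, the kind of sudden shift a coordinated influence
# campaign might produce, before the signal reaches a trading algorithm.
from statistics import mean, stdev

def is_suspicious(history: list[float], new_score: float,
                  z_threshold: float = 3.0) -> bool:
    """Flag a reading whose z-score against the baseline exceeds the threshold."""
    if len(history) < 10:
        return False  # not enough baseline history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_score != mu
    return abs(new_score - mu) / sigma > z_threshold

baseline = [0.10, 0.12, 0.08, 0.11, 0.09, 0.10, 0.13, 0.10, 0.11, 0.09]
for reading in (0.12, 0.95):  # the second mimics a sudden engineered spike
    if is_suspicious(baseline, reading):
        print(f"quarantine reading {reading}: possible signal pollution")
    else:
        baseline.append(reading)  # accept into the rolling baseline
        print(f"accepted reading {reading}")
```

A flagged reading should be quarantined for human review rather than acted on automatically; in practice you would also examine source concentration, since influence campaigns often rely on a small number of coordinated accounts.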
Your business must develop a full understanding of your external data dependencies, evaluate the potential for those dependencies to be compromised or influenced, and formulate a strategy for resiliency. This involves a very healthy dose of red teaming to see beyond the obvious vulnerabilities and identify long-term risks.
“The best time to plant a tree was 20 years ago. The second best time is now.” – Chinese proverb
The most critical component of your AI strategy is to develop an immediacy of action around AI security risks. Too often we rush towards business outcomes and deploy new technologies without building a risk and vulnerability management strategy into the program from the outset. Taking the time to build your AI security model now is vitally important.
At a minimum, your AI security strategy should provide for technical and business-centric red teaming. These red teams will require a blend of cybersecurity professionals, data scientists, and business analysts who can take a holistic look at how to develop robust and resilient AI approaches for the enterprise. It can also be useful to think through these issues with a HACKthink framework.
We’ve highlighted four key areas that must be addressed in all AI and ML security strategies, and focusing on those areas is a great first step. If you are planning for the importance of AI in your enterprise, you should also be planning to secure it.
Note: A video presentation on this topic can be found on our OODA Video On Demand page.