OODA Members are highly informed on the state of Artificial Intelligence, with most already applying some component of it to business operations and all benefiting from AI in their personal lives via popular applications from Google, Amazon, Microsoft, Facebook, Apple and others. That AI is here to stay is clear. But where is it going in the near future? This special report was produced to provide insight into this critically important megatrend.
We start with a definition of AI:
Artificial Intelligence (AI) is the application of thinking machines to real world problems.
This definition is unique. We like it because it focuses on practitioners. Once you take a practitioner’s view, you see that AI is really about far more than algorithms. A wide range of technical and non-technical factors must come together to deliver results.
Examples of the technical and non-technical components of real-world AI solutions follow:
Tech Components:
- Analytic Algorithms (including Machine Learning, Deep Learning)
- Natural Language Processing
- Robotics
- Computer Vision
- Data Management
- Sensors
- Hardware architectures
- Technical security measures
Non-Tech Components:
- New business strategies
- Cybersecurity policies
- Business risk policies
- Ethics
- Legal and regulatory regimes
- Training and testing
- Operation and maintenance
- Hiring, promotion, career management
The practitioner’s view of AI can help in thinking about the future trends that can lead to the most impact and also trends that may be of concern to decision-makers.
Observations on Artificial Intelligence Trends:
- With a wide range of business models returning profit now, all indications are that AI will continue to improve and permeate even further into the business world. But there are some moderating factors, including challenges with meeting compliance regimes.
- Businesses can build their own AI capabilities by hiring developers, but most will be able to take advantage of AI more quickly by using SaaS services with AI embedded.
- The evolution of AI has been accelerating due to its coupling with incredibly low cost and agile cloud computing.
- Creators build AI functionality through a “generate and test” approach, and there is no widely accepted protocol for securing or testing AI. This is a huge negative (we provide more insight on the security of AI below).
- There are four major problems with AI today: 1) Some of the most capable AI is not scrutable (you can’t see how it works), 2) AI can be easy to deceive, trick or hack, 3) AI can be unfair, unethical and unwanted, and 4) AI can be leveraged by competitors and even criminals to your detriment.
- AI, especially Machine Learning, is playing a huge role in modernizing the cybersecurity industry. AI is also being used by cyber criminals, with many in the security community predicting that AI-enabled malware is coming soon.
- AI can be easier to deceive than conventional computer software (see Generative Adversarial Networks: A very exciting development in Artificial Intelligence).
- There are many lessons that can be learned from others on ways to improve your corporate governance over AI including ethics around AI.
- Nations have built AI strategies. The short version of the approaches: in totalitarian countries like Russia and China, AI is seen as a weapon for dominance. In more open nations, national strategies are seen as a way of improving society through AI while protecting privacy and mitigating ethical concerns.
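The “easy to deceive” observation above can be made concrete with a small sketch. Everything below is an illustrative assumption rather than an attack on any real system: a tiny logistic-regression classifier is trained on synthetic data, then a fast-gradient-sign (FGSM-style) perturbation flips its prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 clustered near (-2, -2), class 1 near (2, 2).
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Train logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

x = np.array([2.0, 2.0])   # confidently classified as class 1
# FGSM-style step: move the input along the sign of the loss gradient.
# For a class-1 input, d(loss)/dx = (p - 1) * w, whose sign is -sign(w).
eps = 2.5                  # visibly large in 2-D; tiny steps suffice in high dimensions
x_adv = x - eps * np.sign(w)
```

In this toy two-dimensional setting the perturbation is obvious, but the well-documented concern is that for high-dimensional inputs such as images, per-feature perturbations far too small for a human to notice can flip a model’s output.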
AI and Cybersecurity
With AI being used by both defenders and attackers in cyberspace, this is a key trend to track. Many AI solutions are showing up in our consumer IT devices. Others are being sold to enterprises. In every case, we can expect that thinking adversaries are seeking ways to leverage the AI to gain unauthorized access to systems. Our recommendation is to ensure all AI solutions are well tested before they are used in the enterprise.
The Cybersecurity of AI
This topic is often confused with the more common application of AI to cybersecurity. It is different. This is the critically important topic of protecting your AI from attack, including means to protect your data before AI operates over it, to protect the training data for models, and to protect the models themselves. It also includes the closely related topics of AI bias and AI explainability, as well as the ethics of AI. For more on this topic see the Executive’s Guide to Security of AI.
AI in Biology, Healthcare and the anti-COVID-19 Fight
AI has long been contributing to advances in the biological sciences, especially around the data analytics associated with the many data-intensive explorations underway in the field. AI has also long been contributing to the digital transformation underway in the healthcare sector. We are early into the anti-COVID-19 fight, so it remains to be seen how big a role AI solutions will play in optimizing treatments against this virus, but we do expect it is in use in multiple activities and will be tracking that closely.
Open questions decision-makers should track include:
- Are there AI solutions that can give my business a competitive advantage? Are competitors leveraging AI in ways that will surprise us?
- Will the job displacement caused by AI become a crisis? Will governments impose regulations on companies because of it?
- How, specifically, will job displacement impact my workforce?
- How can we ensure the AI we are using will be used in ways our customers regard as ethical?
- Can behavioral analytics enhance security?
- How can machine learning improve cybersecurity?
- How can AI contribute to the rise of biological sciences and new focus on healthcare and patient outcomes?
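On the behavioral-analytics question above, a minimal sketch of the idea (the counts and threshold here are illustrative assumptions, far simpler than any real product): learn a per-account baseline of hourly failed-login counts, then flag hours that deviate sharply from it.

```python
import statistics

# Synthetic hourly failed-login counts for one account during a baseline period.
baseline = [1, 0, 2, 1, 0, 1, 2, 1, 0, 1, 1, 2]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, z_threshold=3.0):
    """Flag an hourly count more than z_threshold standard deviations above baseline."""
    return (count - mean) / stdev > z_threshold

# A burst of 25 failures in one hour stands far outside this account's
# normal behavior, while a count of 2 is within it.
```

Real behavioral-analytics products model much richer signals (login times, geolocation, device fingerprints, lateral movement), but the core pattern is the same: learn normal behavior, then alert on statistically significant deviations.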
Due Diligence Assessments and Artificial Intelligence
The trend of Artificial Intelligence is an increasingly important element of corporate due diligence since it is so disruptive to business models.
- On the sell side: Firms should ensure their use of AI is secure and ethical (see our special report at OODAloop.com, “When AI goes Wrong,” for insight into issues and mitigation strategies). This applies to any firm that uses any AI-enabled capability. However, firms that produce AI (vendors) should pay particular attention to this; it will make a big difference in how well a firm is valued.
- On the buy side: Buyers should pay particular attention to the use of AI in the target to ensure a well-thought-out architecture that mitigates risks. External and independent verification and validation of AI ethics and security policies and practices are key, as is the degree to which the target is complying with appropriate compliance regimes.
Strategically, the acquisition of technology firms is an art requiring assessment of how unique the capability is and how much in demand it will be in the market. We provide a special focus on due diligence for artificial intelligence companies via our parent company, OODA LLC.
Additional insights to inform your business strategy in an age of digital transformation can be found in our OODA Members Resource Page.
About the Author
Bob Gourley
Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity and data analytics. He is CTO of OODA LLC, a unique team of international experts which provides board advisory and cybersecurity consulting services. OODA publishes OODALoop.com. Bob has been an advisor to dozens of successful high tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.