Whether you are overwhelmed with data or simply curious about what you might learn, you may be feeling the impulse to jump on the artificial intelligence (AI) bandwagon. Before you go too far down that road, please consider this Top 10 list of the most common mistakes managers make when building an AI project. It comes from long, hard lessons learned across multiple missions and IT clients over the years. In order, we start with the first: picking a problem that is too small.
Mission owners have a lot to do, and it is usually the most annoying or time-intensive tasks they most want to automate. I never begrudge someone who is trying to make better use of the cognitive talent of their team. However, the annoying tasks are usually not good candidates for AI projects. It is incumbent on the project manager to ask whether the problem being addressed is large enough for an AI solution.
Start asking questions about the business processes they want to optimize and the data needed to run those processes. If your data measures in the megabytes, your problem is probably too small. That said, there may be enough complexity in those megabytes that it takes a large staff to process them in a timely way. If the level of effort approaches 10 or more staff members, you may have a good problem for AI to tackle.
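To make that screen concrete, here is a minimal sketch in Python encoding the two rules of thumb above. The 1 GB and 10-staff cut-offs are illustrative thresholds drawn from this guidance, not hard limits:

def worth_an_ai_project(data_volume_gb: float, staff_needed: int) -> bool:
    # Rule of thumb: megabytes of data are probably too small, but a
    # task absorbing ten or more staff members is probably big enough.
    big_enough_data = data_volume_gb >= 1.0
    big_enough_effort = staff_needed >= 10
    return big_enough_data or big_enough_effort

print(worth_an_ai_project(data_volume_gb=0.05, staff_needed=3))   # False: too small
print(worth_an_ai_project(data_volume_gb=0.05, staff_needed=12))  # True: complexity drives effort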
In Hollywood, they buy pilots for lots of shows. Very few of those shows get picked up to become a series, roughly 10% in a given year. Hollywood's model is completely appropriate for a creative process. It is not appropriate for software development, where the return on investment (ROI) may not surface for years.
A common impulse in every organization, but in government particularly, is to pilot an AI program to "just see what happens." I recommend resisting this impulse at every turn. It either leads to meager investments that never allow the program to grow fully, or it results in a "one and done" project. Plan and program for a multi-year project in order to fully test everything you are trying to do.
If it is not possible to build a multi-year program, seek shelter in an innovation program within your organization. A well-run innovation program will give you the time and space to get your program off the ground and to argue for more resources along the way.
Showing improvement over time is the best way to preserve the future of your AI program. Project managers often overlook the "as is" and want to rush toward the "to be." While the "to be" is alluring, you have to secure resources both now and in the future.
Prepare for that future by collecting metrics on how business was done before and what outcomes resulted. Be open and honest about the metrics you are collecting and understand the limitations of the data. No metrics program is perfect. Collect what you can over a reasonable time period with as little turbulence in the existing program as possible. If the mission your project supports is new, find other organizations that have been executing the same or a similar mission and use their metrics as a benchmark.
Once you have a baseline from the existing program, be sure to replicate its success metrics in your program. In other words, if the legacy program measures the power of its engines in horsepower, then measure your new rocket in horsepower too. If you have new metrics which outperform the legacy way of doing business, by all means use those metrics.
Be prepared to reference these metrics when you report program results. It is considerably more difficult to argue for going back to an old way of doing business when you can show an appreciable improvement in mission execution or cost savings with your AI program.
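As a minimal sketch of such a report, with invented metric names and figures, you might compare the legacy baseline to the AI-assisted program in the units the legacy program already used:

# Hypothetical figures: legacy baseline vs. AI-assisted program,
# both measured in the units the legacy program already used.
baseline = {"reports_per_analyst_day": 4.0, "hours_per_report": 2.1}
current = {"reports_per_analyst_day": 6.5, "hours_per_report": 1.3}

for metric, before in baseline.items():
    after = current[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.1f}%)")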
We always want to use old data when we have to answer new questions. Even when we know the data is not germane to the question posed, we are more comfortable using data we know and trust before we go other places. This is a long-understood and prevalent bias in data analytics. One need look no further than the allegory of the drunk searching for his keys under the streetlight, because that is where the light is, to understand the folly of reflexively using whatever data happens to be available.
Spending the time to understand your problem and find the data pools you need is time well spent. Be sure to interview prospective users and ask what kinds of data they would like to see in the new solution. Do your own market research to see if there are data pools which may add value. If the legacy data is a "must have" for the customer, suggest adding new data sources which can enrich the old data.
It is not worth your time or reputation to work on projects which are not meaningful. Moreover, resource constraints—whether personnel, funds, or infrastructure availability—are a reality. Good stewardship of those resources demands scrutiny over allocation decisions. Therefore, as an innovation professional, you need to learn the art of saying "No."
The best downselect criterion for a project is ensuring the question posed makes a meaningful impact on mission. A project which is "meaningful" usually addresses a core business activity, produces a measurable return, or materially improves mission outcomes.
To illustrate, let's say a retail chain poses the business question, "How can we better estimate our inventory needs against customer demand and purchase sufficient supplies to meet that demand?" In response, you design an AI project which optimizes the inventory management of the retail chain to enable just-in-time delivery. With a meaningful metrics program in place, the ROI is immediately apparent to franchise owners and the customers who buy their inventory. This, of course, benefits a core business activity: sales.
This is a well-qualified and meaningful AI project.
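For a flavor of what sits underneath such a project, here is a deliberately naive sketch, with invented sales figures, of a moving-average demand forecast and reorder check for a single product. A real system would use far richer models and data:

# Invented unit sales for one product over six weeks.
weekly_sales = [120, 135, 128, 150, 142, 138]

window = 4
forecast = sum(weekly_sales[-window:]) / window  # moving-average demand
safety_stock = 0.2 * forecast                    # assumed 20% buffer
reorder_point = forecast + safety_stock

on_hand = 160
if on_hand < reorder_point:
    print(f"Reorder: {on_hand} on hand is below the reorder point of {reorder_point:.0f}.")
else:
    print(f"Hold: {on_hand} on hand covers forecast weekly demand of {forecast:.0f}.")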
A less meaningful question would be, "How can we sell our stuff better?" On its face, it is not clear how answering this question meets any of the above criteria. Say "No" to this client, or work with them to refine the question.
Some in the AI market just want to buy an off-the-shelf AI solution, deploy it, and reap the rewards. Organizations which have difficulty focusing on and following through with their technical projects frequently fall into this "fire and forget" trap.
While there are rudimentary solutions (like robotic process automation) which are better suited to this approach, most AI projects require care and feeding far beyond initial implementation. This stems from the simple reality that data sets change, algorithms drift, and mission needs shift. You must pay attention to all these moving parts over the lifecycle of the program.
A testing regime for data model accuracy is one way to both monitor the health of your program and implement fixes. Testing regimes are not only helpful for ensuring your AI still performs its intended purpose; they also help tackle ethical issues (more on that below).
As you take ownership of your AI project, also assume a posture of good stewardship. In other words, when issues arise, investigate them, implement a fix, or suspend the function. It is better to pause compute operations than to deploy inaccurate results.
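In miniature, such a testing regime might look like the sketch below. The 0.85 accuracy floor and the suspend-by-exception behavior are assumptions for illustration, not fixed rules:

ACCURACY_FLOOR = 0.85  # assumed minimum acceptable accuracy

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def health_check(predictions, labels):
    score = accuracy(predictions, labels)
    if score < ACCURACY_FLOOR:
        # Better to pause than to keep deploying inaccurate results.
        raise RuntimeError(f"Accuracy {score:.2f} below floor; suspend and investigate.")
    return score

# Score a freshly labeled sample from live traffic: 5 of 6 correct (0.83).
try:
    health_check([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
except RuntimeError as err:
    print(err)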
Whether you are outsourcing your AI solution or relying on an internal development team, the skills and talent mix on that team is all that matters. Good ideas, poorly executed, are worse than no solution at all.
As you are designing your project, do an honest assessment of the abilities of your internal resources. Moreover, think about how your project impacts the totality of the organization and whether it makes sense to use those internal resources. For example, if your project aims to reveal a lot of waste in your organization, the technical team building the solution may have a vested interest in hiding incriminating insights. In this case, outsourcing the solution is probably best for all parties.
Estimates vary, but there are somewhere between 1,300 and 2,000 AI companies in the U.S. alone. Given this crowded field, if you are seeking help from the outside, market research is critical. Go to tradeshows, read reviews from trusted sources, and, most importantly, get technical demonstrations from vendors. Be sure to ask lots of questions and take notes as you gather answers. With enough time and attention to your needs, the right vendor will surface. Lastly, trust your instincts when you do not have a good feeling about a particular vendor.
All ethical dilemmas are situational. As evidenced by the many, many, many ethical frameworks for AI, there is a lot to think about when you are building an AI project. There is a lot of discussion around AI ethics, but not much practical advice on how to put ethical AI into practice. As the AI field matures, ethical challenges will only rise in complexity.
Given this trend, you should think about the ethical needs of your program at the design phase, not after it is up and running. A few ethical questions to consider as you build the program: Could bias in the training data skew results against particular groups? Who is accountable when the model gets it wrong? How will sensitive or personal data be protected? What happens to the people whose work the system changes?
These are just some broad, high-level questions which should be asked as you start the project. Practicing sound ethics is a journey, not a destination. Be sure to check in with your staff, ethics professionals, and yourself on a regular basis about the ethical choices and practices of your project.
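One concrete check a testing regime can fold in is comparing model accuracy across the populations the system serves. A minimal sketch, with invented group names, counts, and a five-percentage-point tolerance:

# Invented evaluation counts: (correct predictions, total cases) per group.
results = {
    "group_a": (182, 200),
    "group_b": (158, 200),
}

rates = {group: correct / total for group, (correct, total) in results.items()}
gap = max(rates.values()) - min(rates.values())
print(f"Accuracy by group: {rates}; gap: {gap:.1%}")
if gap > 0.05:  # assumed tolerance of five percentage points
    print("Gap exceeds tolerance; investigate the training data and features.")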
A storied AI practitioner once said to me, "We are talking about a field of computer science based on convolutional mathematics and that is hard... But the people part is harder."
Any AI initiative is doomed to fail if it does not prepare leadership and the workforce for how the technology will change their day-to-day work. Put yourself in their shoes for a moment. The fear of having your job automated away is real. Leaders fear taking on too much risk in an AI market filled with hype. Policies and practices are not always aligned with what you are trying to do with AI. All of these barriers to change are driven by people and their entrenched interests.
The most proactive thing you can do as you start your AI project is to communicate with stakeholders often. Hide nothing about what you are doing from anyone. If someone makes a critical comment or offers less-than-constructive feedback, use it as an opportunity to engage that person on the merits of their concerns. Assume that everyone in the organization has the same or a similar concern. If anyone is brave enough to voice their opinion, that is extremely valuable input for you and what you are trying to do. Remember, these are the people you eventually want using your platform. Listen to what they have to say.
On a related topic, watch out for the engaged leader who overpromises the technology to key audiences. Make time with these leaders to help them understand how AI can help your organization. Give them the talking points they need to convince their subordinates, peers, and superiors. Similarly, beware of leaders who will not invest the time and would rather put an AI bullet on their resume and move on. Engaged and educated leadership is one of the most important elements of any successful AI initiative.
The most common mistake in any technical project is thinking about the customer last. It is easy to become enamored with the technology and forget about the purpose of the project.
Never lose sight of who you are serving and why you are doing it. If necessary, print off pictures of your customers and write their mission in big letters on the whiteboard. This may sound a little over the top, but it becomes absolutely vital as time passes, as competing needs pull at your developers' talents, and as the tactical threatens to overwhelm the strategic.
If you have a discrete, specific goal you are helping the customer achieve, write that down and broadly share it with the team. Use that goal as a mantra to help the team stay focused.
In practice, this technique has two parts when an engineer brings a change request: first, ask how the change helps the customer achieve the stated goal; second, if no one can articulate an answer, set the change aside until someone can.
These methods help reframe the team's value in terms relevant to the customer's success. Winning the work is hard; keeping it is harder. The best way to continue the work is to make your customer's success criteria the same as your own.