Start your day with intelligence. Get The OODA Daily Pulse.
In December 2022 we wrote about how many in the OODA network had been examining the new capabilities of ChatGPT, concluding that we seemed to be witnessing another inflection point in how computers support humanity. To help accelerate our community’s ability to consider which use cases new LLMs are best at addressing we began exchanging information with leaders in the intelligence community on things ChatGPT could do. Additionally, we created a web application that leveraged the OpenAI APIs called Unrestricted Intelligence.
Now after a full year of running the Unrestricted Intelligence application we have over 1100 registered users of the application. Over 20,000 questions have been asked of the system. Building the application Unrestricted Intelligence generated several lessons we provided in early 2033, including:
Government Enterprises Are 25 Times Behind The Best in AI: This is just a rough estimate of course, but here is the logic. The average company made up of knowledge workers is about 5 times behind the best Silicon Valley companies in use of AI. The average government enterprise is about 5 times behind the average company. So 5×5=25 times worse than where they should be. Government agencies should move faster to get AI into their enterprises.
The Word Decontrol Should Be Part of AI Discussions: Related to the fact that enterprises need to move faster is the reality that too many regulations, rules, guidelines, standards, mandates and even laws apply to smart use of AI. We need to go faster and that means we need to think of ways to reduce the control. We need to decontrol.
The Future of LLMs is going to include much more tailoring and contextualizing: The community is building new architectures already, including approaches like Retrieval Augmented Generation (RAG). There will also be a huge need for more persona/role-based LLM trained to provide insights that matter. Customization is huge.
Beware of Dependencies: At this stage in the evolution of LLM we have few choices for providers. And it seems that no matter what the path picked there will be dependencies that are going to have your solution at the mercy of others. Just realize that risk and decide what to do about it.
Do Not Seek Perfection: Too many AI theorists have been advocating for perfect transparency into AI models. Many also argue for perfect knowledge on what data the models are trained on. Others argue for LLM that have zero hallucination. There are many risks to using AI, including those we just mentioned. But many use cases can be addressed without perfection. Understand the nature of your solution and the risks and decide if they are appropriate for your use cases.
A Coming Cambrian Explosion: A Cambrian Explosion in new capabilities is coming. I am not a developer, but by using no-code tools and learning to send and get results via computer interfaces to an AI provider (OpenAI via their API), I was able to take my domain knowledge and create a tool of use to myself and my community. There is no barrier to entry here (if I can do this, anyone can). Millions can build micro-sites or personalized AI tools like this that add value to their niche and that is what I expect will happen.
Job Crushing: Yes many jobs and entire career paths are going to change. The Unrestricted Intelligence site is just a proof of concept. But it makes it clear that analysts can perform better with AI enabled tools. If analysts can perform better, will the same number be required for the same amount of work? Will there be organizations that decide they need no analysts now? What about lawyers? What if lawyers enabled with tools like these are so productive that fewer lawyers are required? Every career path should be scrutinized. Remember, you are the only one responsible for your own career. Whatever you do, you may want to think through how you do it with AI tools.
Prompt Engineering: This is a cute phrase that really just means asking well formed questions. I loved hearing the term Prompt Engineering the first time. And heard from friends and business associates early on how people would start to make a living by prompt engineering. Sure enough, within weeks social media was on fire with the term and some people even began selling lessons on prompt engineering. It is important to get your questions right with current large language models. But is that really engineering? Additionally, as time goes on these systems get better and better. Maybe the point here is that we all should be good at asking good questions no matter whether we are asking a computer or person or team of analysts.
OpenAI. This company should probably change its name to reflect who they really are now. Hats off to them for changing the world. But what is Open about their approach? What are their models? How do they work? How are they trained? How are errors found? How do the views and bias of their leadership team impact their results? What is their roadmap? What do they do with the data submitted to them as queries? How is user data protected? Most all of these answers seem so closed away the company should probably be called ClosedAI.
Enterprise Data: Perhaps the greatest thing about OpenAI and tools like my web app are it can help people think about what tools like this can do over their own data. Currently OpenAI is very bad at that. They have a new plugin architecture that will enable more of that, and solutions leveraging Azure and OpenAI APIs deliver some limited capabilities here. But it is all sub optimized and nowhere near what will be required for any enterprise that values security over their data (see What Leaders Need To Know About Natural Language Processing). It is time for enterprise leaders to articulate their requirements for AI over their data (my hope is the Unrestricted Intelligence web app can help users think through what these requirements are). For example, enterprises should begin articulating a need for neural search over all internal data holdings. AI enabled summarization should also be an enterprise requirement. So should an ability to operate on all data holdings.
AI today is NKR, Not Quite Right: The significant benefits of well trained language models hold great potential. The mistakes and errors and outright hallucination hold potential in the other direction. Being aware of the types of flaws in current systems can help ensure decision-makers build in methods to mitigate risks. The best intelligence analysts already know to work to reduce their own bias and guard against cognitive traps. Great leaders know to to believe everything they think. Now we need to understand our new computer tools can also be biased and lead us astray.
AI Helps Build AI Tools: As mentioned I am not a developer. I am also not a power user of no-code tools. I found I was able to get the Unrestricted Intelligence site up and running and performing but when I wanted to add more advanced capabilities it required more tech skills than I could muster. I could, however, open ChatGPT and ask it well formed queries that helped me add new functionality to my app. AI helped me build a better AI tool. This has also been a huge help to my other coding projects.
Hackers Are Going To Hack: I love the hacking spirit. I have always admired the great persistent actors who seek to push systems to their limits and explore the realm of the possible with technology. Yes of course I mean hackers who operate with authorization. The Unrestricted Intelligence site was designed as a proof of concept and people were encouraged and of course authorized to ask any question. Some decided to use that in ways that surprised me. One creative thinker kept submitting increasingly complex queries designed to generate a response that would disclose something meant to be hidden from users. This hidden part is different for each section of the site. It is a long pre-prompt that is sent to OpenAI before the actual user prompt. I was able to read in the logs as this creative genius finally did it. I sure learned from that.
Asking Good People For Feedback Helps: I got so much feedback from friends and associates on this. This includes friend pointing out bugs. This point has always applied to all applications, but I sense it will become even more important to ask friends and associates to point out issues and provide feedback in the AI age. We humans really have to stick together.
You Can Use AI To Keep Content A Bit More Civil: Some people on the Internet get their kicks from being nasty. I have no idea why they would want to use my site to ask nasty questions, but some did. Maybe it was just to test out if replies would be different from those from OpenAI and ChatGPT. I wanted to reduce this sort of thing on my site, and thought about changing the background prompts to list the worst bad words and if someone uses those give a reply that says “this is not an appropriate question.” But that became too long and would be easy to get around. I instead changed my code to simple say “If the request is lewd stop processing and reply that it is not a suitable topic”. That really worked well. There was one person who surprised me with a way of asking a really twisted question that got a really twisted response, but it is so twisted I can’t really discuss it further. For the most part, the idea of using AI to keep content more civil seems to have worked.
The Turing Test: This test proposed by Alan Turing was originally called the imitation game. It was conceived as a test to see if responses from a computer were indistinguishable from that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. For seventy years now this concept has helped humans think through concepts of computer intelligence. But now we have pretty much shown that computers can pass this test, and still they are not intelligent. We need new measures to consider. We should also understand that just as all AI’s are not the same, all humans are not the same either.
We are likely going to disestablish the Unrestricted Intelligence website in the coming weeks. The value we deliver to users via the site can be delivered using other means (including directly via OpenAI or for OpenAI Chat GPT Pro users, via the new GPT features they offer). There is also a far better, more functional LLM trained on 1000’s of hand curated references and speeches, writings of Matt Devost at http://altzero.ai which can deliver valuable assessments to any user.