
I built UnrestrictedIntelligence.com as a proof of concept. I wanted to learn, and I also wanted to enable more members of the OODA network to experiment with some of the capabilities of large language models that can be accessed through OpenAI.

The core functionality of the site was established while on a flight. By using code that writes code (a “no-code” platform called Bubble) and learning how to interface with OpenAI's famous GPT models through their API, I was able to build a first version and have it online in a matter of hours (it was announced here in early January). Since then, more functionality has been added. It can now demonstrate capabilities of large language models in domains including:

  • Intelligence Analysis
  • Competitive Intelligence Assessments
  • Cyber Threat Intelligence
  • Corporate Board Governance
  • Academic Professors/Tutors

There have now been more than 10,000 questions asked via the site, each resulting in an AI-generated response.
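
For readers curious about the mechanics, each of those questions is essentially one round trip to OpenAI's API. Below is a minimal Python sketch of that round trip. It is purely illustrative: the live site is built in Bubble, and the model name and question text here are assumptions.

```python
# Minimal sketch of one question -> answer round trip against OpenAI's
# chat completions REST endpoint. Illustrative only: the live site is built
# in Bubble, and the model name and sample question are assumptions.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code keys

def ask(question: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed model; use whatever you have access to
            "messages": [{"role": "user", "content": question}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize the key indicators an intelligence analyst should watch for."))
```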

The site has also been the source of many lessons for me personally. Here are a few of my lessons learned:

Cambrian Explosion: A Cambrian Explosion in new capabilities is coming. I am not a developer, but by using no-code tools and learning to send queries to and get results from an AI provider (OpenAI, via their API), I was able to take my domain knowledge and create a tool of use to myself and my community. There is no barrier to entry here (if I can do this, anyone can). Millions of people can build micro-sites or personalized AI tools like this that add value to their niche, and that is what I expect will happen.

Job Crushing: Yes, many jobs and entire career paths are going to change. The Unrestricted Intelligence site is just a proof of concept, but it makes clear that analysts can perform better with AI-enabled tools. If analysts can perform better, will the same number be required for the same amount of work? Will some organizations decide they need no analysts at all? What about lawyers? What if lawyers enabled with tools like these are so productive that fewer lawyers are required? Every career path should be scrutinized. Remember, you are the only one responsible for your own career. Whatever you do, you may want to think through how you would do it with AI tools.

Prompt Engineering: This is a cute phrase that really just means asking well-formed questions. I loved hearing the term prompt engineering the first time, and heard early on from friends and business associates that people would start to make a living by prompt engineering. Sure enough, within weeks social media was on fire with the term, and some people even began selling lessons on prompt engineering. It is important to get your questions right with current large language models, but is that really engineering? Additionally, as time goes on these systems get better and better. Maybe the point is that we should all be good at asking good questions, whether we are asking a computer, a person, or a team of analysts.

OpenAI: This company should probably change its name to reflect who they really are now. Hats off to them for changing the world, but what is open about their approach? What are their models? How do they work? How are they trained? How are errors found? How do the views and biases of their leadership team affect the results? What is their roadmap? What do they do with the data submitted to them as queries? How is user data protected? Almost all of these answers are so closed away that the company should probably be called ClosedAI.

Enterprise Data: Perhaps the greatest thing about OpenAI and tools like my web app is that they can help people think about what tools like this could do over their own data. Currently OpenAI is very bad at that. They have a new plugin architecture that will enable more of it, and solutions leveraging Azure and OpenAI APIs deliver some limited capabilities here, but it is all sub-optimized and nowhere near what will be required for any enterprise that values the security of its data (see What Leaders Need To Know About Natural Language Processing). It is time for enterprise leaders to articulate their requirements for AI over their data (my hope is the Unrestricted Intelligence web app can help users think through what these requirements are). For example, enterprises should begin articulating a need for neural search over all internal data holdings. AI-enabled summarization should also be an enterprise requirement, as should the ability to operate on all data holdings.
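
To make the neural search requirement concrete, here is a minimal Python sketch of embedding-based search over a handful of stand-in documents using OpenAI's embeddings endpoint. The model name, sample documents, and in-memory ranking are assumptions for illustration; a real enterprise deployment would use a vector store, access controls, and data that never leaves its boundary without approval.

```python
# Sketch of neural (embedding-based) search over a small set of stand-in documents.
# Illustrative assumptions: the model name, the sample documents, and ranking in memory.
import os
import math
import requests

API_KEY = os.environ["OPENAI_API_KEY"]

def embed(texts):
    # Call OpenAI's embeddings endpoint and return one vector per input text.
    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "text-embedding-ada-002", "input": texts},
        timeout=60,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

documents = [  # stand-ins for internal data holdings
    "Q3 incident report: phishing campaign against finance staff.",
    "Board minutes: approval of the new data governance policy.",
    "Vendor assessment: cloud storage provider security review.",
]
doc_vectors = embed(documents)
query_vector = embed(["Which documents discuss phishing incidents?"])[0]

ranked = sorted(zip(documents, doc_vectors),
                key=lambda d: cosine(query_vector, d[1]), reverse=True)
for doc, vec in ranked:
    print(round(cosine(query_vector, vec), 3), doc)
```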

AI today is NQR, Not Quite Right: The significant benefits of well-trained language models hold great potential. The mistakes, errors, and outright hallucinations pull in the other direction. Being aware of the types of flaws in current systems can help ensure decision-makers build in methods to mitigate risks. The best intelligence analysts already know to work to reduce their own bias and guard against cognitive traps. Great leaders know not to believe everything they think. Now we need to understand that our new computer tools can also be biased and lead us astray.

AI Helps Build AI Tools: As mentioned, I am not a developer. I am also not a power user of no-code tools. I found I was able to get the Unrestricted Intelligence site up, running, and performing, but when I wanted to add more advanced capabilities it required more technical skill than I could muster. I could, however, open ChatGPT and ask it well-formed queries that helped me add new functionality to my app. AI helped me build a better AI tool. This has also been a huge help on my other coding projects.

Hackers Are Going To Hack: I love the hacking spirit. I have always admired the great persistent actors who seek to push systems to their limits and explore the realm of the possible with technology. Yes, of course I mean hackers who operate with authorization. The Unrestricted Intelligence site was designed as a proof of concept, and people were encouraged, and of course authorized, to ask any question. Some decided to use that in ways that surprised me. One creative thinker kept submitting increasingly complex queries designed to generate a response that would disclose something meant to be hidden from users. This hidden part, which is different for each section of the site, is a long pre-prompt that is sent to OpenAI before the actual user prompt. I was able to follow along in the logs as this creative genius finally pulled it off. I sure learned from that.
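
For anyone wondering how a “hidden” pre-prompt can leak at all, the simplified sketch below shows the basic wiring with placeholder text (not the site's actual code): the pre-prompt and the user's question are just two pieces of text handed to the model together, so a persistent, creative questioner can sometimes talk the model into repeating the instructions it was given.

```python
# Simplified illustration of how a per-section hidden pre-prompt is combined with
# the user's question. Placeholder text and structure; not the site's actual code.
HIDDEN_PREPROMPTS = {
    "intel_analysis": "You are an experienced intelligence analyst. <long hidden instructions>",
    "cyber_threat_intel": "You are a cyber threat intelligence analyst. <long hidden instructions>",
}

def build_messages(section: str, user_question: str) -> list:
    # The model receives both strings as ordinary text in one conversation.
    # Nothing structurally stops it from quoting the hidden instructions back,
    # which is why a sufficiently clever query can eventually expose them.
    return [
        {"role": "system", "content": HIDDEN_PREPROMPTS[section]},
        {"role": "user", "content": user_question},
    ]
```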

Asking Good People For Feedback Helps: I got so much feedback from friends and associates on this, including friends pointing out bugs. This point has always applied to all applications, but I sense it will become even more important to ask friends and associates to point out issues and provide feedback in the AI age. We humans really have to stick together.

You Can Use AI To Keep Content A Bit More Civil: Some people on the Internet get their kicks from being nasty. I have no idea why they would want to use my site to ask nasty questions, but some did. Maybe it was just to test whether the replies would differ from those of OpenAI and ChatGPT. I wanted to reduce this sort of thing on my site and thought about changing the background prompts to list the worst bad words and, if someone used them, reply that “this is not an appropriate question.” But that list became too long and would be easy to get around. I instead changed my code to simply say, “If the request is lewd stop processing and reply that it is not a suitable topic.” That worked really well. There was one person who surprised me with a way of asking a really twisted question that got a really twisted response, but it is so twisted I can’t really discuss it further. For the most part, the idea of using AI to keep content more civil seems to have worked.
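
In code terms, the change amounted to one guard sentence appended to each section's background prompt rather than a list of banned words. A simplified sketch follows; the surrounding wiring is assumed, and only the guard wording is the one quoted above.

```python
# Simplified sketch of the instruction-based filter described above. Instead of
# enumerating bad words, one guard sentence is appended to every section's hidden
# pre-prompt and the model itself decides whether to decline. Wiring is illustrative.
GUARD = "If the request is lewd stop processing and reply that it is not a suitable topic."

def guarded_preprompt(section_preprompt: str) -> str:
    # Append the guard instruction to whatever hidden pre-prompt the section uses.
    return section_preprompt.rstrip() + " " + GUARD
```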

The Turing Test: This test, proposed by Alan Turing, was originally called the imitation game. It was conceived as a test to see if responses from a computer were indistinguishable from those of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. For seventy years this concept has helped humans think through notions of computer intelligence. But now we have pretty much shown that computers can pass this test and still not be intelligent. We need new measures to consider. We should also understand that just as all AIs are not the same, all humans are not the same either.


These and many other lessons will inform our continued examination of Artificial Intelligence. Please continue to provide your insights, and please continue to test UnrestrictedIntelligence.com.

Bob Gourley

About the Author

Bob Gourley

Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author, and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity, and data analytics. He is CTO of OODA LLC, a unique team of international experts which provides board advisory and cybersecurity consulting services. OODA publishes OODALoop.com. Bob has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.