A hint to the future arrived quietly over the weekend. For a long time, I’ve been discussing two parallel revolutions in AI: the rise of autonomous agents and the emergence of powerful Reasoners since OpenAI’s o1 was launched. These two threads have finally converged into something really impressive – AI systems that can conduct research with the depth and nuance of human experts, but at machine speed. OpenAI’s Deep Research demonstrates this convergence and gives us a sense of what the future might be. But to understand why this matters, we need to start with the building blocks: Reasoners and agents.

For the past couple of years, whenever you used a chatbot, it worked in a simple way: you typed something in, and it immediately started responding word by word (or, more technically, token by token). The AI could only “think” while producing these tokens, so researchers developed tricks to improve its reasoning – like telling it to “think step by step before answering.” This approach, called chain-of-thought prompting, markedly improved AI performance. Reasoners essentially automate this process, producing “thinking tokens” before giving you an answer. This was a breakthrough in at least two important ways.
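Before getting to those, here is what the chain-of-thought trick looks like in practice. This is a minimal sketch using the OpenAI Python SDK; the model name, question, and prompt wording are my own illustrative choices, not anything prescribed by OpenAI.

```python
# A minimal sketch of chain-of-thought prompting, assuming the OpenAI Python SDK.
# The model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A store sells pens in packs of 12 for $3. How much do 60 pens cost?"

# Plain prompt: the model starts answering immediately, token by token.
direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: the same question, but the model is told to reason
# step by step before committing to an answer.
chain_of_thought = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + "\nThink step by step before answering.",
    }],
)

print(direct.choices[0].message.content)
print(chain_of_thought.choices[0].message.content)
```

A Reasoner like o1 bakes this behavior in: you send the plain question, and the model generates its own thinking tokens before producing the final answer, with no special prompting required.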