Agents come in many forms, many of which respond to prompts humans issue through text or speech. Yet as organizations figure out how generative AI fits into their plans, IT leaders would do well to pay close attention to one emerging category: multiagent systems.

In such systems, multiple agents execute tasks in pursuit of an overarching goal, such as automating payroll, HR processes, or even software development, drawing on text, images, audio, and video processed by large language models (LLMs). Eighty-two percent of leaders surveyed by Capgemini said they expect to integrate multiagent systems into their businesses within the next one to three years, to help automate tasks ranging from generating emails and software code to analyzing data.

How multiagent systems operate depends on the tasks and goals they are designed to accomplish. It might help to think of a multiagent system as a team of conductors operating a train. A lead conductor, a "boss" if you will, doles out tasks to a series of other conductors, or subagents. A human user might query the lead conductor through a classic user interface, such as an LLM prompt window, setting off a chain of events as each subagent handles a different task. The agents may collaborate with one another, with other digital tools and systems, and even with humans, tapping into corporate repositories to gain additional organizational knowledge. Importantly, these systems learn from their task history, human feedback, and other inputs, regularly improving their performance and adapting to changes in their environment.
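The lead-conductor pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not a real agent framework: the orchestrator and subagent names are hypothetical, and the subagents are plain functions standing in for LLM-backed agents.

```python
# Minimal sketch of an orchestrator ("lead conductor") delegating to
# subagents. All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Orchestrator:
    # Registry mapping a task type to the subagent that handles it.
    subagents: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    # Task history kept so the system can later learn from past runs.
    history: List[str] = field(default_factory=list)

    def register(self, task_type: str, agent: Callable[[str], str]) -> None:
        self.subagents[task_type] = agent

    def handle(self, task_type: str, request: str) -> str:
        # Route the user's request to the matching subagent and log the outcome.
        agent = self.subagents.get(task_type)
        if agent is None:
            raise ValueError(f"no subagent registered for {task_type!r}")
        result = agent(request)
        self.history.append(f"{task_type}: {request} -> {result}")
        return result


# Hypothetical subagents for two of the task categories mentioned above.
def email_agent(request: str) -> str:
    return f"[draft email] {request}"


def payroll_agent(request: str) -> str:
    return f"[payroll run] {request}"


boss = Orchestrator()
boss.register("email", email_agent)
boss.register("payroll", payroll_agent)
print(boss.handle("email", "summarize Q3 results for the team"))
```

In a production system each subagent would itself call an LLM or an external tool, and the recorded history would feed back into evaluation and fine-tuning, which is where the "learning from task history" behavior comes from.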
Full report: AI agents loom large as organizations pursue generative AI value.