Eighteen months after restricting employee use of generative artificial intelligence tools like ChatGPT, JPMorgan Chase CEO Jamie Dimon rolled out an AI assistant of the company's own making. In an almost full-circle fashion, the solution is built on technology from ChatGPT's maker, OpenAI. Dubbed LLM Suite, the service has already been released to 60,000 of the banking giant's employees for tasks like writing reports and crafting emails.

This shift from restricting employees' use of gen AI to building a guardrail-equipped internal solution is becoming a common theme among companies, both big and small, that are trying to leverage the power of the technology. More than a quarter (27%) of organizations have at least temporarily banned public gen AI applications, according to Cisco's latest Data Privacy Benchmark report, and the majority of businesses have limited which solutions employees can use and how they can use them. Meanwhile, many of those same companies report that employees use the restricted applications anyway, according to a recent survey from cybersecurity company ExtraHop. That gap makes providing a sanctioned alternative with ample safeguards a clear necessity.

The main concerns about employees' use of gen AI are leaks of sensitive information, hallucinations, and industry compliance, according to a recent report from enterprise data management company Veritas.

The outputs that AI platforms generate don't come out of thin air. On the sensitive-information front, some large language models store user inputs (the text a person types into the chat for a gen AI platform to respond to) and use them to train, or improve, the underlying model. That practice can put sensitive information about a company or its customers at risk, which is why many organizations opt for bans or restrictions until they can figure out how to manage the technology themselves.
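To make the guardrail idea concrete, here is a minimal sketch of one common safeguard: scrubbing obviously sensitive strings from a prompt before it is sent to any model endpoint. The redaction patterns and the `call_internal_llm` placeholder are illustrative assumptions for this example, not a description of how LLM Suite or any vendor's gateway actually works.

```python
import re

# Illustrative patterns only; a production guardrail would rely on a
# real data-loss-prevention classifier, not a handful of regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT_NO": re.compile(r"\b\d{10,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely-sensitive substrings with typed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def call_internal_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever gateway a company routes
    # employee requests through; echoes the prompt so the sketch runs.
    return f"(model response to: {prompt!r})"

def guarded_completion(prompt: str) -> str:
    """Scrub the prompt, then hand it to the model endpoint."""
    return call_internal_llm(scrub(prompt))

if __name__ == "__main__":
    print(guarded_completion(
        "Draft an email to jane.doe@example.com about account 4111111111111111."
    ))
```

Real deployments would typically layer a dedicated classifier, access controls, and audit logging on top of anything this simple, but the principle is the same: sensitive inputs are stripped or tokenized before they can reach a model that might retain them.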