
This AI Model Never Stops Learning

Modern large language models (LLMs) might write beautiful sonnets and elegant code, but they lack even a rudimentary ability to learn from experience. Researchers at the Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information. The work is a step toward building artificial intelligence models that learn continually, a long-standing goal of the field and something that will be crucial if machines are ever to mimic human intelligence more faithfully. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information, including a user’s interests and preferences.

The MIT scheme, called Self-Adapting Language Models (SEAL), involves having an LLM generate its own synthetic training data based on the input it receives. “The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved with developing SEAL. In other words, Pari says, the question was whether a model’s own output could be used to train it. Adam Zweiger, an MIT undergraduate researcher involved with building SEAL, adds that although newer models can “reason” their way to better solutions by performing more complex inference, the model itself does not benefit from this reasoning over the long term.
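The core loop the article describes, generating synthetic training data from new input and then updating the model's own weights on it, can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the model name, prompt template, and single-gradient-step update are assumptions for the sake of the example, and the full SEAL framework involves more machinery than this shows.

```python
# Minimal sketch of a self-adapting loop, assuming a Hugging Face causal LM.
# Model name, prompt wording, and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; SEAL's experiments used larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def self_adapt(new_information: str) -> None:
    """One round: generate synthetic training data from the input,
    then take a gradient step on that self-generated data."""
    # 1. Ask the model to restate the new information in its own words,
    #    a stand-in for SEAL-style self-generated training data.
    prompt = f"Rewrite the following as study notes:\n{new_information}\nNotes:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        generated = model.generate(**inputs, max_new_tokens=128, do_sample=True)
    synthetic_text = tokenizer.decode(generated[0], skip_special_tokens=True)

    # 2. Update the model's own parameters on the synthetic text
    #    using the standard language-modeling loss.
    batch = tokenizer(synthetic_text, return_tensors="pt", truncation=True)
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

self_adapt("The user prefers concise answers with code examples.")
```

The key design point, and what distinguishes this from ordinary fine-tuning, is that the training text is produced by the model itself from the incoming information rather than supplied by a human curator.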

Full research: MIT researchers unveil Self-Adapting LLMs (SEAL), a framework that enables LLMs to keep improving by generating their own training data based on the input they receive.