

How Liquid AI Is Challenging Transformer-Based AI Models

Despite their impressive capabilities, most conventional deep learning models suffer from well-known limitations, such as losing previously learned knowledge after training on a new task (catastrophic forgetting) and losing the ability to absorb new information (loss of plasticity). Liquid neural networks (LNNs) are a relatively recent development that may address some of these limitations, thanks to a dynamic architecture and adaptive, continual learning capabilities. Introduced in 2020 by a team of researchers at MIT, LNNs are a type of time-continuous recurrent neural network (RNN) that can process sequential data efficiently. In contrast to conventional neural networks, which are generally trained once on a fixed dataset, LNNs can adapt to new inputs while retaining knowledge from previously learned tasks, helping them avoid problems like catastrophic forgetting and loss of plasticity.

The ‘liquid’ nature of LNNs derives from their use of a liquid time constant (LTC), which lets the network adapt to new information by dynamically altering the strength of its connections while remaining robust to noise. Notably, the weights of an LNN’s nodes are bounded, so LNNs are not vulnerable to issues like gradient explosion, which can cause a model to become unstable.
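To make the LTC idea more concrete, the sketch below shows a minimal liquid time-constant cell, assuming the continuous-time dynamics described in the original MIT work, dx/dt = -(1/τ + f(x, u))·x + f(x, u)·A, integrated with a simple forward-Euler step. The class name, weight shapes, activation choice, and hyperparameters here are illustrative assumptions, not Liquid AI’s actual implementation.

```python
import numpy as np

class LTCCell:
    """Minimal sketch of a liquid time-constant (LTC) cell.

    Assumes the dynamics dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
    where f is a small learned nonlinearity, tau is a base time constant,
    and A is an equilibrium bias. Solved here with forward Euler.
    """

    def __init__(self, input_size, hidden_size, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (hidden_size, input_size))
        self.W_rec = rng.normal(0, 0.1, (hidden_size, hidden_size))
        self.b = np.zeros(hidden_size)
        self.tau = np.ones(hidden_size)   # base time constants (illustrative)
        self.A = np.ones(hidden_size)     # equilibrium bias (illustrative)
        self.dt = dt

    def f(self, x, u):
        # Input- and state-dependent gate, bounded by tanh.
        return np.tanh(self.W_rec @ x + self.W_in @ u + self.b)

    def step(self, x, u):
        # One Euler step of the LTC ODE. The effective time constant
        # tau / (1 + tau * f) varies with the input, which is what makes
        # the network's dynamics "liquid".
        gate = self.f(x, u)
        dx = -(1.0 / self.tau + gate) * x + gate * self.A
        return x + self.dt * dx

# Usage: run the cell over a short random input sequence.
cell = LTCCell(input_size=3, hidden_size=8)
x = np.zeros(8)
for u in np.random.default_rng(1).normal(size=(20, 3)):
    x = cell.step(x, u)
print(x)
```

Because the gate term both drives the state toward A and shortens the effective time constant, the hidden state stays within a bounded range regardless of how large the inputs become, which is the intuition behind the stability claim above.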

Full report: A startup called Liquid AI claims that its Liquid Foundation Models (LFMs) are superior to the transformer-based models made famous by ChatGPT.