
I believe I have found a useful analogy that can help explain the new approach of Thermodynamic Computing: surfing. Surfers may ride for pleasure or competition, but whatever their purpose, they become one with the ocean, harnessing its raw power. They do not bring gasoline engines or other power sources. They take advantage of the thermodynamic noise of the ocean and ride for free.

All matter and energy in the universe, like our oceans, obeys the laws of thermodynamics, which dictate how energy and heat behave in any system. These principles have inspired a new approach to computing—Thermodynamic Computing—where the natural fluctuations and noise of energy are harnessed to perform computations more efficiently.

Thermodynamic Computing is an exciting new architecture that holds the potential of dramatically improving our ability to leverage the most advanced AI models while reducing energy consumption by orders of magnitude. This post reviews the science and some of the engineering around this approach. 

Background

Today’s computer architecture is digital. Whether it is a computer in your pocket or on your desktop or a server in the cloud, you are using a transistor-based system. If you use a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Tensor Processing Unit (TPU), you are using a device made up of digital switches and digital memory.

New quantum-based architectures are also being developed which leverage the ability to engineer systems that operate on quantum mechanical properties (see our Executive’s Guide To Quantum Effects and Computing). These go beyond simple digital systems since they can exploit quantum effects like superposition, entanglement, and quantum tunneling.

Thermodynamic Computing is a third approach. It is not based on transistors or qubits. It is based on the ability to capture and use the energy of the continuous motion of all matter.

At a molecular level, all matter is always in motion. Even when cooled to the theoretical limit of cold, matter still vibrates. Warmer matter vibrates more. The vibrations of charged particles disturb the electromagnetic field, producing noise that spreads beyond mere molecular collisions.

Designers of digital computers and researchers building quantum systems fight this reality of continuous noise every day. Architects of new thermodynamic computing approaches embrace the noise. The thermodynamic reality of matter and its inherent noise is like a noisy ocean full of waves and constant movement. This reality of the physical world can be used the way a surfer harnesses the noisy waves of the sea. A surfer does not fight the wave; a surfer uses the wave. The goal of Thermodynamic Computing is to smartly channel the constant noise of matter into useful computing power.

I was first exposed to the potential of Thermodynamic Computing by following the work of Guillaume Verdon, a physicist and quantum computing researcher with an impressive list of both published papers and real-world quantum computing experience. He devised and published approaches for quantum neural network-based models, participated in the creation of open-source software to enable rapid prototyping of AI models on quantum systems, and was part of the team building Google’s quantum computing systems.

Verdon presented an overview of his approach to Thermodynamic Computing at OODAcon. Watch his full presentation here. 

According to Guillaume, he grew tired of fighting the noise in quantum computing and decided to switch sides. He would join the noise and use it for a new computing approach. He founded the company Extropic to execute on this vision. Extropic’s team of deeply experienced researchers, with extensive expertise in both quantum and digital systems, is building hardware that leverages heat and noise, the thermodynamics of matter, to enable compute.

Here is more on why this approach is needed now.

Digital systems are being pushed to their limit. The most advanced AI models, the deep neural networks that are enabling progress across every industry and every realm of science and are critical to the future of humanity, are expensive to build and run, and they draw incredible amounts of power. The pacing factors for anyone seeking to leverage new AI approaches are the cost of massive digital systems, the cost of energy to power them, and the inability to build enough infrastructure to meet energy demands. The world needs more AI and is being held back by these resource limitations.

We really face a choice. We can see the future of AI as a Malthusian problem where we must gain control of resources from others to carve out incremental improvements for ourselves, hoping our businesses and governments can use AI to achieve our goals while competing with others. Or we can find new approaches that reduce the resource limitations. Thermodynamic Computing holds the greatest potential I have seen to do just that. 

The Extropic approach to Thermodynamic Computing is to use the noise of matter to reduce the energy demands of the most advanced neural network models. Before describing more of their approach, it is important to understand some key aspects of a type of AI called neural networks.

Advanced neural network models are inspired by the way the human brain works. The models operate as if they are complex arrangements of neurons that pass information to and from one another to make decisions or predictions. Some neurons are designed for inputs and some for outputs, and there are many layers of neurons in between whose values change, according to complex algorithms, depending on how they are trained. Each connection between these neurons has a “weight” which determines how much influence one neuron has on another. When data is passed through the network, the neurons process the information by applying these weights and a mathematical function before passing the data to the next neuron. These networks learn by adjusting their weights through a process of training. During training, the network is given input data and the correct output (for example, images and then descriptions of those images). The network makes predictions, compares them to the known answers, then uses a method called backpropagation to adjust the weights to make better predictions the next time.
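
To make the training loop described above concrete, here is a minimal sketch of a tiny two-layer network learning a toy function with backpropagation. It uses plain NumPy, and the network size, learning rate, and task are invented purely for illustration; this is not Extropic's code or any production framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic task: learn y = sin(x) from samples.
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

# Randomly initialized weights: the "weight" on each connection.
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: each layer applies its weights plus a nonlinearity.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2

    # Compare predictions to the known answers (mean squared error).
    loss = np.mean((pred - y) ** 2)

    # Backpropagation: push the error backwards to get weight gradients.
    d_pred = 2 * (pred - y) / len(X)
    dW2, db2 = h.T @ d_pred, d_pred.sum(0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(0)

    # Adjust the weights so the next predictions are a little better.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```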

Neural networks handle complex data and make predictions in challenging domains. While simple data, such as a bell curve (Gaussian distribution), is easy to visualize, more intricate data—like complex images, financial markets, weather patterns, or weapons simulations—doesn’t follow such straightforward patterns. These data types are non-Gaussian, multi-dimensional, and require advanced techniques for training and prediction.
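
A quick illustration of the difference: a Gaussian is fully summarized by its mean and spread, while even a simple two-mode mixture, a crude stand-in for the messier distributions mentioned above, is not. The numbers below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# A Gaussian is fully described by its mean and standard deviation...
simple = rng.normal(loc=0.0, scale=1.0, size=10_000)

# ...but real-world data is often multimodal: no single mean/variance
# captures it. Here a two-mode mixture stands in for such data.
modes = rng.choice([-2.0, 3.0], size=10_000, p=[0.7, 0.3])
complex_data = modes + rng.normal(scale=0.5, size=10_000)

print(simple.mean(), simple.std())              # describes the Gaussian well
print(complex_data.mean(), complex_data.std())  # hides the two modes entirely
```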

Noise is required both during training and during operation of neural networks. In current neural networks running on digital systems, that noise must be created, at a high energy cost, to randomize the initial weights of neurons. This is critical, since starting with all weights the same will make the network learn too slowly or not at all; randomness breaks symmetry and allows different neurons to learn different things. During training, randomness is also required for random sampling of data, and it is used to improve models by randomly changing which neurons information flows through. In current digital systems, all of this randomness is generated algorithmically, which requires extensive energy, and the heat produced requires still more energy to dissipate. This need for energy is why big players in AI like OpenAI, Google, Microsoft, Amazon, and X are all continuously seeking ways to meet projected future energy demands, and all are concerned with the walls they face.
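
For readers who want to see where this randomness shows up in practice, here is a brief, generic sketch (NumPy, not tied to any particular framework or to Extropic's stack) of the three uses described above: random weight initialization, random sampling of training data, and randomly silencing neurons during training. All sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1) Random weight initialization breaks symmetry: if every weight
#    started identical, every neuron would learn the same thing.
W = rng.normal(0, 0.1, size=(784, 128))

# 2) Random sampling of training data (one minibatch per step).
dataset = rng.normal(size=(10_000, 784))
batch_idx = rng.choice(len(dataset), size=64, replace=False)
batch = dataset[batch_idx]

# 3) Randomly silencing neurons (dropout-style) changes the flow through
#    the network so it cannot over-rely on any single path.
mask = rng.random((64, 128)) > 0.2          # keep ~80% of activations
hidden = np.maximum(batch @ W, 0.0) * mask  # ReLU, then random masking
```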

Here is how Extropic is changing the hardware that enables neural network computing, based on the LitePaper they published on their approach.

Extropic designed and built a supercooled chip that uses a well-known quantum engineering component called a Josephson junction. It is designed to let paired electrons move between circuits in ways traditional computers cannot, without resistance and without losing energy (a quantum effect called quantum tunneling). The electron pairs move with nonlinear, unpredictable timing, which provides the source of randomness required for the non-Gaussian probability distributions and neural network models enabled by other components on the Extropic chip.

The Josephson junction is critical to the chip but represents only one part of its overall design. Other features of the chip are tailored to support a method known as Energy-Based Models (EBMs). For those familiar with AI, EBMs are likely a well-known concept. Groundbreaking work on this class of models earned Geoffrey Hinton and John Hopfield a Nobel Prize, transformed our understanding of AI, and helped fuel the current AI revolution. EBMs are essential in ensuring neural networks are properly trained, as they describe the relationships between variables using an energy or cost function.
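
The core idea is easiest to see in a toy example. Below is a minimal, illustrative sketch (plain Python/NumPy, not Extropic code) of a tiny Hopfield-style energy-based model: the relationships between variables are encoded in an energy function, stored patterns sit at low-energy states, and computation is descent toward those states. The patterns and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two stored binary patterns; the network should "remember" them.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1]])

# Hebbian weights define the energy function E(s) = -0.5 * s^T W s.
# The stored patterns sit at low-energy minima of this landscape.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def energy(state):
    return -0.5 * state @ W @ state

# Corrupt pattern 0, then let the network descend its energy landscape.
state = patterns[0].copy()
state[[1, 5]] *= -1                       # flip two bits as "noise"
for _ in range(5):
    for i in rng.permutation(len(state)):
        field = W[i] @ state
        if field != 0:
            state[i] = 1 if field > 0 else -1

print(state, energy(state))               # recovers pattern 0 at low energy
```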

Extropic designed their chip to enable EBMs to define probability distributions of complex non-Gaussian data more naturally and with less energy than digital systems. This is done on their chip in a way that leverages the natural fluctuations of matter as a computational resource. EBMs are implemented directly as analog circuits. Randomness is built into the analog circuits because it is available from the noise of the matter itself. Since the circuits are analog rather than digital, they can naturally model continuous changes in energy and probability, making them much better suited for tasks like probability sampling than the binary logic of digital systems.
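
To make the sampling idea concrete, here is a hedged software sketch of Langevin-style sampling from an energy function with two wells, a simple non-Gaussian case. In this simulation the random kicks must be generated explicitly; the premise behind the analog approach is that equivalent fluctuations come for free from the physics of the device. The energy function and parameters are illustrative, not taken from Extropic's design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gradient of a double-well energy E(x) = 0.5 * (x^2 - 2)^2,
# a simple two-mode, non-Gaussian distribution when sampled.
def energy_grad(x):
    return 2.0 * x * (x ** 2 - 2.0)

# Langevin dynamics: drift downhill on the energy plus random kicks.
# In software the noise term is generated; on thermodynamic hardware
# the equivalent fluctuations would be supplied by the circuit itself.
step = 0.01
x = 0.1
samples = []
for _ in range(50_000):
    x += -energy_grad(x) * step + np.sqrt(2.0 * step) * rng.normal()
    samples.append(x)

samples = np.array(samples[5_000:])       # drop warm-up samples
print(np.mean(samples > 0), np.mean(samples < 0))  # roughly half per well
```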

Extropic describes this part of their chip as “parameterized stochastic analog circuits.” 

  • Parameterized means the circuit has adjustable parameters, allowing its behavior to be fine-tuned for different tasks (Extropic is building a software layer to enable these adjustments). 
  • Stochastic means the circuit involves some level of randomness or probabilistic behavior; it is intentionally not deterministic. The randomness comes from the noise of the matter itself. 
  • Analog means that, unlike digital circuits that work with discrete values (like 0s and 1s), these circuits process continuous signals. 

In essence, these circuits are analog in nature and also incorporate randomness and tunable parameters, which makes them useful in systems requiring probabilistic behavior or for modeling real-world uncertain processes.
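
As a rough software analogy for a parameterized stochastic element, the sketch below tunes the single bias parameter of a noisy threshold unit until it fires at a target rate. The unit, its parameter, and the numbers are invented for illustration, and the noise that analog hardware would supply physically is simulated here in software.

```python
import numpy as np

rng = np.random.default_rng(11)

# A toy "parameterized stochastic" unit: it fires (+1) when its bias plus
# noise crosses zero. The noise is simulated here; on analog hardware it
# would come from the physics of the circuit itself.
def sample_unit(bias, n):
    return (bias + rng.normal(size=n)) > 0.0

# Tune the single parameter (the bias) so the unit fires 80% of the time.
target, bias, lr = 0.8, 0.0, 0.5
for _ in range(200):
    rate = sample_unit(bias, 1_000).mean()
    bias += lr * (target - rate)      # nudge the parameter toward the goal

print(round(bias, 2), sample_unit(bias, 100_000).mean())
```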

Extropic’s superconducting chips are very energy-efficient because they only draw energy when they are being used or adjusted, making them some of the most energy-efficient processors available. By using stochastic analog hardware purpose-designed for modern AI models, Extropic aims to deliver gains in speed and energy efficiency that could improve systems by thousands to millions of times over traditional systems for many tasks, especially those involving sampling from complex distributions to make predictions.

Just as the surfer harnesses the natural energy of the ocean waves to achieve graceful movement without expending unnecessary force, Thermodynamic Computing aims to utilize the inherent noise and fluctuations of matter to power advanced computations. Rather than resisting these forces, as traditional digital and quantum systems often do, this new architecture embraces them, turning a natural source of randomness into an asset. In doing so, it promises a more energy-efficient future for AI and neural network models, much like the surfer’s effortless glide along the wave. Thermodynamic Computing has the potential to transform our approach to computational power, reducing the barriers imposed by energy limitations and paving the way for advancements that align more closely with the fundamental laws of the universe.


About the Author

Bob Gourley

Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author, and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity, and data analytics. He is CTO of OODA LLC, a unique team of international experts which provides board advisory and cybersecurity consulting services and publishes OODALoop.com. Bob has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.