
Brain-inspired, Light-enabled, Circuit-fueled: Neuromorphic Computing Innovation, Intel’s Chip Platform and Open-Source Developer Ecosystem

Innovation in “semiconductor computational capability, resources and size, weight, and power consumption (SWaP)” is the centerpiece of the five-year, $1.5 billion DARPA Electronics Resurgence Initiative and the $52 billion 2021 CHIPS Act.

DARPA has a track record of success with semiconductor innovation in collaboration with academia and the private sector, having seeded the field of neuromorphic computing from 2008 to 2014 to the tune of $52 million in a collaboration with the Department of Energy, various universities, and IBM Research’s Almaden lab. The DARPA-funded Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program developed the advanced brain-inspired chip – IBM TrueNorth – which was part of the foundational research for IBM Watson and remains the central architecture of IBM’s brain-inspired chip research efforts.

As DARPA’s 2014 announcement of the TrueNorth breakthrough put it: “DARPA-funded researchers have developed one of the world’s largest and most complex computer chips ever produced—one whose architecture is inspired by the neuronal structure of the brain and requires only a fraction of the electrical power of conventional chips.”

Non-von Neumann Architectures

In 2016, Staff Scientist Jeffrey Shainline and his research team at NIST noted that neuromorphic computing has a pedigree in the “many foundational concepts in information theory and computing…developed beginning in the 1930s and 1940s through the work of Turing, von Neumann, Shannon, and others. Given the variety of proposed approaches to computing, it is somewhat surprising that the current landscape of computing technologies exclusively uses the von Neumann architecture.”

In a 2018 announcement, an IBM neuromorphic computing research team out of Zurich went on to explain their seminal breakthrough in a Novel Synaptic Architecture for Brain-Inspired Computing: “When the brilliant scientist John von Neumann built today’s computer architecture, which powers nearly all the world’s computers, he kept the memory and the processing separately. This means data needs to constantly shuttle back and forth, generating heat and requiring a lot of energy – it is an efficiency bottleneck. The brain of course does not have different compartments, which is why it is so efficient. But this did not deter teams from sticking with von Neumann’s design to build a neural network and while they have some success, the efficiency of these systems remains low – you simply can’t beat nature.”
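
To make that shuttle cost concrete, here is a back-of-envelope sketch in Python. The energy constants are order-of-magnitude assumptions (broadly consistent with published CMOS estimates), not figures from IBM or NIST:

```python
# Back-of-envelope illustration of the von Neumann "shuttle" cost described
# above. The energy constants are order-of-magnitude assumptions, not
# measured values from any of the chips discussed in this article.

ALU_OP_PJ = 0.1        # assumed energy of one 32-bit arithmetic op (pJ)
DRAM_FETCH_PJ = 100.0  # assumed energy to fetch one 32-bit word from DRAM (pJ)

N = 1_000_000          # a 1000x1000 matrix-vector product: ~1e6 multiply-adds
compute_pj = N * ALU_OP_PJ          # the arithmetic itself
traffic_pj = N * DRAM_FETCH_PJ      # every weight shuttled in from memory once

print(f"compute: {compute_pj/1e6:.2f} uJ, data movement: {traffic_pj/1e6:.2f} uJ")
print(f"data movement costs {traffic_pj/compute_pj:.0f}x the arithmetic")
```

Even with generous assumptions for the arithmetic, moving the operands dominates the budget; that is the bottleneck the brain-inspired designs below try to remove by placing computation in or next to memory.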

Back in 2016, the NIST researchers had concluded the same: “There has long been an interest in the relationship between information, computation, and cognition. Computing architectures drawing inspiration from biological neural systems have been considered for decades…The recent surge in deep learning and neural networks, marked by advances in hardware, applications, and theory has increased our understanding of the importance of such systems for solving complex problems.”

Forbes magazine, at the time of the 2014 announcement by DARPA and IBM Research, explained the development in an accessible fashion: “Each core of the chip is modeled on a simplified version of the brain’s neural architecture. The core contains 256 ‘neurons’ (processors), 256 ‘axons’ (memory) and 64,000 ‘synapses’ (communications between neurons and axons). This structure is a radical departure from the von Neumann architecture that’s the basis of virtually every computer today…”
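
As an illustration of that core structure, the following is a minimal, hypothetical sketch of a neurosynaptic core in Python: axons deliver binary spikes, a synapse matrix routes them to neurons, and neurons integrate and fire. It is scaled down and simplified, not a model of the actual TrueNorth circuit:

```python
import numpy as np

# Toy sketch of one TrueNorth-style neurosynaptic core, scaled down for
# readability (16 axons x 16 neurons rather than 256 x 256). Binary synapses,
# unit weights, and a fixed threshold are simplifying assumptions; the real
# chip adds configurable weights, leak, and other neuron modes.

rng = np.random.default_rng(0)
N_AXONS, N_NEURONS, THRESHOLD = 16, 16, 3.0

synapses = (rng.random((N_AXONS, N_NEURONS)) < 0.25).astype(float)
potential = np.zeros(N_NEURONS)  # membrane potential of each neuron

def core_tick(axon_spikes: np.ndarray) -> np.ndarray:
    """One discrete time step: integrate incoming spikes, fire, reset."""
    global potential
    potential = potential + axon_spikes @ synapses  # each spike adds charge
    fired = potential >= THRESHOLD
    potential[fired] = 0.0                          # reset neurons that fired
    return fired

spikes_in = (rng.random(N_AXONS) < 0.5).astype(float)  # random input spikes
print("output spikes:", core_tick(spikes_in).astype(int))
```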

Non-silicon Neuromorphic Innovation Ahead

In 2014, neuromorphic computing remained silicon-based, as the DARPA-sponsored research team explained: “Computers are nowhere near as versatile as our own brains…inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses.

Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortex-like sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real-time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.”
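
The headline figures in that quote are round numbers, and they check out against the per-core counts in the Forbes description above; a quick sanity check:

```python
# Checking the quoted TrueNorth numbers with simple arithmetic.
cores = 4096
neurons_per_core = 256
synapses_per_core = 256 * 256           # 256 axons x 256 neurons = 65,536

neurons = cores * neurons_per_core      # ~ "1 million programmable neurons"
synapses = cores * synapses_per_core    # ~ "256 million configurable synapses"
print(f"{neurons:,} neurons, {synapses:,} synapses")

# Energy per video frame at the quoted operating point:
power_w, fps = 0.063, 30
print(f"{power_w / fps * 1e3:.1f} mJ per 400x240 frame")   # ~2.1 mJ
```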

Since 2014, researchers and technology companies have broadened the scope of neuromorphic computing innovation to include developments in:

Optoelectronic Intelligence: Interesting work is being done by Shainline and his team at NIST. Shainline describes what he calls optoelectronic intelligence: “brain-inspired computing that leverages light for communication in conjunction with electronic circuits for computation” (in contrast to semiconducting electronics). Shainline and his team note: “…to design and construct hardware for general intelligence, we must consider principles of both neuroscience and very-large-scale integration. For large neural systems capable of general intelligence, the attributes of photonics for communication and electronics for computation are complementary and interdependent. Using light for communication enables high fan-out as well as low-latency signaling across large systems with no traffic-dependent bottlenecks.

For computation, the inherent nonlinearities, high speed, and low power consumption of Josephson circuits are conducive to complex neural functions. Operation at 4K enables the use of single-photon detectors and silicon light sources, two features that lead to efficiency and economical scalability…a concept for optoelectronic hardware, beginning with synaptic circuits, continuing through wafer-scale integration, and extending to systems interconnected with fiber-optic white matter, potentially at the scale of the human brain and beyond.”
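
To see why single-photon signaling is attractive in such a system, a back-of-envelope comparison helps. The 1550 nm wavelength and the ~1 fJ/bit electrical-link figure below are our own illustrative assumptions, not values from the NIST papers:

```python
# Back-of-envelope: energy of a single near-infrared photon, the unit of
# communication in the superconducting optoelectronic approach.

PLANCK = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8          # speed of light, m/s
wavelength = 1550e-9      # assumed telecom/silicon-photonics wavelength, m

photon_energy_j = PLANCK * C / wavelength
print(f"single photon: {photon_energy_j:.3e} J "
      f"({photon_energy_j/1.602e-19:.2f} eV)")

electrical_bit_j = 1e-15  # assumed ~1 fJ per bit for a short electrical link
print(f"a ~1 fJ electrical bit costs ~{electrical_bit_j/photon_energy_j:.0f} "
      f"photons' worth of energy")
```

A single photon carries on the order of a tenth of an attojoule, which is why pairing single-photon detectors with light sources promises such efficient communication, if the cryogenic engineering challenges can be met.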

In a separate paper, the NIST team goes on to describe the challenges of their innovation: “Any large-scale neuromorphic system striving for complexity at the level of the human brain and beyond will need to be co-optimized for communication and computation. Such reasoning leads to the proposal for optoelectronic neuromorphic platforms that leverage the complementary properties of optics and electronics. Starting from the conjecture that future large-scale neuromorphic systems will utilize integrated photonics and optics for communication in conjunction with analog electronics for computation, we consider two possible paths towards achieving this vision. The first is a semiconductor platform based on analog CMOS circuits and waveguide-integrated photodiodes. The second is a superconducting approach that utilizes Josephson junctions and waveguide-integrated superconducting single-photon detectors.

Ultimately, both platforms hold potential, but their development will diverge in important respects. Semiconductor systems benefit from a robust fabrication ecosystem and can build on extensive progress made in purely electronic neuromorphic computing but will require III-V light-source integration with electronics at an unprecedented scale, further advances in ultra-low capacitance photodiodes, and success from emerging memory technologies.

In contrast, superconducting systems place near theoretically minimum burdens on light sources (a tremendous boon to one of the most speculative aspects of either platform) or provide new opportunities for integrated, high-endurance synaptic memory. However, superconducting optoelectronic systems will also contend with interfacing low-voltage electronic circuits to semiconductor light sources, the serial biasing of superconducting devices on an unprecedented scale, a less mature fabrication ecosystem, and cryogenic infrastructure.”

In his NIST bio, Jeff Shainline captures the big picture: “In the last half-century, we’ve witnessed transformative changes in society enabled by computing technology. Presently, a new revolution in computation is taking place in which we are rethinking everything from hardware to architecture. For example, the integration of optical components with traditional electronics is occurring rapidly. Photons and electrons offer complementary attributes for information processing. Electronic effects are superior for computation and memory, while light is excellent for communication and I/O. Leveraging the strengths of both becomes crucial for systems with distributed memory and massive connectivity.

My research is at the confluence of integrated photonics and superconducting electronics with the aim of developing superconducting optoelectronic networks. A principal goal is to combine waveguide-integrated few-photon sources with superconducting single-photon detectors and Josephson circuits to enable a new paradigm of large-scale neuromorphic computing. Photonic signaling enables massive connectivity. Superconducting circuitry enables extraordinary efficiency. Computation and memory occur in the superconducting electronic domain, while communication is via light. Thus, the system utilizes the strengths of photons and electrons to enable high-speed, energy-efficient neuromorphic computing at the scale of the human brain.”

DARPA’s Photonics in the Package for Extreme Scalability (PIPES): It seems the DARPA ERI is on the same page as its colleagues over at NIST. In 2019, Mark Rosker was named the new director of the Microsystems Technology Office, which heads up the ERI at DARPA. Rosker spoke to IEEE Spectrum, describing the expanded programs at ERI since its inception in 2017: “Our first program is one called PIPES, which stands for Photonics in the Package for Extreme Scalability. And what this is about is very high-bandwidth optical signaling for digital interconnects. Photonic interconnects are something that everyone understands, but we’re really talking about driving very high bandwidth photonics all the way down to the package level…this would be useful for achieving sensationally high transfer rates all the way to the package.” Commercialization of such an approach has been elusive to date, as IEEE Spectrum reported in 2018: silicon photonics “stumbles at the last meter: We have fiber to the home, but fiber to the processor is still a problem.”
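
For a feel of the “sensationally high transfer rates” at stake, here is some illustrative wavelength-division-multiplexing arithmetic; the channel counts and line rates are assumptions chosen for scale, not PIPES program specifications:

```python
# Illustrative WDM arithmetic for package-level optical I/O of the kind
# PIPES targets. All three parameters below are assumed for illustration.

wavelengths_per_fiber = 16   # assumed WDM channels on one fiber
gbps_per_wavelength = 50     # assumed line rate per channel, Gb/s
fibers_per_package = 8       # assumed fiber-attach count per package

per_fiber = wavelengths_per_fiber * gbps_per_wavelength
total = per_fiber * fibers_per_package
print(f"{per_fiber} Gb/s per fiber, {total/1000:.1f} Tb/s per package")
```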

Silicon-based Nanotechnology: MIT engineers have designed a “brain-on-a-chip”: “…smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain. The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. ‘So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,’ says Jeehwan Kim, associate professor of mechanical engineering at MIT. ‘Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.’” This MIT breakthrough builds on the previously mentioned multi-memristive synaptic architecture approach developed by IBM researchers.
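
The computational appeal of memristors is that a crossbar of them performs a matrix-vector multiply in place, inside the memory itself. A minimal sketch follows, with illustrative sizes and values rather than the MIT or IBM device parameters:

```python
import numpy as np

# Minimal sketch of the compute-in-memory idea behind memristor crossbars:
# weights live in the array as conductances G, inputs arrive as row voltages V,
# and Ohm's and Kirchhoff's laws sum the currents on each column wire, so the
# array computes a matrix-vector product in place. Values are illustrative;
# real devices add noise, drift, and nonlinearity.

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (4 rows x 3 cols)
V = np.array([0.2, 0.0, 0.1, 0.3])        # voltages applied to the 4 row wires

I = G.T @ V                               # current collected per column (amperes)
print("column currents (A):", I)

# Cross-check against an explicit Ohm's-law sum per column:
I_loop = np.array([sum(G[r, c] * V[r] for r in range(4)) for c in range(3)])
assert np.allclose(I, I_loop)
```

Because the multiply-accumulate happens where the weights are stored, the von Neumann shuttle described earlier simply never occurs.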

Intel’s Loihi 2 Chip Platform and Lava Open-source Software Framework

Intel recently announced the second-generation neuromorphic chip from its Neuromorphic Computing Lab in Oregon. The Intel hardware and software do not represent the commercialization of any of the photonics or nanotechnology innovations discussed here, and the Loihi 2 chip is notable for more than its technical breakthroughs. Intel seeks to build a robust neuromorphic computing business ecosystem: a complex, dynamic, and adaptive community “of diverse players who create new value through increasingly productive and sophisticated models of both collaboration and competition.” Within this ecosystem, the Loihi chip hardware and Lava software platforms will make resources and participants more accessible to each other, defining the protocols and standards that enable a loosely coupled, modular approach to business process design. The platform allows organizations to collaborate with a community of passionate developers, which accelerates the speed and scalability of product development and release efforts.

The commercial promise of Intel’s Loihi Chip and Lava open-source software framework has antecedents in the ecosystem design of the initial DARPA/IBM Research SyNAPSE Program, which was “a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment. This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.”

The DARPA SyNAPSE ecosystem became the IBM TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications, “in use at over 30 universities and government/corporate labs…a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers. Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume, and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.”

IBM TrueNorth contributed to deep learning innovation and the promise of convolutional networks for fast, energy-efficient neuromorphic computing. This innovation is now part of the AI Hardware research initiative at IBM and contributed to the launch of the IBM Watson Platform and Ecosystem in 2015.

Besides this ecosystem and platform go-to-market strategy, Loihi 2 (a second-generation neuromorphic research chip) and Lava (an open-source software framework for developing neuro-inspired applications) are, according to Intel, also notable for their technical breakthroughs and benchmarks:

“About Loihi 2: The research chip incorporates learnings from three years of use with the first-generation research chip and leverages progress in Intel’s process technology and asynchronous design methods.

Advances in Loihi 2 allow the architecture to support new classes of neuro-inspired algorithms and applications – while providing up to 10 times faster processing, up to 15 times greater resource density with up to 1 million neurons per chip, and improved energy efficiency.

Benefitting from a close collaboration with Intel’s Technology Development Group, Loihi 2 has been fabricated with a pre-production version of the Intel 4 process, which underscores the health and progress of Intel 4. The use of extreme ultraviolet (EUV) lithography in Intel 4 has simplified the layout design rules compared to past process technologies. This has made it possible to rapidly develop Loihi 2.

The Lava software framework addresses the need for a common software framework in the neuromorphic research community. As an open, modular, and extensible framework, Lava will allow researchers and application developers to build on each other’s progress and converge on a common set of tools, methods, and libraries. Lava runs seamlessly on heterogeneous architectures across conventional and neuromorphic processors, enabling cross-platform execution and interoperability with a variety of artificial intelligence, neuromorphic and robotics frameworks. Developers can begin building neuromorphic applications without access to specialized neuromorphic hardware and can contribute to the Lava code base, including porting it to run on other platforms.”
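
Lava itself is a Python framework. Without presuming its exact API, the kind of spiking model it targets can be sketched in plain NumPy as a leaky integrate-and-fire (LIF) layer, the canonical neuron model in the Loihi literature; every name and constant below is our own illustrative choice, not Lava code:

```python
import numpy as np

# A plain-NumPy leaky integrate-and-fire (LIF) layer: the canonical spiking
# neuron model targeted by Loihi-class hardware. This is a generic sketch of
# the computation, not Lava's actual API.

rng = np.random.default_rng(2)
N_IN, N_OUT, STEPS = 8, 4, 20
W = rng.normal(0.5, 0.2, size=(N_IN, N_OUT))  # synaptic weights (assumed)

voltage = np.zeros(N_OUT)
DECAY, THRESHOLD = 0.9, 1.0   # leak factor and firing threshold (assumed)

for t in range(STEPS):
    in_spikes = (rng.random(N_IN) < 0.3).astype(float)  # random input spikes
    voltage = DECAY * voltage + in_spikes @ W            # leak, then integrate
    out_spikes = voltage >= THRESHOLD
    voltage[out_spikes] = 0.0                            # reset on spike
    if out_spikes.any():
        print(f"t={t:02d} spikes at neurons {np.flatnonzero(out_spikes)}")
```

Note that computation is event-driven: between spikes nothing happens except leak, which is the property neuromorphic hardware exploits for energy efficiency.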

Use Case: “‘Investigators at Los Alamos National Laboratory have been using the Loihi neuromorphic platform to investigate the trade-offs between quantum and neuromorphic computing, as well as implementing learning processes on-chip,’ said Dr. Gerd J. Kunde, staff scientist, Los Alamos National Laboratory. ‘This research has shown some exciting equivalences between spiking neural networks and quantum annealing approaches for solving hard optimization problems. We have also demonstrated that the backpropagation algorithm, a foundational building block for training neural networks and previously believed not to be implementable on neuromorphic architectures, can be realized efficiently on Loihi. Our team is excited to continue this research with the second generation Loihi 2 chip.'”

What Next?

Will neuromorphic computational capabilities drive the innovation economy in the U.S. and globally? For Shainline and his research team at NIST, “investigation of novel architectures is only now becoming urgent as we reach the end of Moore’s law scaling.” Conversely, Peter Denning and Ted Lewis, in their article “Exponential Laws of Computing Growth,” present a rigorous quantitative and technical argument that reports of the death of Moore’s Law are greatly exaggerated. Kurzweil’s Law of Accelerating Returns and the exponential technologies framework likewise suggest that platform and ecosystem drivers will propel Moore’s Law full steam ahead well into the next decade.

Finally, the entire debate around the urgency of chip innovation can also be couched as a national security issue, as the DARPA ERI states:

“…the rapid increase in the cost and complexity of advanced microelectronics design and manufacture is challenging a half-century of progress under Moore’s Law, prompting a need for alternative approaches to traditional transistor scaling. Meanwhile, non-market foreign forces are working to shift the electronics innovation engine overseas and cost-driven foundry consolidation has limited Department of Defense (DoD) access to leading-edge electronics, challenging U.S. economic and security advantages. Moreover, highly publicized challenges to the nation’s digital backbone are fostering a new appreciation for electronics security—a longtime defense concern.

Building on the tradition of other successful government-industry partnerships, ERI aims to forge forward-looking collaborations among the commercial electronics community, defense industrial base, university researchers, and the DoD to address these challenges. There is significant historical precedent to suggest the viability of this approach, as each wave of modern electronics development has benefitted from the combination of Defense-funded academic research and commercial sector investment.”

The Intel Loihi 2 chip platform and Lava open-source software framework represent a strategic positioning for Intel at a time when the company needs a big win. Hardware-level proprietary innovation coupled with network effects, scalability, and ease of collaboration is exactly how Intel hopes to scale strategically and win market share over time through an open-source ecosystem of chip platform innovation.

For companies and organizations exploring business model transformation or new areas of value creation through innovative technologies, neuromorphic computing represents a nascent space for innovation, complete with opportunities for early-stage exploration and commercialization efforts within your industry vertical – or in new markets.

For more on Intel’s Loihi 2 and Lava:

Intel Advances Neuromorphic with Loihi 2, New Lava Software Framework

Taking Neuromorphic Computing with Loihi 2 to the Next Level Technology Brief (intel.com)

Brain-inspired chips could soon help power autonomous robots and self-driving cars | Science

Intel’s Neuromorphic Chip Gets A Major Upgrade – IEEE Spectrum

Related Reading:

Black Swans and Gray Rhinos

Now more than ever, organizations need to apply rigorous thought to business risks and opportunities. In doing so it is useful to understand the concepts embodied in the terms Black Swan and Gray Rhino. See: Potential Future Opportunities, Risks and Mitigation Strategies in the Age of Continuous Crisis

Cybersecurity Sensemaking: Strategic intelligence to inform your decision-making

The OODA leadership and analysts have decades of experience in understanding and mitigating cybersecurity threats and apply this real-world practitioner knowledge in our research and reporting. This page on the site is a repository of the best of our actionable research as well as a news stream of our daily reporting on cybersecurity threats and mitigation measures. See: Cybersecurity Sensemaking

Corporate Sensemaking: Establishing an Intelligent Enterprise

OODA’s leadership and analysts have decades of direct experience helping organizations improve their ability to make sense of their current environment and assess the best courses of action for success going forward. This includes helping establish competitive intelligence and corporate intelligence capabilities. Our special series on the Intelligent Enterprise highlights research and reports that can accelerate any organization along their journey to optimized intelligence. See: Corporate Sensemaking

Artificial Intelligence Sensemaking: Take advantage of this mega trend for competitive advantage

This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process. This includes a special guide for executives seeking to make the most of AI in their enterprise. See: Artificial Intelligence Sensemaking

COVID-19 Sensemaking: What is next for business and governments

From the very beginning of the pandemic we have focused on research on what may come next and what to do about it today. This section of the site captures the best of our reporting plus daily intelligence as well as pointers to reputable information from other sites. See: OODA COVID-19 Sensemaking Page.

Space Sensemaking: What does your business need to know now

A dynamic resource for OODA Network members looking for insights into the current and future developments in Space, including a special executive’s guide to space. See: Space Sensemaking

Quantum Computing Sensemaking

OODA is one of the few independent research sources with experience in due diligence on quantum computing and quantum security companies and capabilities. Our practitioner’s lens on insights ensures our research is grounded in reality. See: Quantum Computing Sensemaking.

The OODAcast Video and Podcast Series

In 2020, we launched the OODAcast video and podcast series designed to provide you with insightful analysis and intelligence to inform your decision making process. We do this through a series of expert interviews and topical videos highlighting global technologies such as cybersecurity, AI, quantum computing along with discussions on global risk and opportunity issues. See: The OODAcast

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.