Researchers at Harvard and MIT have achieved a breakthrough in continuous quantum computing, operating a 3,000-qubit neutral-atom system for over two hours without loss of coherence — a feat that marks the transition from pulsed, fragile experiments to persistent, reconfigurable quantum processors.
Continuous operation eliminates one of the field’s most stubborn bottlenecks: atom loss.
The architecture provides a blueprint for scalable, error-corrected quantum systems, with implications for quantum networking, precision metrology, and atomic clocks.
As highlighted by the Harvard Gazette in “Clearing significant hurdle to quantum computing,” this achievement “pushes quantum computing from theory toward continuous operation in real-world conditions.”
Summary
The Harvard–MIT team, led by Mikhail Lukin and Vladan Vuletić, engineered a dual optical-lattice conveyor system capable of continuously refreshing a 3,000-atom quantum array while maintaining coherence.
The system replenishes atomic qubits faster than they decohere or are lost — creating the first steady-state, coherence-preserving quantum platform.
By reloading 30,000 qubits per second, the team maintained a functional 3,000-qubit array for over two hours, vastly exceeding previous lifetimes (~60 seconds).
Shielding mechanisms and in situ dynamical decoupling preserved coherence across zones even during atom reloading and imaging processes.
This continuous operation paradigm may soon underpin quantum fault-tolerant logic, metrological clocks, and quantum network nodes.
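The balance the summary describes, reloading qubits faster than they are lost, can be sketched as a one-line rate equation, dN/dt = R − N/τ, whose steady state is N = R·τ. In the sketch below, the per-atom retention time τ is an illustrative assumption chosen so the fixed point lands at the reported 3,000-qubit array size; it is not a number from the research.

```python
# Illustrative rate-equation sketch of a continuously reloaded atom array.
# The retention time tau is an assumption picked so the steady state matches
# the reported 3,000-qubit array; the real loss dynamics are more complex.

R = 30_000.0   # qubits loaded per second (reload rate reported in the article)
tau = 0.1      # assumed mean per-atom retention time, in seconds
dt = 1e-4      # integration step, in seconds

n = 0.0        # start from an empty array
for _ in range(int(2.0 / dt)):      # simulate 2 seconds of operation
    n += (R - n / tau) * dt         # dN/dt = R - N / tau

steady_state = R * tau              # analytic fixed point of the rate equation
print(round(n), round(steady_state))  # prints: 3000 3000
```

With these numbers the array settles at R·τ = 3,000 qubits regardless of its starting occupancy, which is the qualitative point: in continuous operation, array size is set by the ratio of reload rate to loss rate rather than by the initial load.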
Figure caption: Continuous operation of a coherent 3,000-qubit system.

a, A cloud of laser-cooled atoms is transported over 0.5 m from a separate MOT region into the science region via two optical-lattice conveyor belts crossed at an angle. In the science region, the optical lattice serves as an atomic reservoir, from which a two-dimensional array of optical tweezers repeatedly extracts atoms into the ‘preparation zone’. Here, atoms are laser-cooled, rearranged into a defect-free array, initialized in their qubit state, and then transferred into a large-scale storage tweezer array (the ‘storage zone’). The dual-lattice scheme avoids a direct line of sight between the tweezer arrays and the MOT location, and enables fully concurrent preparation and replenishment of the atomic reservoir. Inset: relevant atomic levels of 87Rb, where F denotes the hyperfine level and mF the magnetic sublevel. During qubit preparation, storage qubits are protected from near-resonant photon scattering on the 5S1/2 → 5P3/2 transition by light-shifting the excited state (‘shielding’). Single-qubit gates are implemented via optical Raman transitions that drive the clock states (Methods).

b, Cumulative number of atoms obtained by N repeated tweezer extractions from a single lattice reservoir, where the tweezer filling fraction declines after about 70 repeated extractions owing to reservoir depletion (see also Extended Data Fig. 4). For reference, the grey line indicates 50% array filling. Inset: histogram of tweezer filling fractions for the first 30 extractions from the reservoir. Notably, no laser cooling is applied during the tweezer loading process.

c, Cumulative number of atoms and qubits obtained by tweezer extraction from repeatedly replaced lattice reservoirs. The grey markers indicate an atom flux of about 300,000 atoms per second after light-assisted collisions; the brief interruptions originate from the second transport stage of reservoir replacement, during which no reservoir is present. Performing the qubit preparation sequence after each extraction yields a continuous qubit flux of 15,000 qubits per second with rearrangement (R; orange) and 30,000 qubits per second without rearrangement (green). Error bars represent the standard error of the mean across 10 repetitions.
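The depletion behavior in panel b, high tweezer filling that collapses once the reservoir runs low, can be caricatured with a toy saturation model. Every number below (reservoir size, tweezer-site count, saturation constant) is an assumption chosen for illustration; the figure reports only the qualitative decline after roughly 70 extractions.

```python
# Toy model of repeated tweezer extractions from a finite lattice reservoir.
# All numbers below (reservoir size, tweezer sites, saturation constant) are
# illustrative assumptions; the figure reports only that filling declines
# after about 70 extractions as the reservoir depletes.

reservoir = 65_000.0   # assumed initial atom number in the lattice reservoir
sites = 1_000          # assumed number of tweezer sites filled per extraction
k_sat = 20_000.0       # assumed saturation constant for the capture fraction

fills = []
for _ in range(100):
    frac = reservoir / (reservoir + k_sat)        # capture saturates when dense
    reservoir = max(0.0, reservoir - frac * sites)  # depleted by what's extracted
    fills.append(frac)

half_point = next(i for i, f in enumerate(fills) if f < 0.5)
print(f"filling drops below 50% after ~{half_point} extractions")
```

The model's filling fraction stays roughly flat while the reservoir is dense, then falls off as it empties, reproducing the shape (though not the physics) of the measured decline.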
Key Points
Continuous operation benefits deep-circuit quantum algorithms, atomic clocks, and quantum networking, where persistent coherence is critical.
Continuous Qubit Reloading: Optical tweezers reload up to 30,000 qubits per second while preserving coherence — two orders of magnitude above prior benchmarks.
Architecture Design: Two optical conveyor belts feed a reservoir of rubidium atoms into a “science region,” where qubits are extracted, cooled, initialized, and stored without disrupting nearby qubits.
Extended Coherence: Quantum coherence persisted during simultaneous qubit replacement and measurement through light shielding and dynamical decoupling.
Scalability: Future improvements — faster initialization, larger preparation zones, and metasurface optics — could yield tens of thousands of continuously operated qubits.
Toward Fault-Tolerance: The team envisions combining this architecture with Rydberg-mediated entangling gates to create error-corrected, reconfigurable quantum processors.
Expanded Arrays: AI-optimized control and FPGA-based rearrangement could accelerate reloading rates fivefold, enabling hundreds of logical qubits with near-zero logical error rates (10⁻⁸).
Quantum Metrology and Networking: The architecture’s stability and continuity will enhance entanglement-based sensing and quantum internet nodes, reducing noise in precision systems.
Industrial Implications: Continuous atomic control could extend to quantum clocks, GPS timing, and secure communications systems, bridging quantum physics and operational technology.
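The 10⁻⁸ logical error rate mentioned under Expanded Arrays can be put in context with the standard surface-code scaling heuristic p_L ≈ A·(p/p_th)^((d+1)/2). The prefactor A, threshold p_th, and physical error rate p below are assumed round numbers, not values from the research; the sketch only shows how code distance, and hence physical-qubit overhead, grows with the error target.

```python
# Heuristic surface-code scaling sketch: p_L ~ A * (p / p_th)^((d+1)/2).
# A, p_th, and p are assumed round numbers; 1e-8 is the logical error
# target quoted in the article's Key Points.

A = 0.1        # assumed prefactor
p_th = 1e-2    # assumed error-correction threshold
p = 1e-3       # assumed physical error rate per operation
target = 1e-8  # logical error target from the article

d = 3
while A * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2     # surface-code distances are odd

physical_per_logical = 2 * d * d - 1  # d^2 data + (d^2 - 1) ancilla qubits
print(f"distance {d}, ~{physical_per_logical} physical qubits per logical qubit")
```

Under these assumptions each logical qubit costs a few hundred physical qubits, so hundreds of logical qubits imply physical arrays well beyond today's sizes, which is why continuously operated arrays of tens of thousands of qubits are the relevant milestone.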
Recommendations Based on this Quantum Breakthrough
Monitor Neutral-Atom Developments: Investors and policymakers should prioritize neutral-atom architectures as credible contenders alongside superconducting and photonic qubits.
Support Hardware–Software Integration: Encourage AI-based optimization tools for real-time qubit alignment, calibration, and control.
Leverage Dual-Use Potential: National labs and defense agencies should explore continuous quantum architectures for secure timing, sensor fusion, and resilient communications.
Coordinate Standards: Cross-institutional standards for continuous-operation qubit systems will accelerate commercialization and interoperability.
The Reality of Quantum Innovation – Examining What is Real and What is Hype at OODAcon: Many in the quantum community consider Richard Feynman the father of the quantum computing revolution. He was the first to formally articulate the idea that simulating quantum systems efficiently would require a computer built on quantum mechanical principles. In his 1981 lecture “Simulating Physics with Computers,” he explained that classical computers struggle to simulate quantum phenomena because of exponential complexity, and he suggested that quantum computers, machines that themselves operate under quantum laws, could overcome that inefficiency.
Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.