Conspicuous, controversial, and notorious is China's unwillingness to participate in the global benchmarking lists of the world's most powerful supercomputers. So the "Top500" list goes out with the U.S. on top, and the lack of Chinese participation then fuels rumors that China in fact possesses capabilities that transcend all the participants in the global benchmarking lists. We take a comparative look at some supercomputer specs to see what the compute reality really is vis-à-vis Chinese supercomputing capabilities as reported by a few reputable outlets. While on the topic, we also took a look at Russian efforts and the latest on Tesla's Dojo.
As of May 2022, the U.S. held the top spot in the supercomputer race:
“A massive machine in Tennessee has been deemed the world’s speediest. Experts say two supercomputers in China may be faster, but the country didn’t participate in the rankings.”
The United States has regained a coveted speed crown in computing with a powerful new supercomputer in Tennessee, a milestone for the technology that plays a major role in science, medicine and other fields.
Frontier, the name of the massive machine at Oak Ridge National Laboratory, was declared on Monday to be the first to demonstrate performance of one quintillion operations per second — a billion billion calculations — in a set of standard tests used by researchers to rank supercomputers. The U.S. Department of Energy several years ago pledged $1.8 billion to build three systems with that “exascale” performance, as scientists call it.
But the crown has a caveat. Some experts believe that Frontier has been beaten in the exascale race by two systems in China. Operators of those systems have not submitted test results for evaluation by scientists who oversee the so-called Top500 ranking. Experts said they suspected that tensions between the United States and China may be the reason the Chinese have not submitted the test results.
“There are rumors China has something,” said Jack Dongarra, a distinguished professor of computer science at the University of Tennessee who helps lead the Top500 effort. “There is nothing official.”
“Frontier remains No. 1 in the TOP500 but Aurora with Intel’s Sapphire Rapids chips enters with a half-scale system at No. 2.”
The 62nd edition of the TOP500 reveals that the Frontier system retains its top spot and is still the only exascale machine on the list. However, five new or upgraded systems have shaken up the Top 10.
Housed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, Frontier leads the pack with an HPL score of 1.194 EFlop/s – unchanged from the June 2023 list. Frontier utilizes AMD EPYC 64C 2GHz processors and is based on the latest HPE Cray EX235a architecture. The system has a total of 8,699,904 combined CPU and GPU cores. Additionally, Frontier has an impressive power efficiency rating of 52.59 GFlops/watt and relies on HPE’s Slingshot 11 network for data transfer.
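The HPL score and power-efficiency figure quoted above imply Frontier's total power draw, which makes for a quick sanity check. This is a back-of-the-envelope sketch using only the two numbers reported in the list (the resulting ~22.7 MW figure is derived here, not taken from the source):

```python
# Implied power draw of Frontier from the TOP500 figures quoted above.
hpl_eflops = 1.194               # sustained HPL performance, EFlop/s
efficiency_gflops_per_w = 52.59  # reported power efficiency, GFlops/watt

hpl_gflops = hpl_eflops * 1e9    # 1 EFlop/s = 1e9 GFlop/s
power_watts = hpl_gflops / efficiency_gflops_per_w
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")  # ~22.7 MW
```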
The new Aurora system at the Argonne Leadership Computing Facility in Illinois, USA, entered the list at the No. 2 spot – previously held by Fugaku – with an HPL score of 585.34 PFlop/s. That said, it is important to note that Aurora’s numbers were submitted with a measurement on half of the planned final system. Aurora is currently being commissioned and will reportedly exceed Frontier with a peak performance of 2 EFlop/s when finished.
Aurora is built by Intel and is based on the HPE Cray EX – Intel Exascale Compute Blade, which uses Intel Xeon CPU Max Series processors and Intel Data Center GPU Max Series accelerators. These communicate through HPE’s Slingshot-11 network interconnect.
Across the entire list, 20 new systems use Intel Sapphire Rapids CPUs, bringing the total to 25 and making Sapphire Rapids the most common CPU among new systems. However, of the 45 new systems on the list, only four use the corresponding Intel GPU, with Aurora by far the largest.
For the complete TOP500 Ranking, see The Top500 Ranking Press Release.
The first version of what became today's TOP500 list started as an exercise for a small conference in Germany in June 1993. A second version of the list was compiled in November 1993 for the SC93 conference. Comparing both editions to see how things had changed, the authors realized how valuable this information was and continued to compile statistics about the market for HPC systems based on it. The TOP500 is now a much-anticipated, much-watched, and much-debated twice-yearly event.
Tom's Hardware is the source of the following December 2023 and January 2024 reports, respectively, with details of China's supercomputer efforts:
“China breaks the ExaFlop barrier with homegrown chips, again.”
“China has made significant strides in supercomputing in recent years, and the pace seems to be accelerating. The Tianhe-2, developed by China’s National University of Defense Technology, was the world’s fastest supercomputer from 2013 to 2015, but it used Intel’s Xeon CPUs. Now, the country has announced a new supercomputer that is said to offer twice the performance of its predecessor.
China’s supercomputer efforts remain shrouded behind a veil of secrecy, but a recent announcement on newsgd claims the new Tianhe Xingyi system has “doubled in many aspects, including general CPU computing power, network capabilities, storage capabilities, and application service capabilities.” This new system matches the general description of the Tianhe-3 supercomputer that was originally planned for release in 2019, touting 1.7 ExaFlops of peak performance from a dual-chip FeiTeng Arm and Matrix accelerator node architecture.
Since Chinese universities can no longer legally obtain American high-performance hardware, they have to use domestic processors, which are apparently becoming quite powerful. The Next Platform reports that the new Tianhe Xingyi can achieve exascale performance using domestic chips called MT3000, but the specifications and performance numbers remain a mystery. In fact, the report does not mention any accelerators, only the processor itself.”
“It’s unclear how fast it is, in part because we don’t know what hardware it’s running.”
“China Telecom claims it has built the country’s first supercomputer constructed entirely with Chinese-made components and technology (via ITHome). Based in Wuhan, the Central Intelligent Computing Center supercomputer is reportedly built for AI and can train large language models (LLM) with trillions of parameters. Although China has built supercomputers with domestic hardware and software before, going entirely domestic is a new milestone for the country’s tech industry.
It’s hard to guess what might be inside this supercomputer, given the lack of details. On the CPU side of things, it may use Zhaoxin’s KaiSheng KH-40000 server CPUs, which are now available in domestically-made servers. There are also other candidates though, like Loongson’s 32-core 3D5000 and Phytium’s 64-core Feiteng Tengyun S2500. All three chips differ greatly in respect to architecture, with Zhaoxin using x86 like Intel and AMD. Loongson uses a derivative of MIPS, and Phytium runs Arm’s architecture.
Similarly, there are plenty of options for Chinese-made GPUs, with possibilities ranging from Moore Threads, Loongson, and Biren. Of the three companies, Moore Threads is the most recent to launch a new GPU in the form of its MTT S4000, which is already planned to see use in the KUAE Intelligent Computing Center. Loongson’s LG200 arrived about two weeks before the S4000, though its claimed performance would make for a very slow supercomputer. Biren’s BR100 would be a heavyweight champion, but it’s unclear if it ever returned to production anywhere after TSMC stopped making it due to U.S. sanctions.
Regardless of the actual hardware inside China Telecom’s new supercomputer, that it is reportedly made from top to bottom with Chinese hardware is the most important part. Relying solely on Chinese technology likely means the Central Intelligent Computing Center is disadvantaged in some or many areas. But technological independence is a key goal for China, even if it means swapping out cutting-edge Western hardware for slower but natively-made components. U.S. sanctions won’t have much of an impact if China can manage to do everything itself.”
And just a few days ago, techradarpro reported:
“Tianhe-3 – ‘Xingyi’ – is the latest in a series of supercomputers built by China’s National University of Defense Technology.”
“China has quietly launched the Tianhe-3 supercomputer, which is believed to be the most powerful machine currently in existence. The machine, built for the National Supercomputer Center in Guangzhou, has been shrouded in secrecy (as you would expect from a supercomputer developed and built in China), sparking plenty of speculation. The Tianhe-3, also known as “Xingyi,” is thought to be a significant leap forward in supercomputing technology, potentially surpassing the capabilities of the upcoming “El Capitan” supercomputer being developed by Hewlett Packard Enterprise and AMD for the Lawrence Livermore National Laboratory.
In November 2023, TheNextPlatform ran an analysis of the Top500 supercomputer rankings which suggested that the Tianhe-3 could have a peak performance of 2.05 exaflops and a sustained performance of 1.57 exaflops on High Performance LINPACK. This, the site said, would make it the "most powerful machine yet assembled on Earth"…The Tianhe-3 is the latest in a series of supercomputers built by the National University of Defense Technology in China. Its predecessors, the Tianhe-1 and Tianhe-2, also made significant impacts on the supercomputing world, with the Tianhe-2 still ranking among the top 30 supercomputers even after several years of operation.
One of the most intriguing aspects of the Tianhe-3 is its processor. A recent case study on programming the Matrix-3000 (MT-3000) accelerators, submitted to arXiv, provided some insight into the machine’s architecture. Delving into this, TheNextPlatform concluded that the Tianhe-3 uses a hybrid device with CPU and accelerator compute as well as three different kinds of memory, two of which are located in the compute complex.”
Tom's Hardware also appears to be a best-in-class source on the future of compute. Regarding Russia:
“Russia builds MSU-270 supercomputer for AI and HPC research.”
“Lomonosov Moscow State University (MSU) has launched its new supercomputer named MSU-270 with a peak computational power of 400 ‘AI’ PetaFLOPS. This machine will be used for various artificial intelligence (AI) and high-performance computing (HPC) applications and for training large AI models. The MSU-270 is based on the ‘latest graphics accelerators,’ though MSU decided not to mention where they come from.
“While 400 PetaFLOPS is formidable performance, it should be noted that these are the so-called ‘AI’ PetaFLOPS, which possibly means the FP16 data format. Russia’s highest-performing supercomputer has an Rmax performance of around 21.5 FP64 PetaFLOPS and an Rpeak performance of approximately 29.5 FP64 PetaFLOPS. Unfortunately, MSU does not disclose the FP64 performance of its MSU-270 machine (though it will be orders of magnitude lower than 400 PetaFLOPS), but it is probably the country’s fastest machine.”
“Plan hinges on ability to obtain the accelerator hardware.”
“Russia has an ambitious plan to build up to ten supercomputers by 2030, each potentially housing 10,000 to 15,000 Nvidia H100 GPUs. From a computing perspective, this would provide the nation with performance on a scale similar to that used to train ChatGPT. Formidable in general, a system featuring so many H100 GPUs could produce some 450 FP64 PetaFLOPS, or half an ExaFLOP, a level of supercomputer performance that, so far, only the U.S. has achieved. Spearheaded by the ‘Trusted Infrastructure’ team, the Russian project promises to push the boundaries of computational capabilities. However, the desired AI and HPC GPUs would come from Nvidia, an American company.
The war Russia started in Ukraine has led the U.S. to restrict tech exports to Russia, creating a gaping hole in the procurement strategy for processors like Nvidia GPUs. Thus, the question that hangs in the air is: how could Russia bypass these restrictions, since it would seem impossible to smuggle thousands of valuable AI and HPC GPUs?…Currently, Russia has just seven supercomputers that rank in the global top 500. For comparison [based on the October 2023 rankings]: the USA has 150 machines on the list, China has 134, Germany has 36, and Japan has 33. As of June 2023, Russia was 12th in the rankings.”
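The scale of the Russian plan can be sanity-checked with simple arithmetic. A sketch, assuming roughly 34 TFLOPS of FP64 per H100 SXM (the published vector-unit spec; Tensor Core FP64 is roughly double, so treat the per-GPU figure as an assumption):

```python
# Rough aggregate FP64 estimate for a system of 10,000-15,000 H100 GPUs.
# 34 TFLOPS FP64 per GPU is the H100 SXM vector spec; actual sustained
# performance at scale would be lower due to interconnect overhead.
fp64_tflops_per_gpu = 34

for gpu_count in (10_000, 15_000):
    total_pflops = gpu_count * fp64_tflops_per_gpu / 1000  # TFLOPS -> PFLOPS
    print(f"{gpu_count:>6} GPUs -> ~{total_pflops:.0f} FP64 PetaFLOPS")
```

The midpoint of that range lands near the ~450 FP64 PetaFLOPS figure quoted above, i.e., about half an exaflop of peak FP64.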
“…The Dojo system is a Tesla-designed supercomputer made to train the machine learning models behind the EV maker’s self-driving systems. The computer takes in data captured by vehicles and processes it rapidly to improve the company’s algorithms. Analysts have said Dojo could be a key competitive advantage, and earlier this year Morgan Stanley estimated it could add $500 billion to Tesla’s value. Musk has said the carmaker plans to invest more than $1 billion on Project Dojo by the end of 2024. The Tesla leader first shared plans for the supercomputer in 2019 before formally announcing it in 2021.
Dojo is powered by a custom D1 chip designed by [Tesla’s Dojo supercomputer project lead Ganesh Venkataramanan, who recently left the company], [Peter] Bannon and a slew of other big names from the silicon industry. Venkataramanan previously worked at Advanced Micro Devices Inc., while Tesla has several other veterans from the chip designer on staff.
In recent weeks, Tesla also installed hardware for Dojo at a centralized location in Palo Alto, California, two of the people said. Dojo has relied on multiple data centers in different locations.
Tesla previously relied on supercomputers from Nvidia Corp. to power its AI-based systems, while Dojo would compete with offerings from Hewlett Packard Enterprise Co. and IBM. In July, Tesla said it started production of the Dojo supercomputer system. It’s being manufactured by Taiwan Semiconductor Manufacturing Company Ltd., the same chipmaker Apple uses.”
The WSJ captured the potential of it all when describing the scale and potential capabilities of the No. 2 supercomputer on the TOP500 list – and the coming convergence of super-computation and AI:
“Called Aurora, the supercomputer’s high-performance capabilities will be matched with the latest advances in artificial intelligence. Together they will be used by scientists researching cancer, nuclear fusion, vaccines, climate change, encryption, cosmology and other complex sciences and technologies. Housed at the Energy Department’s Argonne National Laboratory, Aurora is among a new breed of machines known as “exascale” supercomputers. In a single second, an exascale computer can perform one quintillion operations—a billion billion, or a one followed by 18 zeros.”
And the geopolitical “What Next” of it all? A Center for Security and Emerging Technology (CSET) translation of the November 2023 Chinese “Action Plan for the High-Quality Development of Computing Power Infrastructure” captures the strategic implications best:
“Computing power (“compute”) is a new productive force (新型生产力) that integrates information computing power, network carrying capacity (网络运载力), and data storage capacity. It mainly provides services to society through compute infrastructure. Compute infrastructure is an important part of the new information infrastructure. It is diverse, ubiquitous, intelligent, agile, secure, reliable, green, and low-carbon. It is of great significance for promoting industrial transformation and upgrading, empowering scientific and technological (S&T) innovation and progress, meeting the needs of the people for the good life (美好生活), and achieving high-efficiency governance of society. This action plan has been formulated to strengthen collaborative innovation in computing, networks, storage, and applications, promote the high-quality development of compute infrastructure, and give full play to the role of compute as a driver of the digital economy.”
For previous News Briefs and Original Analysis, see: OODA Loop | Computational Power OODA Loop | Computation OODA Loop | Compute
Computer Chip Supply Chain Vulnerabilities: Chip shortages have already disrupted various industries. The geopolitical aspect of the chip supply chain necessitates comprehensive strategic planning and risk mitigation. See: Chip Stratigame
Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.
The New Tech Trinity: Artificial Intelligence, BioTech, Quantum Tech: Will make monumental shifts in the world. This new Tech Trinity will redefine our economy, both threaten and fortify our national security, and revolutionize our intelligence community. None of us are ready for this. This convergence requires a deepened commitment to foresight and preparation and planning on a level that is not occurring anywhere. The New Tech Trinity.
AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.
Benefits of Automation and New Technology: Automation, AI, robotics, and Robotic Process Automation are improving business efficiency. New sensors, especially quantum ones, are revolutionizing sectors like healthcare and national security. Advanced WiFi, cellular, and space-based communication technologies are enhancing distributed work capabilities. See: Advanced Automation and New Technologies
Emerging NLP Approaches: While Big Data remains vital, there’s a growing need for efficient small data analysis, especially with potential chip shortages. Cost reductions in training AI models offer promising prospects for business disruptions. Breakthroughs in unsupervised learning could be especially transformative. See: What Leaders Should Know About NLP