AMD disclosed a few more details on the MI300 GPU, due later this year, including support for 192GB of HBM3 memory on the MI300X. Here’s what we know.

In today’s world of ChatGPT, everyone keeps asking whether the NVIDIA A100 and H100 GPUs are the only platforms that can deliver the compute and large memory that Large Language Models (LLMs) require. And the answer is yes, at least for now. But AMD intends to change that later this year with a new GPU, the MI300X.

CEO Lisa Su was visibly excited to announce a few more details about her company’s upcoming data center GPU at a data center event today. Dr. Su announced a version of the MI300 CPU/GPU APU, teased at CES earlier this year, that replaces the three EPYC compute dies with two more GPU dies, adding compute and HBM memory capacity. The MI300X will be AMD’s flagship offering for large AI models, so this is a very big deal for the company and its investors.

It will be available as a single accelerator as well as on an 8-GPU OCP-compliant board, called the Instinct Platform, similar to the NVIDIA HGX. However, it will use Infinity Fabric to connect the GPUs and will run the ROCm AI software stack.
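To see why 192GB on a single accelerator matters for LLMs, here is a rough back-of-envelope sketch. The model size and byte counts are illustrative assumptions, not vendor figures, and the estimate covers weights only (KV cache and activations add more):

```python
# Back-of-envelope estimate of LLM inference memory (illustrative only):
# counts weights alone, ignoring KV cache, activations, and framework overhead.

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory to hold model weights, in GB. fp16/bf16 uses 2 bytes per parameter."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A hypothetical 70-billion-parameter model in 16-bit precision:
print(weight_memory_gb(70))        # 140.0 GB of weights alone
print(weight_memory_gb(70) <= 192)  # fits in 192 GB on one accelerator: True
print(weight_memory_gb(70) <= 80)   # fits in 80 GB on one accelerator: False
```

By this rough math, a model of that scale would need multiple 80GB-class GPUs but could sit in a single 192GB device, which is the pitch AMD is making with the MI300X.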
Full story: Is The New AMD MI300X Better Than The NVIDIA H100?