On April 5, IBM unveiled IBM z16, the company’s next-generation system with an integrated on-chip artificial intelligence (AI) accelerator for latency-optimized inferencing. With this innovation, clients will be able to analyze real-time transactions at scale, making IBM z16 especially valuable for mission-critical workloads such as credit card, health care, and financial transactions.

Inference is the process of running live data points through a machine learning (ML) model to calculate a specific output; in financial transactions, for example, the output might be a numerical score indicating the likelihood of fraud. While highly valuable, inference often isn’t fast enough to run on all transactions at scale without damaging service levels. IBM z16 aims to make those limits obsolete: IBM has embedded an AI accelerator on the IBM Telum processor, so banks can now screen transactions for fraud as they occur, on a massive scale. IBM z16 can process 300 billion inference requests per day with just one millisecond of latency.

For consumers, z16 may reduce the frustration of handling fraudulent charges on their credit cards, as claims can be processed faster. For merchants and card issuers, low-latency inference means less revenue loss, and with fewer false charge declines and improved customer service, consumer churn becomes less likely.
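To make the idea of in-transaction inference concrete, here is a minimal sketch of scoring a single live transaction with a trained model. The feature names, weights, and threshold are entirely hypothetical illustrations, not IBM's actual model or API; the point is only the shape of the workflow: features in, score out, decision before the transaction completes.

```python
import math

# Hypothetical fraud-scoring model. The weights, bias, and feature names
# below are illustrative placeholders, not a real production model.
WEIGHTS = {"amount_zscore": 1.8, "foreign_merchant": 0.9, "night_hour": 0.4}
BIAS = -3.0

def fraud_score(transaction: dict) -> float:
    """Run one live transaction through the model; return P(fraud) in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * transaction.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps score to probability

# One inference request: a transaction arrives, is scored in-line,
# and is approved or flagged before it completes.
txn = {"amount_zscore": 2.5, "foreign_merchant": 1.0, "night_hour": 1.0}
score = fraud_score(txn)
decision = "flag" if score >= 0.5 else "approve"
```

In a real deployment this scoring step would run on dedicated hardware (such as the z16's on-chip accelerator) so that millions of such requests per second can be answered within the transaction's latency budget.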
Full story: IBM Develops AI-Powered z16 to Help Thwart Quantum Cyber Attacks.