NVIDIA has launched a third-generation artificial intelligence system designed to train models on large AI datasets and handle various data center workloads, including analytics and inference.
NVIDIA said Thursday the software-defined DGX A100 system delivers 5 petaflops of AI computing performance and integrates eight of the company’s A100 Tensor Core graphics processing units with a combined 320 GB of GPU memory, along with interconnects reaching speeds of up to 200 Gbps.
Earlier in May, the company delivered the first DGX A100 cluster to the Department of Energy’s Argonne National Laboratory in Illinois to support supercomputer-driven COVID-19 research and development efforts.
Alongside the DGX A100 launch, NVIDIA also unveiled a DGX SuperPOD cluster built from 140 DGX A100 systems that delivers 700 petaflops of computing power to support research in areas including genomics, autonomous driving and conversational AI.
“NVIDIA DGX is the first AI system built for the end-to-end machine learning workflow, from data analytics to training to inference,” said Jensen Huang, founder and CEO of NVIDIA.
Government agencies, companies and service providers have already placed orders for DGX A100, according to NVIDIA. The University of Florida is slated to become the first institution of higher learning to use the technology to infuse AI across its academic activities.