
First Generation
Deep Learning Training & Inference Processor
For leading price performance in the cloud and on-premises

What makes the Intel Gaudi AI accelerator so efficient?

Process technology
Matrix multiplication engine
Tensor processor cores
Onboard HBM2
100G Ethernet ports
Massive and flexible system scaling with Intel Gaudi AI accelerator
Every first-generation Intel Gaudi AI processor integrates ten 100 Gigabit Ethernet ports of RDMA over Converged Ethernet (RoCE v2) on chip, delivering unmatched scalability and enabling customers to efficiently scale AI training from one processor to thousands, nimbly addressing the expansive compute requirements of today’s deep learning workloads.
Get details in this video >
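To make that scale-out path concrete, the sketch below shows one common way to drive multiple Gaudi processors from PyTorch: a DistributedDataParallel training loop using the hccl process-group backend registered by the SynapseAI PyTorch bridge. The module names (habana_frameworks.torch), the "hpu" device string, and the mark_step() calls follow the SynapseAI lazy-execution flow and are assumptions about the installed software stack rather than details stated on this page; it is a minimal sketch, not a complete recipe.

```python
import os
import torch
import torch.distributed as dist

# SynapseAI PyTorch bridge (assumed installed); importing the hccl module registers
# the Habana Collective Communications Library as a torch.distributed backend.
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.distributed.hccl  # noqa: F401


def train(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "12355")
    # Collectives run over HCCL, which uses the Gaudi processors' integrated
    # RoCE v2 Ethernet ports to reach cards in other servers.
    dist.init_process_group(backend="hccl", rank=rank, world_size=world_size)

    device = torch.device("hpu")
    model = torch.nn.Linear(1024, 1024).to(device)
    ddp_model = torch.nn.parallel.DistributedDataParallel(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)

    for _ in range(10):
        optimizer.zero_grad()
        loss = ddp_model(torch.randn(32, 1024, device=device)).pow(2).mean()
        loss.backward()
        htcore.mark_step()   # lazy mode: flush the queued graph before the optimizer step
        optimizer.step()
        htcore.mark_step()

    dist.destroy_process_group()


if __name__ == "__main__":
    # One process per Gaudi card, typically launched with torchrun or mpirun.
    train(rank=int(os.environ.get("RANK", 0)),
          world_size=int(os.environ.get("WORLD_SIZE", 1)))
```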
Options for building Intel Gaudi AI accelerator systems on premises
For customers who want to build out on-premises systems, we recommend the Supermicro X12 Intel Gaudi AI Training Server, which features eight Intel Gaudi AI processors. For customers who prefer to configure their own Intel Gaudi AI-based servers, we provide reference server designs, the Intel Gaudi AI HLS-1 and HLS-1H.
For more information on these server options, see details >
Making Development on the Intel Gaudi AI Accelerator Fast and Easy: SynapseAI® Software Suite
The SynapseAI software suite is optimized for deep learning model development and for easing the migration of existing GPU-based models to Intel Gaudi AI platform hardware. It integrates the PyTorch and TensorFlow frameworks and supports a rapidly growing array of computer vision, natural language processing, and multimodal models. In fact, over 50,000 models on Hugging Face are easily enabled on Gaudi with the Optimum Habana software library.
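As an illustration of that Hugging Face path, the sketch below fine-tunes a small Transformer with the optimum-habana package. The model name, the SST-2 dataset slice, and the "Habana/bert-base-uncased" Gaudi configuration name are illustrative assumptions rather than details taken from this page; treat it as a minimal sketch of the GaudiTrainer workflow.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small illustrative dataset (SST-2 sentiment); any tokenized dataset would do.
dataset = load_dataset("glue", "sst2", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, max_length=128),
    batched=True,
)

# The Gaudi-specific part: GaudiTrainingArguments targets the HPU and points to a
# Gaudi configuration assumed to be hosted under the Habana organization on the Hub.
args = GaudiTrainingArguments(
    output_dir="./sst2-gaudi",
    use_habana=True,       # run on Gaudi (HPU) rather than CPU/GPU
    use_lazy_mode=True,    # SynapseAI lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # assumed Hub config name
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

# GaudiTrainer is used as a drop-in replacement for transformers.Trainer.
trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```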
Getting started with model migration is as easy as adding two lines of code, and for expert users who wish to program their own kernels, Habana offers the full toolkit and libraries to do that as well. SynapseAI software supports training and inference of models on both first-generation Intel Gaudi AI and Intel Gaudi 2 AI accelerators.
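As a rough sketch of what that two-line change looks like in PyTorch (assuming the habana_frameworks.torch bridge and SynapseAI lazy mode), the Gaudi-specific additions below are the bridge import, the "hpu" device target, and the mark_step() calls that flush the accumulated graph; everything else is an ordinary training loop.

```python
import torch
import torch.nn.functional as F

import habana_frameworks.torch.core as htcore   # (1) load the SynapseAI PyTorch bridge

device = torch.device("hpu")                     # (2) target the Gaudi (HPU) device

model = torch.nn.Linear(256, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Toy data; real workloads would use a DataLoader as usual.
x = torch.randn(64, 256, device=device)
y = torch.randint(0, 10, (64,), device=device)

for _ in range(5):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    htcore.mark_step()   # lazy mode: execute the queued graph
    optimizer.step()
    htcore.mark_step()
```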
For more information on how Habana makes it easy to migrate existing models or build new models on Gaudi, see our SynapseAI product page >
