
FIRST GENERATION GAUDI:
DEEP LEARNING TRAINING AND INFERENCE PROCESSOR

UP TO 40% BETTER DEEP LEARNING
PRICE PERFORMANCE IN THE CLOUD
AND ON-PREMISES

Habana® Gaudi® Deep Learning Training and Inference Processor

With our first-generation Gaudi deep learning processor, Habana provides customers with a cost-effective, high-performance training and inference alternative to existing GPUs. This is the deep learning architecture that enables AWS to deliver the best price/performance training instance in its AI portfolio, the Gaudi-based DL1, with up to 40% better price/performance than comparable GPU-based instances. It also enables Supermicro to offer customers the X12 Gaudi Training Server at a similar 40% price/performance advantage over GPU-based servers.


GAUDI® In the Cloud

Get started with Gaudi-based
Amazon EC2 DL1 Instances

GAUDI® In the Data Center

Build Gaudi processors into your data center with Supermicro

MASSIVE & FLEXIBLE SYSTEM
SCALING WITH GAUDI

Every first-generation Gaudi processor integrates ten 100 Gigabit Ethernet ports of RDMA over Converged Ethernet (RoCE v2) on chip, delivering unmatched scalability and enabling customers to efficiently scale AI training and inference from a single processor to thousands to meet the expansive compute requirements of today’s deep learning workloads.
Get details in this video >
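
As a rough illustration of how that scale-out is exposed in software, the sketch below initializes PyTorch’s distributed package with Habana’s HCCL collective-communications backend so that data-parallel training communicates over the integrated RoCE links. The import paths, backend name and environment-variable launch convention follow common SynapseAI setups and should be read as assumptions, not a definitive recipe.

```python
# Minimal multi-Gaudi data-parallel sketch, assuming the SynapseAI PyTorch bridge
# (habana_frameworks) is installed and the job launches one process per Gaudi card.
import os
import torch
import torch.distributed as dist
import habana_frameworks.torch.core as htcore        # registers the "hpu" device
import habana_frameworks.torch.distributed.hccl      # registers the HCCL backend

def init_distributed():
    # Rank and world size are assumed to come from the launcher (e.g. mpirun or torchrun).
    rank = int(os.environ.get("RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    # Collectives issued through the "hccl" backend travel over the on-chip
    # 100 GbE RoCE ports, both card-to-card and node-to-node.
    dist.init_process_group(backend="hccl", rank=rank, world_size=world_size)
    return rank, world_size

def wrap_for_scale_out(model: torch.nn.Module) -> torch.nn.Module:
    model = model.to(torch.device("hpu"))
    # DistributedDataParallel keeps the replicas in sync with all-reduce over HCCL.
    return torch.nn.parallel.DistributedDataParallel(model)
```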

Scaling options for first-gen Gaudi-based systems


For customers who want to build out on-premises systems, we recommend the Supermicro X12 Gaudi Training Server, which comes configured with eight Gaudi processors and offers optional advanced AI storage with the DDN AI400X2 Storage Appliance.

For customers who wish to configure their own Gaudi-based servers, we provide reference model options, the Gaudi HLS-1 and HLS-1H. For more information on these server options, see more details >

Gaudi Usability

Making Development on Gaudi Fast and Easy
SynapseAI® Software Suite

The SynapseAI Software Suite is optimized for deep learning model development and for easing migration of existing GPU-based models to Gaudi platform hardware. It integrates the TensorFlow and PyTorch frameworks and a rapidly growing array of computer vision, natural language processing and multi-modal models. Getting started with model migration is as easy as adding two lines of code, and for expert users who wish to program their own kernels, Habana offers the full toolkit and libraries to do that as well. SynapseAI software supports training models on first-gen Gaudi and Gaudi2 and running inference on any target, including Intel® Xeon® processors, Habana® Greco™ or Gaudi2 itself.
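
As a hedged sketch of what that two-line migration typically looks like in PyTorch (the habana_frameworks import path, the "hpu" device name and the lazy-mode mark_step call follow common SynapseAI examples and may vary by release):

```python
import torch
import habana_frameworks.torch.core as htcore   # line 1: load the SynapseAI PyTorch bridge

device = torch.device("hpu")                     # line 2: target Gaudi instead of "cuda"

# The rest is an unmodified PyTorch training step, simply placed on the "hpu" device.
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
htcore.mark_step()   # in lazy mode, flushes the accumulated graph for execution on Gaudi
```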


Habana Developer Site

Developers are supported with documentation and tools, how-to content, tutorials and updates on training opportunities, from webinars to hands-on trainings, to make using Habana processors and the software platform as fast and easy as possible.

Habana GitHub

Our GitHub is the go-to hub for developers to access our Gaudi- and Gaudi2-integrated reference models, plan ahead with our open model roadmap, and file issues and bugs with the Habana team and the GitHub community.

Habana Community Forum

Our Developer Site houses the Habana Community Forum, another channel where developers can access relevant information and topics, share opinions and engage with industry peers.

Software Ecosystem

SynapseAI is also integrated with ecosystem partners such as Hugging Face, with its transformer model repositories and tools, Grid.ai PyTorch Lightning, and cnvrg.io MLOps software.
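
For the Hugging Face integration, for instance, the optimum-habana package provides Gaudi-aware drop-in replacements for the Transformers Trainer API. The sketch below follows its published examples; the class names, arguments and the Hub-hosted Gaudi config are assumptions to verify against the current release, and the training dataset is a hypothetical placeholder.

```python
# Illustrative sketch: fine-tuning a Transformers model on Gaudi via optimum-habana.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

args = GaudiTrainingArguments(
    output_dir="./gaudi-out",
    use_habana=True,                               # run on Gaudi ("hpu") devices
    use_lazy_mode=True,                            # let the graph compiler batch and optimize ops
    gaudi_config_name="Habana/bert-base-uncased",  # assumed Gaudi config hosted on the HF Hub
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=my_tokenized_dataset,            # hypothetical dataset prepared elsewhere
    tokenizer=tokenizer,
)
trainer.train()
```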