Deep Learning Products


Performance and efficiency at every scale.

With surging demand for the advantages that deep learning and generative AI can bring, there has never been a greater need for improved compute performance, efficiency, usability, and choice. Intel Gaudi AI accelerators and Intel Gaudi software are designed to bring a new level of compute performance and choice to data center training and inference, whether in the cloud or on premises. Our aim is to make the benefits of deep learning accessible to more enterprises and organizations by removing barriers to adoption.

Performance


Cost efficiency


Scalability


Ease of use


To bring AI to many, we offer high-efficiency, deep learning-optimized processors and software:

The Intel Gaudi AI accelerator platform was conceived and architected to address the training and inference demands of large-scale AI, providing enterprises and organizations with high-performance, high-efficiency deep learning compute.

Intel Gaudi AI accelerator
First-generation deep learning training and inference for competitive price/performance


Intel Gaudi 2 AI accelerator
Second-generation deep learning training and inference for leadership performance


Intel Gaudi 3 AI accelerator
Our upcoming third-generation Intel Gaudi AI accelerator will bring another leap in performance and efficiency


Intel Gaudi Software
Software, developer tools, and resources for ease of use, ease of migration, and ease of deployment

Efficient Performance
  • Architected expressly for deep learning compute
  • Heterogeneous compute architecture
  • AI-optimized matrix multiplication engine
  • Custom Tensor Processor Cores (TPCs)
  • Large on-board memories
  • Networking integrated on chip
Massive & Flexible Scalability
  • On-chip integration of industry-standard RoCE (RDMA over Converged Ethernet)
  • Massive capacity with 10 integrated 100 GbE ports on first-generation Gaudi and 24 on Gaudi 2
  • All-to-all connectivity within the server
  • Flexible scale-out to support numerous configurations (see the sketch after this list)
  • Industry-standard networking lowers cost
  • Avoids vendor lock-in
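
As an illustration of how this scale-out surfaces in a framework (a sketch only, not taken from this page): distributed PyTorch training on Gaudi is typically initialized with the HCCL collective-communication backend that ships with the Intel Gaudi software suite. The package and backend names below follow the public Gaudi documentation and should be verified against the current release.

    # Sketch: distributed data-parallel setup on Intel Gaudi.
    # Assumes the Intel Gaudi software suite (habana_frameworks) is installed
    # and the script is launched with a standard launcher such as torchrun.
    import torch
    import torch.distributed as dist
    import habana_frameworks.torch.core as htcore       # Gaudi PyTorch bridge
    import habana_frameworks.torch.distributed.hccl     # registers the "hccl" backend

    dist.init_process_group(backend="hccl")             # collectives run over RoCE
    device = torch.device("hpu")

    model = torch.nn.Linear(1024, 1024).to(device)
    model = torch.nn.parallel.DistributedDataParallel(model)

Because the collectives run over standard Ethernet, the same script applies whether the accelerators are connected all-to-all within one server or scaled out across nodes on off-the-shelf switching.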
Ease of model migration & build
  • Software optimized for deep learning training & inference
  • Integrates popular frameworks: TensorFlow and PyTorch (a migration sketch follows this list)
  • Provides a custom graph compiler
  • Supports custom kernel development
  • Enables an ecosystem of software partners
  • Habana GitHub & Community Forum
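
To give a concrete sense of model migration, here is a minimal, hedged sketch of a single PyTorch training step on Gaudi. It assumes the habana_frameworks package from the Intel Gaudi software suite; in the documented lazy execution mode, htcore.mark_step() hands the accumulated operations to the graph compiler.

    # Sketch: moving a PyTorch training step to Gaudi (lazy mode).
    # Assumes the habana_frameworks package is installed.
    import torch
    import habana_frameworks.torch.core as htcore   # enables the "hpu" device

    device = torch.device("hpu")                    # was: torch.device("cuda")

    model = torch.nn.Linear(128, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    x = torch.randn(64, 128, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    htcore.mark_step()   # flush the accumulated graph to the compiler
    optimizer.step()
    htcore.mark_step()

Typically, little more changes than this: device strings move from "cuda" to "hpu" and mark_step() calls delimit graphs for the compiler, while the rest of the training loop stays as it was.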