The platform built from inception for deep learning training and inference workloads in the data center.
Architected for high-efficiency deep learning compute, scalability, and usability.
Gaudi technology exists to bring a new level of efficiency to
data center model training, whether in the cloud or on premises.
According to the 2020 IDC Semiannual AI Tracker, 56% of AI and machine learning practitioners surveyed report that cost-to-train is one of the leading obstacles to their organization reaping the advantages of AI. Habana’s Gaudi technology was architected from the ground up to address this problem, making AI more accessible to more customers, so they can derive the business insights and efficiencies and deliver the enhanced user experiences that AI can provide.
In support of this mission, we now offer two Gaudi deep learning processor options.
- Architected expressly for AI compute
- Heterogeneous architecture
- AI-dedicated matrix multiplication engine
- Large on-board memories
- Networking integrated on chip
MASSIVE & FLEXIBLE SCALING
- On-chip integration of RDMA over Converged Ethernet (RoCE v2)
- Avoids scale-up bottlenecks
- Supports flexible scale-out
- Industry-standard networking gives customers choice & lowers build-out cost
EASE OF MODEL BUILD
- Supported by SynapseAI Software Suite optimized for Gaudi & deep learning workloads
- Integrated TensorFlow & PyTorch frameworks and 36 computer vision & NLP reference models
- Habana Developer Site & GitHub
- Habana Community Forum