PURPOSE-BUILT
AI PROCESSORS FOR THE CLOUD
Habana processors are designed to deliver cost-efficient, easy-to-implement training of AI workloads and models in the cloud, making AI more accessible to more end customers through cloud service providers.
Efficiency
Habana® Gaudi® processors offer a proven price/performance advantage to cloud service providers, enabling them to deliver cost-efficient training options to their end customers. With Gaudi's software and hardware architected for training efficiency, end users can train on larger data sets and retrain models more frequently, increasing the accuracy and applicability of their deep learning workloads.
Usability
Habana is committed to making it easy for cloud service providers and their end customers to develop and deploy Gaudi-based training instances and models. Our SynapseAI® software platform integrates the TensorFlow and PyTorch frameworks and supports popular computer vision, natural language processing, and recommendation models, as illustrated in the sketch below. We also provide data scientists and developers with documentation, how-to guides and videos, community and Habana support forums, and tools on the Habana Developer Site and the Habana GitHub.
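As one illustration of the PyTorch integration, the following is a minimal sketch of a single training step on a Gaudi device through the SynapseAI PyTorch bridge. It assumes a Gaudi instance with the habana_frameworks package installed; the model, data, and hyperparameters are placeholders, not a recommended configuration.

```python
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # SynapseAI PyTorch bridge

# Gaudi devices are exposed to PyTorch as the "hpu" device type.
device = torch.device("hpu")

model = nn.Linear(784, 10).to(device)                 # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 784, device=device)          # placeholder batch
labels = torch.randint(0, 10, (64,), device=device)

# One training step; mark_step() triggers graph execution in lazy mode.
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
htcore.mark_step()
optimizer.step()
htcore.mark_step()
```

The only Gaudi-specific changes to a standard PyTorch loop in this sketch are the habana_frameworks import, the "hpu" device, and the mark_step() calls.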
Scalability
Ten 100-Gigabit ports of RDMA over Converged Ethernet (RoCE) are integrated into every Gaudi processor, giving cloud providers flexible, expandable networking capacity and enabling optimized networking within the server node through all-to-all processor connectivity. Integrated RoCE on every Gaudi also gives CSPs a cost-effective option for scaling out across nodes and racks, with versatile AI compute capacity based on affordable, industry-standard Ethernet technology; a scale-out training sketch follows below.
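As a hedged sketch of what scale-out looks like from the framework side, the code below uses PyTorch DistributedDataParallel with Habana's HCCL backend, which communicates over the RoCE links integrated on each Gaudi. It assumes habana_frameworks is installed and that the script is launched with one process per Gaudi card by an external launcher that sets the usual distributed environment variables; the model and data are placeholders.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.distributed.hccl  # registers the "hccl" backend

device = torch.device("hpu")
dist.init_process_group(backend="hccl")  # rank/world size come from the launcher environment

model = nn.Linear(784, 10).to(device)    # placeholder model
model = nn.parallel.DistributedDataParallel(model)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(64, 784, device=device)           # placeholder batch
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(inputs), labels)
loss.backward()        # gradients are all-reduced across Gaudi cards via HCCL over RoCE
htcore.mark_step()
optimizer.step()
htcore.mark_step()

dist.destroy_process_group()
```

The same pattern extends from cards within a node to multiple nodes and racks, since the collective communication rides on the integrated Ethernet fabric rather than a proprietary interconnect.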
Learn More About Habana AI Processors in the Cloud
