Habana Blog


Art Generation with PyTorch Stable Diffusion and Habana Gaudi

In this post, we will learn how to run PyTorch Stable Diffusion inference on the Habana Gaudi processor, which is purpose-built to accelerate AI deep learning models efficiently.
developer, Gaudi, pytorch

Detecting frequent graph re-compilations

In training workloads, certain scenarios can trigger graph re-compilations. Repeated compilation adds latency and slows the overall training process. This blog focuses on detecting these graph re-compilations.
debugging, developer, Gaudi, performance, pytorch
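One common trigger for re-compilation on graph-compiling accelerators is a change in input tensor shapes between steps. As a minimal, hypothetical sketch (not the post's actual tooling), a small shape monitor can flag steps where a fresh compilation is likely:

```python
from collections import Counter

class ShapeMonitor:
    """Track distinct input shapes seen during training.

    Accelerators that compile a graph per input shape must
    re-compile when a new shape appears, so a growing set of
    shapes is a common cause of frequent re-compilations.
    """

    def __init__(self):
        self.shapes = Counter()

    def record(self, *dims):
        """Record one batch's shape; return True if it is new."""
        shape = tuple(dims)
        is_new = shape not in self.shapes
        self.shapes[shape] += 1
        return is_new  # True -> a re-compilation is likely this step

monitor = ShapeMonitor()
monitor.record(32, 128)   # first shape: compile expected
monitor.record(32, 128)   # same shape: cached graph reused
monitor.record(32, 96)    # new shape: re-compile expected
```

Padding batches to a fixed shape is the usual remedy once such variation is found.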

Writing training scripts that can run either on Gaudi, GPU, or CPU

In this tutorial, we will learn how to write code that automatically detects which type of AI accelerator is installed on the machine (Gaudi, GPU, or CPU) and makes the changes needed to run the code smoothly.
developer, Gaudi
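The detection logic can be sketched roughly as follows. This is an illustrative sketch, not the tutorial's exact code: it assumes the `habana_frameworks` package is present only on Gaudi machines and falls back to CUDA, then CPU.

```python
import importlib.util

def select_device() -> str:
    """Pick the best available accelerator: Gaudi (HPU) > GPU (CUDA) > CPU."""
    # Gaudi machines ship the habana_frameworks PyTorch bridge.
    if importlib.util.find_spec("habana_frameworks") is not None:
        return "hpu"
    # Otherwise fall back to a CUDA GPU if PyTorch can see one.
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    # No accelerator found: run on CPU.
    return "cpu"

device = select_device()
print(f"Running on: {device}")
```

The returned string can then be passed to `tensor.to(device)` or `model.to(device)` so the rest of the script stays device-agnostic.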

Habana this week at re:Invent ’22

The Habana® team is excited to be at re:Invent 2022, November 28 – December 1.  ...

Release of SynapseAI® version 1.7.0

The Habana team is happy to announce the release of SynapseAI® version 1.7.0.

We have upgraded versions of several libraries with SynapseAI 1.7.0, including DeepSpeed 0.7.0, PyTorch Lightning 1.7.7, TensorFlow 2.10.0 and 2.8.3, Horovod 0.25.0, libfabric 1.16.1, EKS 1.23, and OpenShift 4.11.
developer, synapseai

Habana this week at Supercomputing ’22

The Habana® team is excited to be in Dallas at SuperComputing 2022. We look forward ...

Habana Gaudi2 makes another performance leap on MLPerf benchmark

Today MLCommons® published industry results for their AI training v2.1 benchmark that contained an impressive ...

Fine-tuning GPT2 with Hugging Face and Habana Gaudi

In this tutorial, we will demonstrate fine-tuning a GPT2 model on Habana Gaudi AI processors using the Hugging Face optimum-habana library with DeepSpeed.
DeepSpeed, developer, Fine Tuning, Gaudi, GPT, GPT2, Hugging Face

Habana Deep Learning Solutions Support OCP OAM Specification

Gaudi- and Gaudi2-based servers deliver flexibility with industry-standard interoperability. State-of-the-art deep learning applications require multiple ...

Memory-Efficient Training on Habana® Gaudi® with DeepSpeed

One of the key challenges in Large Language Model (LLM) training is reducing the memory requirements needed for training without sacrificing compute/communication efficiency and model accuracy.
DeepSpeed, developer, Gaudi, Large Language Models
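DeepSpeed's main memory-saving lever is ZeRO, which partitions optimizer states (stage 1), gradients (stage 2), and parameters (stage 3) across workers instead of replicating them. As a minimal, hypothetical sketch (values chosen for illustration, not taken from the post), a stage-2 configuration expressed as a Python dict might look like:

```python
# Minimal ZeRO stage-2 DeepSpeed configuration sketch.
# Stage 2 partitions optimizer states and gradients across
# workers, cutting per-device memory without sharding parameters.
ds_config = {
    "train_batch_size": 32,
    "zero_optimization": {
        "stage": 2,                    # partition optimizer states + gradients
        "contiguous_gradients": True,  # reduce gradient memory fragmentation
        "overlap_comm": True,          # overlap reduction with the backward pass
    },
    "bf16": {"enabled": True},         # lower-precision training saves memory
}
```

Such a dict (or an equivalent JSON file) is passed to `deepspeed.initialize`; moving to stage 3 additionally shards model parameters for the largest models.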

Habana Collaborates with Red Hat to Make AI/Deep Learning More Accessible to Enterprise Customers through OpenShift Data Science

AI is transforming enterprises with valuable business insights, increased operational efficiencies and enhanced user experiences ...

Habana at Intel Innovation 2022: Day One

The Habana team is excited to share with you our deep learning technologies and invite ...

Art Generation with PyTorch V-diffusion and Habana Gaudi

In this post, we will learn how to run PyTorch V-diffusion inference on Habana Gaudi ...

Migrating TensorFlow EfficientNet to Habana Gaudi

In this post, we will learn how to migrate a TensorFlow EfficientNet model from running ...
Gaudi

Release of SynapseAI® version 1.6.0

The Habana team is happy to announce the release of SynapseAI® version 1.6.0.  In this ...
Gaudi, Gaudi2

Since Habana’s Last MLPerf submission…

Much has happened at Habana since our last MLPerf submission in November 2021. We launched ...

Release of SynapseAI® version 1.5.0

The Habana® Labs team is happy to announce the release of SynapseAI® version 1.5.0. SynapseAI 1.5 brings many improvements, both in usability and in Habana ecosystem support. For PyTorch ...
Gaudi, Gaudi2

Habana Gaudi2 at Intel Vision 22

The Habana team is excited to have launched our next-gen 7nm training and inference processors, ...
Gaudi, Gaudi2

Habana Expands Gaudi® Platform Functionality with SynapseAI® version 1.4.1. 

We are excited to announce the release of SynapseAI 1.4.1. This is the first release ...

Train more, spend less with Habana Gaudi-based Amazon EC2 DL1 Instances

As demand for deep learning applications grows among enterprises and startups, so does the need ...

Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training

Powered by deep learning, transformer models deliver state-of-the-art performance on a wide range of machine ...

Habana Expands Gaudi® Platform Functionality with SynapseAI® version 1.4.0.

The Habana® Labs team is pleased to augment software support for the Gaudi platform with ...
Gaudi

Habana Labs and Grid.ai are making it easier to train on Gaudi with PyTorch Lightning

The Habana team is pleased to be collaborating with Grid.ai to make it easier and ...