Habana Blog

News & Discussion
Tagged: developer

Faster Training and Inference: Habana Gaudi®-2 vs Nvidia A100 80GB

In this article, you will learn how to use Habana® Gaudi®2 to accelerate model training and inference, and train bigger models with 🤗 Optimum Habana.
developer, Gaudi2, Hugging Face

The Habana team is happy to announce the release of SynapseAI® Software version 1.8.0.

We have upgraded versions of several libraries with SynapseAI 1.8.0, including PyTorch 1.13.1, PyTorch Lightning 1.8.6, and TensorFlow 2.11.0 & 2.8.4.
developer, synapseai

Pre-Training the BERT 1.5B model with DeepSpeed

In this post, we show you how to run Habana’s DeepSpeed-enabled BERT 1.5B model from our Model-References repository.
BERT, DeepSpeed, developer, Gaudi, Gaudi2, pytorch, synapseai

Road sign detection using Transfer Learning with TensorFlow EfficientDet-D0

In this post, we’ll show how Transfer Learning is an efficient way to train an existing model on a new and unique dataset with equivalent accuracy and significantly less training time.
developer, EfficientDet, Transfer Learning

Large Model usage with minGPT

This tutorial provides example training scripts that demonstrate different DeepSpeed optimization techniques on HPU, focusing on memory optimizations including the Zero Redundancy Optimizer (ZeRO) and activation checkpointing.
developer
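As a sketch of the kind of configuration the tutorial covers, the following builds a minimal DeepSpeed config that enables ZeRO stage 2 together with activation checkpointing. The field names follow the standard DeepSpeed config schema; the specific values here are illustrative assumptions, not the tutorial's own settings.

```python
import json

# Illustrative DeepSpeed config: ZeRO stage 2 plus activation checkpointing.
ds_config = {
    "train_batch_size": 64,
    "zero_optimization": {
        "stage": 2,            # partition optimizer states and gradients across workers
        "overlap_comm": True,  # overlap gradient communication with backward compute
    },
    "activation_checkpointing": {
        "partition_activations": True,           # shard checkpointed activations
        "contiguous_memory_optimization": True,  # reduce memory fragmentation
    },
}

# DeepSpeed accepts the config as a JSON file passed via --deepspeed_config.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

A training script would then be launched with `--deepspeed_config ds_config.json`, letting DeepSpeed trade some recomputation and communication for a much smaller per-device memory footprint.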

Training Causal Language Models on SDSC’s Gaudi-based Voyager Supercomputing Cluster

The SDSC Voyager supercomputer is an innovative AI system designed specifically for science and engineering research at scale.
developer

Art Generation with PyTorch Stable Diffusion and Habana Gaudi

In this post, we will learn how to run PyTorch Stable Diffusion inference on the Habana Gaudi processor, which is purpose-built to efficiently accelerate AI deep learning models.
developer, Gaudi, pytorch

Detecting frequent graph re-compilations

In training workloads, certain scenarios can trigger graph re-compilations. Repeated iterations of graph compilation add system latency and slow down the overall training process. This blog focuses on detecting these graph re-compilations.
debugging, developer, Gaudi, performance, pytorch

Writing training scripts that can run either on Gaudi, GPU, or CPU

In this tutorial, we will learn how to write code that automatically detects which device is available on the machine (Gaudi, GPU, or CPU) and makes the needed changes to run the code smoothly on it.
developer, Gaudi

The Habana team is happy to announce the release of SynapseAI® version 1.7.0.

We have upgraded versions of several libraries with SynapseAI 1.7.0, including DeepSpeed 0.7.0, PyTorch Lightning 1.7.7, TensorFlow 2.10.0 & 2.8.3, Horovod 0.25.0, libfabric 1.16.1, EKS 1.23, and OpenShift 4.11.
developer, synapseai

Fine-tuning GPT2 with Hugging Face and Habana Gaudi

In this tutorial, we will demonstrate fine-tuning a GPT2 model on Habana Gaudi AI processors using the Hugging Face optimum-habana library with DeepSpeed.
DeepSpeed, developer, Fine Tuning, Gaudi, GPT, GPT2, Hugging Face

Memory-Efficient Training on Habana® Gaudi® with DeepSpeed

One of the key challenges in Large Language Model (LLM) training is reducing the memory requirements needed for training without sacrificing compute/communication efficiency and model accuracy.
DeepSpeed, developer, Gaudi, Large Language Models