Short description
This codemod converts existing PyTorch code that follows standard naming conventions to use HuggingFace Accelerate 0.26.1.
Detailed description
This codemod converts existing PyTorch code that follows standard naming conventions to use HuggingFace Accelerate 0.26.1, so that ML code can easily run in a distributed manner. The changes include additional import statements and adjustments wherever the device is referenced. See the Accelerate migration guide: https://huggingface.co/docs/accelerate/v0.26.1/en/basic_tutorials/migration
Examples
Before
```python
import torch

device = "cuda"
model.to(device)

for batch in training_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss.backward()
    optimizer.step()
    scheduler.step()
```
After
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
device = accelerator.device
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
```
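For context, here is a minimal, self-contained sketch of what the migrated code looks like when it can actually run end to end. The toy `Linear` model, synthetic `TensorDataset`, optimizer, scheduler, and hyperparameters are illustrative assumptions added for this sketch; they are not produced by the codemod.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
device = accelerator.device  # resolved per process; no hard-coded "cuda"

# Toy regression setup (assumption, for illustration only).
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
loss_function = torch.nn.MSELoss()
training_dataloader = DataLoader(
    TensorDataset(torch.randn(256, 10), torch.randn(256, 1)), batch_size=32
)

# prepare() wraps each object for the current setup (device placement,
# distributed wrapping, sharded dataloaders), which is why the manual
# .to(device) calls from the original code are removed.
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch  # already on the right device after prepare()
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
    scheduler.step()
```

A migrated script like this runs unchanged on CPU, a single GPU, or multiple processes; for distributed runs, launch it with `accelerate config` followed by `accelerate launch script.py`.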