Phorm.ai
OpenAccess-AI-Collective/axolotl

    how to use Docker for multi-node finetuning with deepspeed

    To use Docker for multi-node finetuning with DeepSpeed, you need to follow several steps to ensure that your Docker environment is correctly set up for distributed training across multiple nodes. Here's a step-by-step guide:

    1. Prepare Your Docker Image: Ensure your Docker image includes all necessary dependencies for running your finetuning script, including PyTorch, DeepSpeed, and any other libraries your script requires. You can use the Dockerfile.jinja as a starting point to build your Docker image. Here's a simplified example based on the Dockerfile.jinja:

      FROM pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime

      RUN pip install deepspeed

      # Copy your training script and any necessary files into the Docker image
      COPY train.py /workspace/train.py

      # Add any other dependencies or steps necessary for your project
      WORKDIR /workspace

      Build your Docker image with:

      docker build -t your-image-name .
    2. Configure DeepSpeed for Multi-Node Training: Create a DeepSpeed configuration file (ds_config.json) that specifies your desired settings for multi-node training. For example:

      {
        "train_batch_size": 32,
        "gradient_accumulation_steps": 1,
        "fp16": {
          "enabled": true
        },
        "zero_optimization": {
          "stage": 2,
          "allgather_partitions": true,
          "allgather_bucket_size": 2e8,
          "overlap_comm": true,
          "reduce_scatter": true,
          "reduce_bucket_size": 2e8,
          "contiguous_gradients": true
        }
      }
    3. Prepare Your Nodes: Ensure each node has Docker installed and can communicate with each other over a network. It's crucial that the nodes can SSH into each other without a password for DeepSpeed's multi-node capabilities to work. You can achieve this by setting up SSH keys and adding them to ~/.ssh/authorized_keys on each node.
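      A minimal sketch of the passwordless SSH setup, assuming OpenSSH and a login user named user (hostnames, usernames, and key type are placeholders to adjust for your environment):

      # On the master node: generate a key pair (skip if one already exists)
      ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
      # Copy the public key to every worker node
      ssh-copy-id user@worker_node_ip
      # Verify that login now works without a password prompt
      ssh user@worker_node_ip hostname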

    4. Launch the Docker Containers: On each node, run the Docker container with the necessary environment variables set for DeepSpeed multi-node training. You'll need to specify the master node's IP address (MASTER_ADDR), the master port (MASTER_PORT), the number of nodes (WORLD_SIZE), and the rank of each node (RANK).

      For example, on the master node (rank 0):

      docker run --gpus all --network host -e MASTER_ADDR=master_node_ip -e MASTER_PORT=29500 -e WORLD_SIZE=2 -e RANK=0 your-image-name deepspeed train.py --deepspeed ds_config.json

      And on the worker node (rank 1):

      docker run --gpus all --network host -e MASTER_ADDR=master_node_ip -e MASTER_PORT=29500 -e WORLD_SIZE=2 -e RANK=1 your-image-name deepspeed train.py --deepspeed ds_config.json

      Replace master_node_ip with the IP address of your master node, and adjust WORLD_SIZE to the total number of nodes.

    5. Run Your Training Script: Your training script (train.py) should be set up to initialize DeepSpeed and start the training process. Make sure to pass --deepspeed ds_config.json so DeepSpeed's optimizations are enabled.

    This setup allows you to run distributed finetuning across multiple nodes using Docker and DeepSpeed. Remember to adjust paths, IP addresses, and other configurations according to your specific setup and requirements.

    Search results
    [huggingface/transformers] docs/source/en/deepspeed.md

    Multi-node deployment

    A node is one or more GPUs for running a workload. A more powerful setup is a multi-node setup which can be launched with the deepspeed launcher. For this guide, let's assume there are two nodes with 8 GPUs each. The first node can be accessed ssh hostname1 and the second node with ssh hostname2. Both nodes must be able to communicate with each other locally over ssh without a password.

    By default, DeepSpeed expects your multi-node environment to use a shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a checkpoint to allow loading without access to a shared filesystem:

    { "checkpoint": { "use_node_local_storage": true } }

    You could also use the [Trainer]'s --save_on_each_node argument to automatically add the above checkpoint to your config.
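    For example, a hedged sketch of passing that flag on the launch command (the script name and other arguments are placeholders from this guide):

    deepspeed --hostfile hostfile your_program.py <normal cl args> \
    --deepspeed ds_config.json --save_on_each_node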

    <hfoptions id="multinode"> <hfoption id="torchrun">

    For torchrun, you have to ssh to each node and run the following command on both of them. The launcher waits until both nodes are synchronized before launching the training.

    torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=hostname1 \
    --master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
    </hfoption> <hfoption id="deepspeed">

    For the deepspeed launcher, start by creating a hostfile.

    hostname1 slots=8
    hostname2 slots=8

    Then you can launch the training with the following command. The deepspeed launcher automatically launches the command on both nodes at once.

    deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
    your_program.py <normal cl args> --deepspeed ds_config.json

    Check out the Resource Configuration (multi-node) guide for more details about configuring multi-node compute resources.

    </hfoption> </hfoptions>
    [huggingface/transformers] docs/source/en/deepspeed.md
    <Tip>

    For more information about initializing large models with ZeRO-3 and accessing the parameters, take a look at the Constructing Massive Models and Gathering Parameters guides.

    </Tip> </hfoption> </hfoptions>

    NVMe configuration

    ZeRO-Infinity allows offloading model states to the CPU and/or NVMe to save even more memory. Smart partitioning and tiling algorithms allow each GPU to send and receive very small amounts of data during offloading such that a modern NVMe can fit an even larger total memory pool than is available to your training process. ZeRO-Infinity requires ZeRO-3.

    Depending on the CPU and/or NVMe memory available, you can offload both the optimizer states and parameters, just one of them, or none. You should also make sure the nvme_path is pointing to an NVMe device, because while it still works with a normal hard drive or solid state drive, it'll be significantly slower. With a modern NVMe, you can expect peak transfer speeds of ~3.5GB/s for read and ~3GB/s for write operations. Lastly, run a benchmark on your training setup to determine the optimal aio configuration.

    The example ZeRO-3/Infinity configuration file below sets most of the parameter values to auto, but you could also manually add these values.

    { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "nvme", "nvme_path": "/local_nvme", "pin_memory": true, "buffer_count": 4, "fast_init": false }, "offload_param": { "device": "nvme", "nvme_path": "/local_nvme", "pin_memory": true, "buffer_count": 5, "buffer_size": 1e8, "max_in_cpu": 1e9 }, "aio": { "block_size": 262144, "queue_depth": 32, "thread_count": 1, "single_submit": false, "overlap_events": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false }

    DeepSpeed features

    There are a number of important parameters to specify in the DeepSpeed configuration file which are briefly described in this section.

    Activation/gradient checkpointing

    Activation and gradient checkpointing trades speed for more GPU memory which allows you to overcome scenarios where your GPU is out of memory or to increase your batch size for better performance. To enable this feature:

    1. For a Hugging Face model, set model.gradient_checkpointing_enable() or pass --gradient_checkpointing to the [Trainer] (see the example after this list).
    2. For a non-Hugging Face model, use the DeepSpeed Activation Checkpointing API. You could also modify the Transformers modeling code and swap torch.utils.checkpoint for the DeepSpeed API. This approach is more flexible because you can offload the forward activations to CPU memory instead of recalculating them.
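    As an illustration of the first option, a hedged command-line sketch (the script path and arguments are placeholders reused from the deployment examples in this guide):

    deepspeed --num_gpus=2 examples/pytorch/translation/run_translation.py \
    --deepspeed tests/deepspeed/ds_config_zero3.json \
    --gradient_checkpointing <normal cl args>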

    Optimizer and scheduler

    DeepSpeed and Transformers optimizer and scheduler can be mixed and matched as long as you don't enable offload_optimizer. When offload_optimizer is enabled, you could use a non-DeepSpeed optimizer (except for LAMB) as long as it has both a CPU and GPU implementation.

    <Tip warning={true}>

    The optimizer and scheduler parameters for the config file can be set from the command line to avoid hard to find errors. For example, if the learning rate is set to a different value in another place you can override it from the command line. Aside from the optimizer and scheduler parameters, you'll need to ensure your [Trainer] command line arguments match the DeepSpeed configuration.

    </Tip> <hfoptions id="opt-sched"> <hfoption id="optimizer">

    DeepSpeed offers several optimizers (Adam, AdamW, OneBitAdam, and LAMB) but you can also import other optimizers from PyTorch. If you don't configure the optimizer in the config, the [Trainer] automatically selects AdamW and either uses the supplied values or the default values for the following parameters from the command line: lr, adam_beta1, adam_beta2, adam_epsilon, weight_decay.

    You can set the parameters to "auto" or manually input your own desired values.

    { "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } } }

    You can also use an unsupported optimizer by adding the following to the top level configuration.

    { "zero_allow_untested_optimizer": true }

    From DeepSpeed==0.8.3 on, if you want to use offload, you'll also need to add the following to the top-level configuration because offload works best with DeepSpeed's CPU Adam optimizer.

    { "zero_force_ds_cpu_optimizer": false }
    </hfoption> <hfoption id="scheduler">

    DeepSpeed supports the LRRangeTest, OneCycle, WarmupLR and WarmupDecayLR learning rate schedulers.

    Transformers and DeepSpeed provide two of the same schedulers:

    • WarmupLR is the same as --lr_scheduler_type constant_with_warmup in Transformers
    • WarmupDecayLR is the same as --lr_scheduler_type linear in Transformers (this is the default scheduler used in Transformers)

    If you don't configure the scheduler in the config, the [Trainer] automatically selects WarmupDecayLR and either uses the supplied values or the default values for the following parameters from the command line: warmup_min_lr, warmup_max_lr, warmup_num_steps, total_num_steps (automatically calculated during run time if max_steps is not provided).

    You can set the parameters to "auto" or manually input your own desired values.

    { "scheduler": { "type": "WarmupDecayLR", "params": { "total_num_steps": "auto", "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } } }
    </hfoption> </hfoptions>

    Precision

    Deepspeed supports fp32, fp16, and bf16 mixed precision.

    <hfoptions id="precision"> <hfoption id="fp32">

    If your model doesn't work well with mixed precision, for example if it wasn't pretrained in mixed precision, you may encounter overflow or underflow issues which can cause NaN loss. For these cases, you should use full fp32 precision by explicitly disabling the default fp16 mode.

    { "fp16": { "enabled": false } }

    For Ampere GPUs and PyTorch > 1.7, it automatically switches to the more efficient tf32 format for some operations but the results are still in fp32. You can control it from the [Trainer] by setting --tf32 to enable it, and --tf32 0 or --no_tf32 to disable it.

    </hfoption> <hfoption id="fp16">

    Configuring PyTorch AMP-like fp16 mixed precision reduces memory usage and accelerates training speed. [Trainer] automatically enables or disables fp16 based on the value of args.fp16_backend, and the rest of the config can be set by you. fp16 is enabled from the command line when the following arguments are passed: --fp16, --fp16_backend amp or --fp16_full_eval.

    { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } }

    For additional DeepSpeed fp16 training options, take a look at the FP16 Training Options reference.

    To configure Apex-like fp16 mixed precision, set up the config as shown below with "auto" or your own values. [Trainer] automatically configures amp based on the values of args.fp16_backend and args.fp16_opt_level. It can also be enabled from the command line when the following arguments are passed: --fp16, --fp16_backend apex or --fp16_opt_level O1.

    { "amp": { "enabled": "auto", "opt_level": "auto" } }
    </hfoption> <hfoption id="bf16">

    To use bf16, you'll need at least DeepSpeed==0.6.0. bf16 has the same dynamic range as fp32 and doesn’t require loss scaling. However, if you use gradient accumulation with bf16, gradients are accumulated in bf16 which may not be desired because this format's low precision can lead to lossy accumulation.

    bf16 can be setup in the config file or enabled from the command line when the following arguments are passed: --bf16 or --bf16_full_eval.

    { "bf16": { "enabled": "auto" } }
    </hfoption> </hfoptions>

    Batch size

    The batch size can be auto-configured or explicitly set. If you choose to use the "auto" option, [Trainer] sets train_micro_batch_size_per_gpu to the value of args.per_device_train_batch_size and train_batch_size to args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps.

    { "train_micro_batch_size_per_gpu": "auto", "train_batch_size": "auto" }
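    For example, with hypothetical numbers: on 2 nodes of 8 GPUs each (args.world_size = 16), args.per_device_train_batch_size = 4 and args.gradient_accumulation_steps = 2 resolve to train_micro_batch_size_per_gpu = 4 and train_batch_size = 16 * 4 * 2 = 128.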

    Gradient accumulation

    Gradient accumulation can be auto-configured or explicitly set. If you choose to use the "auto" option, [Trainer] sets it to the value of args.gradient_accumulation_steps.

    { "gradient_accumulation_steps": "auto" }

    Gradient clipping

    Gradient clipping can be auto-configured or explicitly set. If you choose to use the "auto" option, [Trainer] sets it to the value of args.max_grad_norm.

    { "gradient_clipping": "auto" }

    Communication data type

    For communication collectives like reduction, gathering and scattering operations, a separate data type is used.

    All gather and scatter operations are performed in the same data type the data is in. For example, if you're training with bf16, the data is also gathered in bf16 because gathering is a non-lossy operation.

    Reduce operations are lossy, for example when gradients are averaged across multiple GPUs. When the communication is done in fp16 or bf16, it is more likely to be lossy because adding multiple numbers in low precision isn't exact. This is especially the case with bf16 which has a lower precision than fp16. For this reason, fp16 is the default for reduction operations because the loss is minimal when averaging gradients.

    You can choose the communication data type by setting the communication_data_type parameter in the config file. For example, choosing fp32 adds a small amount of overhead but ensures the reduction operation is accumulated in fp32 and when it is ready, it is downcasted to whichever half-precision dtype you're training in.

    { "communication_data_type": "fp32" }

    Deployment

    DeepSpeed can be deployed by different launchers such as torchrun, the deepspeed launcher, or Accelerate. To deploy, add --deepspeed ds_config.json to the [Trainer] command line. It’s recommended to use DeepSpeed’s add_config_arguments utility to add any necessary command line arguments to your code.

    This guide will show you how to deploy DeepSpeed with the deepspeed launcher for different training setups. You can check out this post for more practical usage examples.

    <hfoptions id="deploy"> <hfoption id="multi-GPU">

    To deploy DeepSpeed on multiple GPUs, add the --num_gpus parameter. If you want to use all available GPUs, you don't need to add --num_gpus. The example below uses 2 GPUs.

    deepspeed --num_gpus=2 examples/pytorch/translation/run_translation.py \
    --deepspeed tests/deepspeed/ds_config_zero3.json \
    --model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir --fp16 \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --source_lang en --target_lang ro
    </hfoption> <hfoption id="single-GPU">

    To deploy DeepSpeed on a single GPU, add the --num_gpus parameter. It isn't necessary to explicitly set this value if you only have 1 GPU because DeepSpeed deploys all GPUs it can see on a given node.

    deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
    --deepspeed tests/deepspeed/ds_config_zero2.json \
    --model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
    --output_dir output_dir --overwrite_output_dir --fp16 \
    --do_train --max_train_samples 500 --num_train_epochs 1 \
    --dataset_name wmt16 --dataset_config "ro-en" \
    --source_lang en --target_lang ro

    DeepSpeed is still useful with just 1 GPU because you can:

    1. Offload some computations and memory to the CPU to make more GPU resources available to your model to use a larger batch size or fit a very large model that normally won't fit.
    2. Minimize memory fragmentation with its smart GPU memory management system which also allows you to fit bigger models and data batches.
    <Tip>

    Set the allgather_bucket_size and reduce_bucket_size values to 2e8 in the ZeRO-2 configuration file to get better performance on a single GPU.

    </Tip> </hfoption> </hfoptions>

    [huggingface/peft] examples/sft/configs/deepspeed_config.yaml
    compute_environment: LOCAL_MACHINE
    debug: false
    deepspeed_config:
      deepspeed_multinode_launcher: standard
      gradient_accumulation_steps: 4
      offload_optimizer_device: none
      offload_param_device: none
      zero3_init_flag: true
      zero3_save_16bit_model: true
      zero_stage: 3
    distributed_type: DEEPSPEED
    downcast_bf16: 'no'
    machine_rank: 0
    main_training_function: main
    mixed_precision: bf16
    num_machines: 1
    num_processes: 8
    rdzv_backend: static
    same_network: true
    tpu_env: []
    tpu_use_cluster: false
    tpu_use_sudo: false
    use_cpu: false
    [huggingface/accelerate] docs/source/basic_tutorials/launch.md

    Multi-node training

    Multi-node training with 🤗Accelerate is similar to multi-node training with torchrun. The simplest way to launch a multi-node training run is to do the following:

    • Copy your codebase and data to all nodes. (or place them on a shared filesystem)
    • Setup your python packages on all nodes.
    • Run accelerate config on the main single node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file across all of your nodes, changing the machine_rank to 1, 2, 3, etc. to avoid having to run the command on each node (or just follow the directions for launching with torchrun directly).

    Once you have done this, you can start your multi-node training run by running accelerate launch (or torchrun) on all nodes.
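    For example, a hedged sketch of an explicit launch run on every node, assuming 2 nodes with 8 GPUs each (the IP address, port, and script name are placeholders; the flags are standard accelerate launch options):

    # On the main node (machine_rank 0); repeat on each worker with --machine_rank 1, 2, ...
    accelerate launch --num_machines 2 --num_processes 16 --machine_rank 0 \
    --main_process_ip 192.168.1.2 --main_process_port 29500 \
    your_training_script.py --args_to_my_script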

    <Tip> It is required that the command be run on all nodes for everything to start, not just running it from the main node. You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command. </Tip> <Tip>

    It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the 192.168.x.x or the 172.x.x.x address you see when you run hostname -I on the main node.

    </Tip>

    To get a better idea about multi-node training, check out our example for multi-node training with FSDP.

    [huggingface/transformers] docs/source/en/deepspeed.md


    SLURM

    In a SLURM environment, you'll need to adapt your SLURM script to your specific SLURM environment. An example SLURM script may look like:

    #SBATCH --job-name=test-nodes        # name
    #SBATCH --nodes=2                    # nodes
    #SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
    #SBATCH --cpus-per-task=10           # number of cores per tasks
    #SBATCH --gres=gpu:8                 # number of gpus
    #SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
    #SBATCH --output=%x-%j.out           # output file name

    export GPUS_PER_NODE=8
    export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
    export MASTER_PORT=9901

    srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
     --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
     --master_addr $MASTER_ADDR --master_port $MASTER_PORT \
    your_program.py <normal cl args> --deepspeed ds_config.json'

    Then you can schedule your multi-node deployment with the following command which launches training simultaneously on all nodes.

    sbatch launch.slurm

    Notebook

    The deepspeed launcher doesn't support deployment from a notebook so you'll need to emulate the distributed environment. However, this only works for 1 GPU. If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work. This means you have to use the deepspeed launcher which can't be emulated as shown here.

    # DeepSpeed requires a distributed environment even when only one process is used.
    # This emulates a launcher in the notebook
    import os

    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "9994"  # modify if RuntimeError: Address already in use
    os.environ["RANK"] = "0"
    os.environ["LOCAL_RANK"] = "0"
    os.environ["WORLD_SIZE"] = "1"

    # Now proceed as normal, plus pass the DeepSpeed config file
    training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
    trainer = Trainer(...)
    trainer.train()

    If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell.

    %%bash cat <<'EOT' > ds_config_zero3.json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } EOT

    If the training script is in a file and not in a notebook cell, you can launch deepspeed normally from the shell in a notebook cell. For example, to launch run_translation.py:

    !git clone https://github.com/huggingface/transformers
    !cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...

    You could also use %%bash magic and write multi-line code to run the shell program, but you won't be able to view the logs until training is complete. With %%bash magic, you don't need to emulate a distributed environment.

    %%bash
    git clone https://github.com/huggingface/transformers
    cd transformers
    deepspeed examples/pytorch/translation/run_translation.py ...
    [huggingface/peft] docs/source/accelerate/deepspeed.md

    Use PEFT and DeepSpeed with ZeRO3 for finetuning large models on multiple devices and multiple nodes

    This section of the guide will help you learn how to use our DeepSpeed training script for performing SFT. You'll configure the script to do SFT (supervised fine-tuning) of the Llama-70B model with LoRA and ZeRO-3 on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.

    Configuration

    Start by running the following command to create a DeepSpeed configuration file with 🤗 Accelerate. The --config_file flag allows you to save the configuration file to a specific location, otherwise it is saved as a default_config.yaml file in the 🤗 Accelerate cache.

    The configuration file is used to set the default options when you launch the training script.

    accelerate config --config_file deepspeed_config.yaml

    You'll be asked a few questions about your setup, and configure the following arguments. In this example, you'll use ZeRO-3 so make sure you pick those options.

    • `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
    • `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them. Pass the same value as you would pass via the cmd argument, else you will encounter a mismatch error.
    • `gradient_clipping`: Enable gradient clipping with value. Don't set this as you will be passing it via cmd arguments.
    • `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2. Set this to `none` as we don't want to enable offloading.
    • `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3. Set this to `none` as we don't want to enable offloading.
    • `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3. Set this to `True`.
    • `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3. Set this to `True`.
    • `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training. Set this to `bf16`.

    Once this is done, the corresponding config should look like below and you can find it in config folder at deepspeed_config.yaml:

    compute_environment: LOCAL_MACHINE
    debug: false
    deepspeed_config:
      deepspeed_multinode_launcher: standard
      gradient_accumulation_steps: 4
      offload_optimizer_device: none
      offload_param_device: none
      zero3_init_flag: true
      zero3_save_16bit_model: true
      zero_stage: 3
    distributed_type: DEEPSPEED
    downcast_bf16: 'no'
    machine_rank: 0
    main_training_function: main
    mixed_precision: bf16
    num_machines: 1
    num_processes: 8
    rdzv_backend: static
    same_network: true
    tpu_env: []
    tpu_use_cluster: false
    tpu_use_sudo: false
    use_cpu: false

    Launch command

    The launch command is available at run_peft_deepspeed.sh and it is also shown below:

    accelerate launch --config_file "configs/deepspeed_config.yaml" train.py \
    --seed 100 \
    --model_name_or_path "meta-llama/Llama-2-70b-hf" \
    --dataset_name "smangrul/ultrachat-10k-chatml" \
    --chat_template_format "chatml" \
    --add_special_tokens False \
    --append_concat_token False \
    --splits "train,test" \
    --max_seq_len 2048 \
    --num_train_epochs 1 \
    --logging_steps 5 \
    --log_level "info" \
    --logging_strategy "steps" \
    --evaluation_strategy "epoch" \
    --save_strategy "epoch" \
    --push_to_hub \
    --hub_private_repo True \
    --hub_strategy "every_save" \
    --bf16 True \
    --packing True \
    --learning_rate 1e-4 \
    --lr_scheduler_type "cosine" \
    --weight_decay 1e-4 \
    --warmup_ratio 0.0 \
    --max_grad_norm 1.0 \
    --output_dir "llama-sft-lora-deepspeed" \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 4 \
    --gradient_checkpointing True \
    --use_reentrant False \
    --dataset_text_field "content" \
    --use_flash_attn True \
    --use_peft_lora True \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.1 \
    --lora_target_modules "all-linear" \
    --use_4bit_quantization False

    Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the deepspeed config file and finetuning 70B Llama model on a subset of the ultrachat dataset.

    The important parts

    Let's dive a little deeper into the script so you can see what's going on, and understand how it works.

    The first thing to know is that the script uses DeepSpeed for distributed training as the DeepSpeed config has been passed. The SFTTrainer class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call trainer.train(), SFTTrainer internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the DeepSpeed config to create DeepSpeed engine which is then trained. The main code snippet is below:

    # trainer
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        peft_config=peft_config,
        packing=data_args.packing,
        dataset_kwargs={
            "append_concat_token": data_args.append_concat_token,
            "add_special_tokens": data_args.add_special_tokens,
        },
        dataset_text_field=data_args.dataset_text_field,
        max_seq_length=data_args.max_seq_length,
    )
    trainer.accelerator.print(f"{trainer.model}")

    # train
    checkpoint = None
    if training_args.resume_from_checkpoint is not None:
        checkpoint = training_args.resume_from_checkpoint
    trainer.train(resume_from_checkpoint=checkpoint)

    # saving final model
    trainer.save_model()

    Memory usage

    In the above example, the memory consumed per GPU is 64 GB (80%) as seen in the screenshot below:

    <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_deepspeed_mem_usage.png"/> </div> <small>GPU memory usage for the training run</small>

    More resources

    You can also refer to the blog post Falcon 180B Finetuning using 🤗 PEFT and DeepSpeed to learn how to finetune the 180B Falcon model on 16 A100 GPUs across 2 machines.

    [huggingface/peft] examples/sft/configs/deepspeed_config_z3_qlora.yaml
    compute_environment: LOCAL_MACHINE
    debug: false
    deepspeed_config:
      deepspeed_multinode_launcher: standard
      offload_optimizer_device: none
      offload_param_device: none
      zero3_init_flag: true
      zero3_save_16bit_model: true
      zero_stage: 3
    distributed_type: DEEPSPEED
    downcast_bf16: 'no'
    machine_rank: 0
    main_training_function: main
    mixed_precision: bf16
    num_machines: 1
    num_processes: 2
    rdzv_backend: static
    same_network: true
    tpu_env: []
    tpu_use_cluster: false
    tpu_use_sudo: false
    use_cpu: false
    [huggingface/accelerate] src/accelerate/commands/launch.py
    def deepspeed_launcher(args): import torch.distributed.run as distrib_run if not is_deepspeed_available(): raise ImportError("DeepSpeed is not installed => run `pip3 install deepspeed` or build it from source.") cmd, current_env = prepare_deepspeed_cmd_env(args) if not check_cuda_p2p_ib_support(): message = "Using RTX 4000 series which doesn't support faster communication speedups. Ensuring P2P and IB communications are disabled." warn = False if "NCCL_P2P_DISABLE" not in current_env: current_env["NCCL_P2P_DISABLE"] = "1" warn = True if "NCCL_IB_DISABLE" not in current_env: current_env["NCCL_IB_DISABLE"] = "1" warn = True if warn: logger.warning(message) if args.num_machines > 1 and args.deepspeed_multinode_launcher != DEEPSPEED_MULTINODE_LAUNCHERS[1]: with open(".deepspeed_env", "a") as f: for key, value in current_env.items(): if ";" in value or " " in value: continue f.write(f"{key}={value}\n") process = subprocess.Popen(cmd, env=current_env) process.wait() if process.returncode != 0: if not args.quiet: raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) else: sys.exit(1) else: debug = getattr(args, "debug", False) args = _filter_args( args, distrib_run.get_args_parser(), ["--training_script", args.training_script, "--training_script_args", args.training_script_args], ) with patch_environment(**current_env): try: distrib_run.run(args) except Exception: if is_rich_available() and debug: console = get_console() console.print("\n[bold red]Using --debug, `torch.distributed` Stack Trace:[/bold red]") console.print_exception(suppress=[__file__], show_locals=False) else: raise
    [huggingface/accelerate] docs/source/usage_guides/deepspeed.md

    Accelerate DeepSpeed Plugin

    On your machine(s) just run:

    accelerate config

    and answer the questions asked. It will ask whether you want to use a config file for DeepSpeed to which you should answer no. Then answer the following questions to generate a basic DeepSpeed config. This will generate a config file that will be used automatically to properly set the default options when doing

    accelerate launch my_script.py --args_to_my_script

    For instance, here is how you would run the NLP example examples/nlp_example.py (from the root of the repo) with DeepSpeed Plugin:

    ZeRO Stage-2 DeepSpeed Plugin Example

    compute_environment: LOCAL_MACHINE
    deepspeed_config:
      gradient_accumulation_steps: 1
      gradient_clipping: 1.0
      offload_optimizer_device: none
      offload_param_device: none
      zero3_init_flag: true
      zero_stage: 2
    distributed_type: DEEPSPEED
    fsdp_config: {}
    machine_rank: 0
    main_process_ip: null
    main_process_port: null
    main_training_function: main
    mixed_precision: fp16
    num_machines: 1
    num_processes: 2
    use_cpu: false
    accelerate launch examples/nlp_example.py --mixed_precision fp16

    ZeRO Stage-3 with CPU Offload DeepSpeed Plugin Example

    compute_environment: LOCAL_MACHINE
    deepspeed_config:
      gradient_accumulation_steps: 1
      gradient_clipping: 1.0
      offload_optimizer_device: cpu
      offload_param_device: cpu
      zero3_init_flag: true
      zero3_save_16bit_model: true
      zero_stage: 3
    distributed_type: DEEPSPEED
    fsdp_config: {}
    machine_rank: 0
    main_process_ip: null
    main_process_port: null
    main_training_function: main
    mixed_precision: fp16
    num_machines: 1
    num_processes: 2
    use_cpu: false
    accelerate launch examples/nlp_example.py --mixed_precision fp16

    Currently, Accelerate supports following config through the CLI:

    • `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
    • `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
    • `gradient_clipping`: Enable gradient clipping with value.
    • `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
    • `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
    • `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
    • `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
    • `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.

    To be able to tweak more options, you will need to use a DeepSpeed config file.

    [openaccess-ai-collective/axolotl] docs/multi-node.qmd
    ---
    title: Multi Node
    description: How to use Axolotl on multiple machines
    ---
    
    You will need to create a configuration for accelerate, either by using `accelerate config` and following the instructions, or by using one of the presets below:
    
    ~/.cache/huggingface/accelerate/default_config.yaml
    ```yaml
    compute_environment: LOCAL_MACHINE
    debug: false
    distributed_type: FSDP
    downcast_bf16: 'no'
    machine_rank: 0 # Set to 0 for the main machine, increment by one for other machines
    main_process_ip: 10.0.0.4 # Set to main machine's IP
    main_process_port: 5000
    main_training_function: main
    mixed_precision: bf16
    num_machines: 2 # Change to the number of machines
    num_processes: 4 # That's the total number of GPUs, (for example: if you have 2 machines with 4 GPU, put 8)
    rdzv_backend: static
    same_network: true
    tpu_env: []
    tpu_use_cluster: false
    tpu_use_sudo: false
    use_cpu: false
    ```

    Configure your model to use FSDP, for example:

    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer

    Machine configuration

    On each machine you need a copy of Axolotl; we suggest using the same commit to ensure compatibility.

    You will also need to have the same configuration file for your model on each machine.

    On the main machine only, make sure the port you set as main_process_port is open in TCP and reachable by other machines.
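    For example, a hedged sketch assuming an Ubuntu host that uses ufw as its firewall (adapt to whatever firewall you actually run; the IP and port come from the config above):

    # On the main machine only: allow the accelerate main process port
    sudo ufw allow 5000/tcp
    # Quick reachability check from any other machine (requires netcat)
    nc -zv 10.0.0.4 5000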

    All you have to do now is launch using accelerate on each machine as you usually would, and voilà, the processes will start once you have launched accelerate on every machine.
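    A minimal sketch, assuming the accelerate config above is in place on every machine and your model config is your_config.yml:

    # Run the same command on every machine; training starts once all machines have joined
    accelerate launch -m axolotl.cli.train your_config.yml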

    [openaccess-ai-collective/axolotl] README.md

    Advanced Setup

    Environment

    Docker

    docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest

    Or run on the current files for development:

    docker compose up -d

    [!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide's section on Docker.

    <details> <summary>Docker advanced</summary>

    A more powerful Docker command to run would be this:

    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-latest

    It additionally:

    • Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args.
    • Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args.
    • The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
    • The --privileged flag gives all capabilities to the container.
    • The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.

    More information on nvidia website

    </details>

    Conda/Pip venv

    1. Install python >=3.10

    2. Install pytorch stable https://pytorch.org/get-started/locally/

    3. Install Axolotl along with python dependencies

      pip3 install packaging
      pip3 install -e '.[flash-attn,deepspeed]'
    4. (Optional) Login to Huggingface to use gated models/datasets.

      huggingface-cli login

      Get the token at huggingface.co/settings/tokens

    Cloud GPU

    For cloud GPU providers that support docker images, use winglian/axolotl-cloud:main-latest

    Bare Metal Cloud GPU

    LambdaLabs
    <details> <summary>Click to Expand</summary>
    1. Install python

      sudo apt update
      sudo apt install -y python3.10
      sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
      sudo update-alternatives --config python # pick 3.10 if given option
      python -V # should be 3.10

    2. Install pip

      wget https://bootstrap.pypa.io/get-pip.py
      python get-pip.py

    3. Install Pytorch https://pytorch.org/get-started/locally/

    4. Follow instructions on quickstart.

    5. Run

      pip3 install protobuf==3.20.3
      pip3 install -U --ignore-installed requests Pillow psutil scipy

    6. Set path

      export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
    </details>
    GCP
    <details> <summary>Click to Expand</summary>

    Use a Deeplearning linux OS with cuda and pytorch installed. Then follow instructions on quickstart.

    Make sure to run the below to uninstall xla.

    pip uninstall -y torch_xla[tpu]
    </details>

    Windows

    Please use WSL or Docker!

    Mac

    Use the below instead of the install method in QuickStart.

    pip3 install -e '.'
    

    More info: mac.md

    Google Colab

    Please use this example notebook.

    Launching on public clouds via SkyPilot

    To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:

    pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]"  # choose your clouds
    sky check

    Get the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:

    git clone https://github.com/skypilot-org/skypilot.git
    cd skypilot/llm/axolotl
    

    Use one command to launch:

    # On-demand
    HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN

    # Managed spot (auto-recovery on preemption)
    HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET

    Dataset

    Axolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.

    See these docs for more information on how to use different dataset formats.
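    For illustration, a hedged sketch of an alpaca-style JSONL file (the field names follow the alpaca prompt template; the records themselves are made-up placeholders):

    cat > data.jsonl <<'EOF'
    {"instruction": "Summarize the following text.", "input": "Axolotl is a tool for finetuning language models.", "output": "Axolotl helps finetune language models."}
    {"instruction": "Translate to French.", "input": "Hello, world.", "output": "Bonjour, le monde."}
    EOF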

    Config

    See examples for quick start. It is recommended to duplicate and modify to your needs. The most important options are:

    • model

      base_model: ./llama-7b-hf # local or huggingface repo

      Note: The code will load the right architecture.

    • dataset

      datasets:
        # huggingface repo
        - path: vicgalle/alpaca-gpt4
          type: alpaca

        # huggingface repo with specific configuration/subset
        - path: EleutherAI/pile
          name: enron_emails
          type: completion # format from earlier
          field: text # Optional[str] default: text, field to use for completion data

        # huggingface repo with multiple named configurations/subsets
        - path: bigcode/commitpackft
          name:
            - ruby
            - python
            - typescript
          type: ... # unimplemented custom format

        # fastchat conversation
        # See 'conversation' options: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
        - path: ...
          type: sharegpt
          conversation: chatml # default: vicuna_v1.1

        # local
        - path: data.jsonl # or json
          ds_type: json # see other options below
          type: alpaca

        # dataset with splits, but no train split
        - path: knowrohit07/know_sql
          type: context_qa.load_v2
          train_on_split: validation

        # loading from s3 or gcs
        # s3 creds will be loaded from the system default and gcs only supports public access
        - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.
          ...

        # Loading Data From a Public URL
        # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
        - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
          ds_type: json # this is the default, see other options below.
    • loading

      load_in_4bit: true
      load_in_8bit: true
      bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
      fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
      tf32: true # require >=ampere
      bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
      float16: true # use instead of fp16 when you don't want AMP

      Note: Repo does not do 4-bit quantization.

    • lora

      adapter: lora # 'qlora' or leave blank for full finetune
      lora_r: 8
      lora_alpha: 16
      lora_dropout: 0.05
      lora_target_modules:
        - q_proj
        - v_proj

    All Config Options

    See these docs for all config options.

    Train

    Run

    accelerate launch -m axolotl.cli.train your_config.yml

    [!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml

    Preprocess dataset

    You can optionally pre-tokenize the dataset with the following before finetuning. This is recommended for large datasets.

    • Set dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.
    • (Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
    • (Optional): Use --debug to see preprocessed examples.
    python -m axolotl.cli.preprocess your_config.yml

    Multi-GPU

    Below are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is the recommended multi-GPU option currently because FSDP may experience loss instability.

    DeepSpeed

    Deepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU's VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated

    We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.

    deepspeed: deepspeed_configs/zero1.json
    accelerate launch -m axolotl.cli.train examples/llama-2/config.py --deepspeed deepspeed_configs/zero1.json
    FSDP
    • llama FSDP
    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    FSDP + QLoRA

    Axolotl supports training with FSDP and QLoRA, see these docs for more information.

    Weights & Biases Logging

    Make sure your WANDB_API_KEY environment variable is set (recommended) or you login to wandb with wandb login.
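    For example (the key value is a placeholder):

    export WANDB_API_KEY=xxxxxxxxxxxxxxxx   # or run: wandb login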

    • wandb options
    wandb_mode:
    wandb_project:
    wandb_entity:
    wandb_watch:
    wandb_name:
    wandb_log_model:
    Special Tokens

    It is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer's vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:

    special_tokens:
      bos_token: "<s>"
      eos_token: "</s>"
      unk_token: "<unk>"
    tokens: # these are delimiters
      - "<|im_start|>"
      - "<|im_end|>"

    When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer's vocabulary.
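
    As a concrete illustration, models trained on a ChatML-style format often also override the end-of-sequence token so generation stops at the turn delimiter. This is a hedged variant of the block above, not a universal recommendation:

    ```yaml
    special_tokens:
      eos_token: "<|im_end|>" # hypothetical override: stop generation at the ChatML turn delimiter
    tokens:
      - "<|im_start|>"
    ```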

    Inference Playground

    Axolotl allows you to load your model in an interactive terminal playground for quick experimentation. The config file is the same config file used for training.

    Pass the appropriate flag to the inference command, depending upon what kind of model was trained:

    • Pretrained LORA:
      python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
    • Full weights finetune:
      python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
    • Full weights finetune w/ a prompt from a text file:
      cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
        --base_model="./completed-model" --prompter=None --load_in_8bit=True

    • With gradio hosting

    python -m axolotl.cli.inference examples/your_config.yml --gradio

    Please use --sample_packing False if you have it on and receive an error similar to the one below:

    RuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1

    Merge LORA to base

    The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.

    python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"

    You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with

    CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...

    although this will be very slow; using the config options above is recommended instead.
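
    These two options are ordinary config keys. A hedged sketch of what they might look like (the memory limit value and unit are only placeholders):

    ```yaml
    gpu_memory_limit: 20GiB # placeholder cap on GPU memory to use while merging
    lora_on_cpu: true       # keep the LoRA weights on CPU during the merge
    ```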

    [openaccess-ai-collective/axolotl] docs/debugging.qmd
    ---
    title: Debugging
    description: How to debug Axolotl
    ---
    
    
    This document provides some tips and tricks for debugging Axolotl.  It also provides an example configuration for debugging with VSCode.  A good debugging setup is essential to understanding how Axolotl code works behind the scenes.
    
    ## Table of Contents
    
    - [General Tips](#general-tips)
    - [Debugging with VSCode](#debugging-with-vscode)
        - [Background](#background)
        - [Configuration](#configuration)
        - [Customizing your debugger](#customizing-your-debugger)
        - [Video Tutorial](#video-tutorial)
    - [Debugging With Docker](#debugging-with-docker)
        - [Setup](#setup)
        - [Attach To Container](#attach-to-container)
        - [Video - Attaching To Docker On Remote Host](#video---attaching-to-docker-on-remote-host)
    
    ## General Tips
    
    While debugging it's helpful to simplify your test scenario as much as possible.  Here are some tips for doing so:
    
    > [!Important]
    > All of these tips are incorporated into the [example configuration](#configuration) for debugging with VSCode below.
    
    1. **Make sure you are using the latest version of axolotl**:  This project changes often and bugs get fixed fast.  Check your git branch and make sure you have pulled the latest changes from `main`.
    2. **Eliminate concurrency**: Restrict the number of processes to 1 for both training and data preprocessing:
        - Set `CUDA_VISIBLE_DEVICES` to a single GPU, ex: `export CUDA_VISIBLE_DEVICES=0`.
        - Set `dataset_processes: 1` in your axolotl config or run the training command with `--dataset_processes=1`.
    3. **Use a small dataset**: Construct or use a small dataset from the HF Hub. When using a small dataset, you will often have to make sure `sample_packing: False` and `eval_sample_packing: False` to avoid errors.  If you are in a pinch and don't have time to construct a small dataset but want to use one from the HF Hub, you can shard the data (this will still tokenize the entire dataset, but will only use a fraction of it for training). For example, to shard the dataset into 20 pieces, add the following to your axolotl config:
        ```yaml
        dataset:
            ...
            shards: 20
        ```
    4. **Use a small model**: A good example of a small model is [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
    5. **Minimize iteration time**: Make sure the training loop finishes as fast as possible, with these settings:
        - `micro_batch_size: 1`
        - `max_steps: 1`
        - `val_set_size: 0`
    6. **Clear Caches:** Axolotl caches certain steps and so does the underlying HuggingFace trainer.  You may want to clear some of these caches when debugging.
        - Data preprocessing: When debugging data preprocessing, which includes prompt template formation, you may want to delete the directory set in `dataset_prepared_path:` in your axolotl config.  If you didn't set this value, the default is `last_run_prepared`.
        - HF Hub: If you are debugging data preprocessing, you should clear the relevant [HuggingFace datasets cache](https://huggingface.co/docs/datasets/cache) by deleting the appropriate `~/.cache/huggingface/datasets/...` folder(s).
        - **The recommended approach is to redirect all outputs and caches to a temporary folder and delete selected subfolders before each run.  This is demonstrated in the example configuration below.**
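
    Taken together, the tips above might translate into a config fragment like the following hedged sketch (the model and dataset are just the examples mentioned above, and every value is adjustable):

    ```yaml
    # Hedged debugging config fragment; merge into your real config rather than using verbatim.
    base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    datasets:
      - path: philschmid/guanaco-sharegpt-style # small sharegpt-style dataset from the HF Hub
        type: sharegpt
        shards: 20 # only use 1/20th of the data
    dataset_processes: 1 # no preprocessing concurrency
    dataset_prepared_path: temp_debug/axolotl_outputs/data
    micro_batch_size: 1
    max_steps: 1
    val_set_size: 0
    sample_packing: false
    eval_sample_packing: false
    ```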
    
    
    ## Debugging with VSCode
    
    ### Background
    
    The below example shows how to configure VSCode to debug data preprocessing of the `sharegpt` format.  This is the format used when you have the following in your axolotl config:
    
    ```yaml
    datasets:
      - path: <path to your sharegpt formatted dataset> # example on HF Hub: philschmid/guanaco-sharegpt-style
        type: sharegpt
    ```

    [!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.

    [!Tip] If you prefer to watch a video, rather than read, you can skip to the video tutorial below (but doing both is recommended).

    Setup

    Make sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:

    pip3 install packaging
    pip3 install -e '.[flash-attn,deepspeed]'

    Remote Hosts

    If you are developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow this Remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.

    Configuration

    The easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.

    For example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_sharegpt.yml, you would use the below configuration[^1]. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.

    ```jsonc
    // .vscode/launch.json
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Debug axolotl prompt - sharegpt",
                "type": "python",
                "module": "accelerate.commands.launch",
                "request": "launch",
                "args": [
                    "-m", "axolotl.cli.train", "dev_sharegpt.yml",
                    // The flags below simplify debugging by overriding the axolotl config
                    // with the debugging tips above.  Modify as needed.
                    "--dataset_processes=1",       // limits data preprocessing to one process
                    "--max_steps=1",               // limits training to just one step
                    "--batch_size=1",              // minimizes batch size
                    "--micro_batch_size=1",        // minimizes batch size
                    "--val_set_size=0",            // disables validation
                    "--sample_packing=False",      // disables sample packing which is necessary for small datasets
                    "--eval_sample_packing=False", // disables sample packing on eval set
                    "--dataset_prepared_path=temp_debug/axolotl_outputs/data", // send data outputs to a temp folder
                    "--output_dir=temp_debug/axolotl_outputs/model"            // send model outputs to a temp folder
                ],
                "console": "integratedTerminal",      // show output in the integrated terminal
                "cwd": "${workspaceFolder}/devtools", // set working directory to devtools from the root of the project
                "justMyCode": true,                   // step through only axolotl code
                "env": {"CUDA_VISIBLE_DEVICES": "0",  // Since we aren't doing distributed training, we need to limit to one GPU
                        "HF_HOME": "${workspaceFolder}/devtools/temp_debug/.hf-cache"}, // send HF cache to a temp folder
                "preLaunchTask": "cleanup-for-dataprep" // delete temp folders (see below)
            }
        ]
    }
    ```

    Additional notes about this configuration:

    • The argument justMyCode is set to true such that you step through only the axolotl code. If you want to step into dependencies, set this to false.
    • The preLaunchTask: cleanup-for-dataprep is defined in .vscode/tasks.json and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:
      • ./devtools/temp_debug/axolotl_outputs
      • ./devtools/temp_debug/.hf-cache/datasets

    [!Tip] You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the tasks.json file depending on your use case.

    Below is the .vscode/tasks.json file that defines the cleanup-for-dataprep task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task, cleanup-for-dataprep, is a composite task that combines the two. A composite task is necessary because VSCode does not allow you to specify multiple tasks in the preLaunchTask argument of the launch.json file.

    ```jsonc
    // .vscode/tasks.json
    // this file is used by launch.json
    {
        "version": "2.0.0",
        "tasks": [
            // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder
            {
                "label": "delete-outputs",
                "type": "shell",
                "command": "rm -rf temp_debug/axolotl_outputs",
                "options": { "cwd": "${workspaceFolder}/devtools" },
                "problemMatcher": []
            },
            // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder
            {
                "label": "delete-temp-hf-dataset-cache",
                "type": "shell",
                "command": "rm -rf temp_debug/.hf-cache/datasets",
                "options": { "cwd": "${workspaceFolder}/devtools" },
                "problemMatcher": []
            },
            // this task combines the two tasks above
            {
                "label": "cleanup-for-dataprep",
                "dependsOn": ["delete-outputs", "delete-temp-hf-dataset-cache"]
            }
        ]
    }
    ```

    Customizing your debugger

    Your debugging use case may differ from the example above. The easiest thing to do is to put your own axolotl config in the devtools folder and modify the launch.json file to use your config. You may also want to modify the preLaunchTask to delete different folders or not delete anything at all.

    Video Tutorial

    The following video tutorial walks through the above configuration and demonstrates how to debug with VSCode (click the image below to watch):

    <div style="text-align: center; line-height: 0;">

    <a href="https://youtu.be/xUUB11yeMmc" target="_blank" title="How to debug Axolotl (for fine tuning LLMs)"><img src="https://i.ytimg.com/vi/xUUB11yeMmc/maxresdefault.jpg" style="border-radius: 10px; display: block; margin: auto;" width="560" height="315" /></a>

    <figcaption style="font-size: smaller;"><a href="https://hamel.dev">Hamel Husain's</a> tutorial: <a href="https://www.youtube.com/watch?v=xUUB11yeMmc">Debugging Axolotl w/VSCode</a></figcaption> </div> <br>

    Debugging With Docker

    Using official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.

    Setup

    On the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:

    git clone https://github.com/OpenAccess-AI-Collective/axolotl cd axolotl

    [!Tip] If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.

    Next, run the desired docker image and mount the current directory. Below is a docker command you can run to do this:[^2]

    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-py3.10-cu118-2.0.1

    [!Tip] To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl's Docker CI builds.

    You will now be in the container. Next, perform an editable install of Axolotl:

    pip3 install packaging
    pip3 install -e '.[flash-attn,deepspeed]'

    Attach To Container

    Next, if you are using a remote host, Remote into this host with VSCode. If you are using a local host, you can skip this step.

    Next, select Dev Containers: Attach to Running Container... using the command palette (CMD + SHIFT + P) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. Any changes you make to the code will be reflected both in the container and on the host.

    Now you are ready to debug as described above (see Debugging with VSCode).

    Video - Attaching To Docker On Remote Host

    Here is a short video that demonstrates how to attach to a Docker container on a remote host:

    <div style="text-align: center; line-height: 0;">

    <a href="https://youtu.be/0AuoR7QnHR0" target="_blank" title="Debugging Axolotl Part 2: Attaching to Docker on a Remote Host"><img src="https://i.ytimg.com/vi/0AuoR7QnHR0/hqdefault.jpg" style="border-radius: 10px; display: block; margin: auto;" width="560" height="315" /></a>

    <figcaption style="font-size: smaller;"><a href="https://hamel.dev">Hamel Husain's</a> tutorial: <a href="https://youtu.be/0AuoR7QnHR0">Debugging Axolotl Part 2: Attaching to Docker on a Remote Host </a></figcaption> </div> <br>

    [^1]: The config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/sharegpt.yml, but this is the same thing.

    [^2]: Many of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.

    [openaccess-ai-collective/axolotl] examples/jamba/qlora_deepspeed.yaml
    base_model: ai21labs/Jamba-v0.1
    trust_remote_code: true
    load_in_8bit: false
    load_in_4bit: true
    strict: false

    datasets:
      - path: mhenrichsen/alpaca_2k_test
        type: alpaca
    dataset_prepared_path:
    val_set_size: 0.0
    output_dir: ./out

    sequence_len: 4096
    sample_packing: false
    pad_to_sequence_len: false
    eval_sample_packing: false

    wandb_project:
    wandb_entity:
    wandb_watch:
    wandb_name:
    wandb_log_model:

    adapter: qlora
    lora_r: 8
    lora_alpha: 16
    lora_dropout: 0.05
    lora_target_linear: true
    low_cpu_mem_usage: true

    gradient_accumulation_steps: 4
    micro_batch_size: 1
    num_epochs: 2
    optimizer: paged_adamw_8bit
    lr_scheduler: cosine
    learning_rate: 0.00001

    train_on_inputs: false
    group_by_length: false
    bf16: auto
    fp16:
    tf32: false

    gradient_checkpointing: true
    gradient_checkpointing_kwargs:
      use_reentrant: false
    early_stopping_patience:
    resume_from_checkpoint:
    local_rank:
    logging_steps: 1
    xformers_attention:
    flash_attention: true

    warmup_steps: 10
    evals_per_epoch:
    saves_per_epoch: 1
    debug:
    deepspeed: deepspeed_configs/zero2.json
    weight_decay: 0.0
    special_tokens:
    [openaccess-ai-collective/axolotl] cicd/Dockerfile.jinja
    FROM winglian/axolotl-base:{{ BASE_TAG }}
    
    ENV TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6+PTX"
    ENV AXOLOTL_EXTRAS="{{ AXOLOTL_EXTRAS }}"
    ENV AXOLOTL_ARGS="{{ AXOLOTL_ARGS }}"
    ENV CUDA="{{ CUDA }}"
    ENV BNB_CUDA_VERSION="{{ CUDA }}"
    ENV PYTORCH_VERSION="{{ PYTORCH_VERSION }}"
    ENV GITHUB_REF="{{ GITHUB_REF }}"
    ENV GITHUB_SHA="{{ GITHUB_SHA }}"
    
    RUN apt-get update && \
        apt-get install -y --allow-change-held-packages vim curl nano libnccl2 libnccl-dev
    
    WORKDIR /workspace
    
    RUN git clone --depth=1 https://github.com/OpenAccess-AI-Collective/axolotl.git
    
    WORKDIR /workspace/axolotl
    
    RUN git fetch origin +$GITHUB_REF && \
        git checkout FETCH_HEAD
    
    # If AXOLOTL_EXTRAS is set, append it in brackets
    RUN pip install causal_conv1d
    RUN if [ "$AXOLOTL_EXTRAS" != "" ] ; then \
            pip install -e .[deepspeed,flash-attn,mamba-ssm,galore,$AXOLOTL_EXTRAS] $AXOLOTL_ARGS; \
        else \
            pip install -e .[deepspeed,flash-attn,mamba-ssm,galore] $AXOLOTL_ARGS; \
        fi
    
    # So we can test the Docker image
    RUN pip install pytest
    
    # fix so that git fetch/pull from remote works
    RUN git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*" && \
        git config --get remote.origin.fetch
    
    # helper for huggingface-login cli
    RUN git config --global credential.helper store
    
    