Phorm.ai
OpenAccess-AI-Collective/axolotl

    how do I set an offload_dir when merging a lora?

    To set an offload_dir when merging a LoRA (Low-Rank Adaptation) model, you would normally specify the directory to which the model's state dictionary should be offloaded, i.e. saved to disk, which helps reduce memory usage when working with large models. However, the merge operation itself does not take an offload_dir: as the peft snippets below show, merge_and_unload() only accepts progressbar, safe_merge, and adapter_names.
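
    For reference, the documented peft merge looks like this (the model and adapter IDs below come from the peft docstring and are placeholders for your own); note the absence of any offload_dir argument:

    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Load the base model and attach the trained LoRA adapter
    base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b")
    model = PeftModel.from_pretrained(base_model, "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample")

    # merge_and_unload accepts progressbar, safe_merge and adapter_names, but no offload_dir
    merged_model = model.merge_and_unload(progressbar=True)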

    Offloading to a directory is instead handled by utilities in the accelerate library. For instance, accelerate's offload_state_dict function writes a model's state dictionary to a given directory, and disk_offload sets up full disk offload for a model using an offload_dir. Since the merge has no offload_dir parameter of its own, you can offload the model's state dictionary manually before or after the merge operation if needed.

    Here's a conceptual approach to offloading a model's state dictionary to a specified directory after merging the LoRA adapters:

    import os

    from accelerate.utils.offload import offload_state_dict

    # Assuming `lora_model` is your LoRA-adapted model (a peft model wrapping your base model)

    # Perform the merge operation
    merged_model = lora_model.merge_and_unload()

    # Specify your offload directory (created if it does not exist)
    offload_dir = "path/to/your/offload_dir"
    os.makedirs(offload_dir, exist_ok=True)

    # Offload the merged model's state dictionary to disk
    offload_state_dict(offload_dir, merged_model.state_dict())

    This approach manually offloads the merged model's state dictionary to the specified directory. It's important to note that this process is separate from the merging operation itself and is handled after the model has been merged.
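
    Alternatively, if you want the merged model to keep its weights on disk and page them in only during forward passes, accelerate's disk_offload utility (shown in the accelerate snippet further below) accepts an offload_dir directly. A minimal sketch, assuming `merged_model` from the snippet above and that accelerate is installed:

    from accelerate import disk_offload

    # Offload all parameters of the merged model as memory-mapped files in offload_dir;
    # they are loaded onto the execution device on the fly during forward passes.
    merged_model = disk_offload(
        merged_model,
        offload_dir="path/to/your/offload_dir",
        # execution_device defaults to the model's first parameter device;
        # pass e.g. torch.device("cuda:0") to run forward passes on a GPU
    )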

    Remember, the actual implementation details might vary based on the specific versions of the libraries (peft, accelerate) you are using and the exact requirements of your project. Always refer to the latest documentation of the libraries for the most accurate and up-to-date information.

    Search results
    [openaccess-ai-collective/axolotl] src/axolotl/cli/merge_lora.py
    def do_cli(config: Path = Path("examples/"), **kwargs):
        # pylint: disable=duplicate-code
        print_axolotl_text_art()
        parser = transformers.HfArgumentParser((TrainerCliArgs))
        parsed_cli_args, _ = parser.parse_args_into_dataclasses(
            return_remaining_strings=True
        )
        parsed_cli_args.merge_lora = True

        parsed_cfg = load_cfg(
            config,
            merge_lora=True,
            load_in_8bit=False,
            load_in_4bit=False,
            flash_attention=False,
            deepspeed=None,
            fsdp=None,
            **kwargs,
        )

        if not parsed_cfg.lora_model_dir and parsed_cfg.output_dir:
            parsed_cfg.lora_model_dir = parsed_cfg.output_dir

        if not Path(parsed_cfg.lora_model_dir).exists():
            raise ValueError(
                f"Target directory for merge: `{parsed_cfg.lora_model_dir}` does not exist."
            )

        parsed_cfg.load_in_4bit = False
        parsed_cfg.load_in_8bit = False
        parsed_cfg.flash_attention = False
        parsed_cfg.deepspeed = None
        parsed_cfg.fsdp = None

        do_merge_lora(cfg=parsed_cfg, cli_args=parsed_cli_args)
    [openaccess-ai-collective/axolotl] src/axolotl/cli/__init__.py
    def do_merge_lora(
        *,
        cfg: DictDefault,
        cli_args: TrainerCliArgs,
    ):
        model, tokenizer = load_model_and_tokenizer(cfg=cfg, cli_args=cli_args)
        safe_serialization = cfg.save_safetensors is True

        LOG.info("running merge of LoRA with base model")
        model = model.merge_and_unload(progressbar=True)
        try:
            model.to(dtype=cfg.torch_dtype)
        except RuntimeError:
            pass
        model.generation_config.do_sample = True

        if cfg.local_rank == 0:
            LOG.info(f"saving merged model to: {str(Path(cfg.output_dir) / 'merged')}")
            model.save_pretrained(
                str(Path(cfg.output_dir) / "merged"),
                safe_serialization=safe_serialization,
                progressbar=True,
            )
            tokenizer.save_pretrained(str(Path(cfg.output_dir) / "merged"))
    [openaccess-ai-collective/axolotl] src/axolotl/cli/merge_lora.py
    """ CLI to run merge a trained LoRA into a base model """
    [huggingface/peft] src/peft/tuners/lora/model.py
    def unload(self) -> torch.nn.Module:
        """
        Gets back the base model by removing all the lora modules without merging. This gives back the original base
        model.
        """
        return self._unload_and_optionally_merge(merge=False)
    [huggingface/peft] src/peft/tuners/lora/model.py
    def _unload_and_optionally_merge( self, merge=True, progressbar: bool = False, safe_merge: bool = False, adapter_names: Optional[list[str]] = None, ): if merge: self._check_merge_allowed() key_list = [key for key, _ in self.model.named_modules() if self.prefix not in key] desc = "Unloading " + ("and merging " if merge else "") + "model" for key in tqdm(key_list, disable=not progressbar, desc=desc): try: parent, target, target_name = _get_submodules(self.model, key) except AttributeError: continue with onload_layer(target): if hasattr(target, "base_layer"): if merge: target.merge(safe_merge=safe_merge, adapter_names=adapter_names) self._replace_module(parent, target_name, target.get_base_layer(), target) elif isinstance(target, ModulesToSaveWrapper): # save any additional trainable modules part of `modules_to_save` new_module = target.modules_to_save[target.active_adapter] if hasattr(new_module, "base_layer"): # check if the module is itself a tuner layer if merge: new_module.merge(safe_merge=safe_merge, adapter_names=adapter_names) new_module = new_module.get_base_layer() setattr(parent, target_name, new_module) return self.model
    [huggingface/peft] src/peft/tuners/lora/model.py
    def merge_and_unload( self, progressbar: bool = False, safe_merge: bool = False, adapter_names: Optional[list[str]] = None ) -> torch.nn.Module: r""" This method merges the LoRa layers into the base model. This is needed if someone wants to use the base model as a standalone model. Args: progressbar (`bool`): whether to show a progressbar indicating the unload and merge process safe_merge (`bool`): whether to activate the safe merging check to check if there is any potential Nan in the adapter weights adapter_names (`List[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. Example: ```py >>> from transformers import AutoModelForCausalLM >>> from peft import PeftModel >>> base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b") >>> peft_model_id = "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample" >>> model = PeftModel.from_pretrained(base_model, peft_model_id) >>> merged_model = model.merge_and_unload() ``` """ return self._unload_and_optionally_merge( progressbar=progressbar, safe_merge=safe_merge, adapter_names=adapter_names )
    [huggingface/peft] src/peft/tuners/lora/hqq.py
    def merge(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> None: """ Merge the active adapter weights into the base weights Args: safe_merge (`bool`, *optional*): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`list[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. """ adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return for active_adapter in adapter_names: if active_adapter not in self.lora_A.keys(): continue layer = self.get_base_layer() quant_config = {**copy.deepcopy(layer.quant_config), "offload_meta": layer.offload_meta} lora_data = self.get_delta_weight(active_adapter) output = layer.dequantize() if not self.use_dora[active_adapter]: w_data = output + lora_data else: # handle dora # since output already includes scaling, set it to 1 here weight_norm = self._get_weight_norm(output, lora_data, scaling=1).detach() # We need to cache weight_norm because it has to be based on the original weights. We # cannot calculate it on the fly based on the merged weights when unmerging because its a # different value self._cache_store(f"{active_adapter}-weight_norm", weight_norm) dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm w_data = dora_factor.view(-1, 1) * (output + lora_data) if safe_merge and not torch.isfinite(w_data).all(): raise ValueError( f"NaNs detected in the merged weights. The adapter {active_adapter} seems to be broken" ) new_hqq_layer = HQQLinear(None, quant_config, compute_dtype=layer.compute_dtype, device=layer.device) quant_config.pop("offload_meta", None) new_hqq_layer.quantize(w_data, **quant_config) self.base_layer = new_hqq_layer self.merged_adapters.append(active_adapter)
    [openaccess-ai-collective/axolotl] examples/falcon/config-7b-lora.yml
    base_model: tiiuae/falcon-7b trust_remote_code: true model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: true load_in_4bit: false gptq: false strict: false push_dataset_to_hub: datasets: - path: teknium/GPT4-LLM-Cleaned type: alpaca:chat dataset_prepared_path: val_set_size: 0.05 adapter: lora lora_model_dir: sequence_len: 2048 max_packed_sequence_len: lora_r: 16 lora_alpha: 32 lora_dropout: 0.0 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: output_dir: ./falcon-7b batch_size: 2 micro_batch_size: 1 num_epochs: 4 optimizer: adamw_bnb_8bit torchdistx_path: lr_scheduler: cosine learning_rate: 0.00003 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: true flash_attention: gptq_groupsize: gptq_model_v1: warmup_steps: 40 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: "<|endoftext|>" bos_token: "<|endoftext|>" eos_token: "<|endoftext|>"
    [openaccess-ai-collective/axolotl] src/axolotl/monkeypatch/relora.py
    def merge_and_save( model: peft.LoraModel, model_src: str, model_dst: str, reinit: bool = False, quantized: bool = False, cpu_offload: bool = False, actually_save: bool = True, ): modules = find_lora_modules(model) if not quantized: for module_name, target in modules.items(): active_adapter = target.active_adapter if isinstance(active_adapter, list): active_adapter = active_adapter[0] update = target.get_delta_weight(active_adapter).detach() target.weight.data += update if reinit: for adapter_name in target.lora_A: target.reset_lora_parameters(adapter_name, True) for adapter_name in target.lora_embedding_A: target.reset_lora_parameters(adapter_name, True) return os.makedirs(model_dst, exist_ok=True) shard_paths = sharded_paths(model_src, modules.keys()) out_shard_paths = {} unique_shards = list(set(shard_paths.values())) for shard_path in unique_shards: out_tensors = {} if shard_path.endswith(".safetensors"): in_tensors = st.load_file(str(Path(model_src) / shard_path)) else: in_tensors = torch.load(Path(model_src) / shard_path) if "state_dict" in in_tensors: in_tensors = in_tensors["state_dict"] for module_name, target in modules.items(): key = module_name + ".weight" if key not in shard_paths or shard_paths[key] != shard_path: continue orig_weight = in_tensors[key] old_dev = target.weight.device math_dev = "cpu" if cpu_offload else old_dev delta_weight = lora_delta_weight(target, math_dev) new_weight = orig_weight.to(math_dev) + delta_weight del delta_weight if actually_save: out_tensors[key] = new_weight.half().cpu() update_weights(target, new_weight, reinit=reinit, device=old_dev) if actually_save: out_shard_name = shard_path if out_shard_name.startswith("pytorch_model"): out_shard_name = ( out_shard_name.replace("pytorch_model", "model").rstrip(".bin") + ".safetensors" ) for module_name in in_tensors: if module_name not in out_tensors: out_tensors[module_name] = in_tensors[module_name].half() out_shard_paths[module_name] = out_shard_name shard_fn = str(Path(model_dst) / out_shard_name) LOG.info(f"saving tensors to {shard_fn}") st.save_file(out_tensors, shard_fn, metadata={"format": "pt"}) barrier() del in_tensors del out_tensors torch.cuda.empty_cache() if actually_save and len(unique_shards) > 1: with open( str(Path(model_dst, "model.safetensors.index.json")), "w", encoding="utf-8" ) as file: json.dump({"metadata": {}, "weight_map": out_shard_paths}, file)
    [openaccess-ai-collective/axolotl] examples/openllama-3b/lora.yml
    base_model: openlm-research/open_llama_3b_v2 model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: true load_in_4bit: false strict: false push_dataset_to_hub: datasets: - path: teknium/GPT4-LLM-Cleaned type: alpaca dataset_prepared_path: val_set_size: 0.02 adapter: lora lora_model_dir: sequence_len: 1024 sample_packing: true lora_r: 8 lora_alpha: 16 lora_dropout: 0.0 lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: output_dir: ./lora-out gradient_accumulation_steps: 1 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit torchdistx_path: lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: false fp16: true tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true gptq_groupsize: s2_attention: gptq_model_v1: warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.1 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>"
    [huggingface/peft] src/peft/tuners/lora/model.py
    whether to activate the safe merging check to check if there is any potential Nan in the adapter weights adapter_names (`List[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. Example: ```py >>> from transformers import AutoModelForCausalLM >>> from peft import PeftModel >>> base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b") >>> peft_model_id = "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample" >>> model = PeftModel.from_pretrained(base_model, peft_model_id) >>> merged_model = model.merge_and_unload() ``` """ return self._unload_and_optionally_merge( progressbar=progressbar, safe_merge=safe_merge, adapter_names=adapter_names ) def unload(self) -> torch.nn.Module: """ Gets back the base model by removing all the lora modules without merging. This gives back the original base model. """ return self._unload_and_optionally_merge(merge=False)
    [huggingface/peft] src/peft/tuners/lora/hqq.py
    def unmerge(self) -> None: """ This method unmerges all merged adapter layers from the base weights. """ if not self.merged: warnings.warn("Already unmerged. Nothing to do.") return while len(self.merged_adapters) > 0: active_adapter = self.merged_adapters.pop() if active_adapter not in self.lora_A.keys(): continue lora_data = self.get_delta_weight(active_adapter) layer = self.get_base_layer() quant_config = {**copy.deepcopy(layer.quant_config), "offload_meta": layer.offload_meta} output = layer.dequantize() if not self.use_dora[active_adapter]: w_data = output - lora_data else: weight_norm = self._cache_pop(f"{active_adapter}-weight_norm") dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm w_data = output.data / dora_factor.view(-1, 1) - lora_data new_hqq_layer = HQQLinear(None, quant_config, compute_dtype=layer.compute_dtype, device=layer.device) quant_config.pop("offload_meta", None) new_hqq_layer.quantize(w_data, **quant_config) self.base_layer = new_hqq_layer
    [openaccess-ai-collective/axolotl] examples/replit-3b/config-lora.yml
    base_model: replit/replit-code-v1-3b trust_remote_code: true load_in_8bit: false datasets: - path: vicgalle/alpaca-gpt4 type: alpaca dataset_prepared_path: val_set_size: 0.05 adapter: lora lora_model_dir: sequence_len: 2048 max_packed_sequence_len: lora_r: 8 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: - Wqkv - mlp_up - mlp_down lora_fan_in_fan_out: wandb_project: lora-replit wandb_entity: wandb_watch: wandb_name: wandb_log_model: output_dir: ./lora-replit batch_size: 8 micro_batch_size: 1 num_epochs: 4 optimizer: torchdistx_path: lr_scheduler: learning_rate: 0.00001 train_on_inputs: false group_by_length: false bf16: auto tf32: true gradient_checkpointing: early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: gptq_groupsize: gptq_model_v1: warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0 fsdp: fsdp_config: #special_tokens:
    [huggingface/peft] src/peft/tuners/lora/layer.py
    def merge(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> None: """ Merge the active adapter weights inside the base weights Args: safe_merge (`bool`, *optional*): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`list[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. """ adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return for active_adapter in adapter_names: if active_adapter in self.lora_A.keys(): base_layer = self.get_base_layer() if safe_merge: # Note that safe_merge will be slower than the normal merge # because of the copy operation. orig_weights = base_layer.weight.data.clone() delta_weight = self.get_delta_weight(active_adapter) if not self.use_dora[active_adapter]: orig_weights = orig_weights + delta_weight else: # handle dora # since delta_weight already includes scaling, set it to 1 here weight_norm = self._get_weight_norm(orig_weights, delta_weight, scaling=1).detach() # We need to cache weight_norm because it has to be based on the original weights. We # cannot calculate it on the fly based on the merged weights when unmerging because its a # different value self._cache_store(f"{active_adapter}-weight_norm", weight_norm) dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm orig_weights = dora_factor.view(-1, 1, 1, 1) * (orig_weights + delta_weight) if not torch.isfinite(orig_weights).all(): raise ValueError( f"NaNs detected in the merged weights. The adapter {active_adapter} seems to be broken" ) base_layer.weight.data = orig_weights else: delta_weight = self.get_delta_weight(active_adapter) if not self.use_dora[active_adapter]: base_layer.weight.data = base_layer.weight.data + delta_weight else: # handle dora # since delta_weight already includes scaling, set it to 1 here weight_norm = self._get_weight_norm(base_layer.weight, delta_weight, scaling=1).detach() # We need to cache weight_norm because it has to be based on the original weights. We # cannot calculate it on the fly based on the merged weights when unmerging because its a # different value self._cache_store(f"{active_adapter}-weight_norm", weight_norm) dora_factor = self.lora_magnitude_vector[active_adapter] / weight_norm new_weight = dora_factor.view(-1, 1, 1, 1) * (base_layer.weight.data + delta_weight) base_layer.weight.data = new_weight self.merged_adapters.append(active_adapter)
    [openaccess-ai-collective/axolotl] examples/code-llama/34b/lora.yml
    base_model: codellama/CodeLlama-34b-hf model_type: LlamaForCausalLM tokenizer_type: CodeLlamaTokenizer load_in_8bit: true load_in_4bit: false strict: false datasets: - path: mhenrichsen/alpaca_2k_test type: alpaca dataset_prepared_path: val_set_size: 0.05 output_dir: ./lora-out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>"
    [openaccess-ai-collective/axolotl] examples/dbrx/16bit-lora.yaml
    base_model: LnL-AI/dbrx-base-converted-v2 trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: tatsu-lab/alpaca type: alpaca dataset_prepared_path: last_run_prepared val_set_size: 0.0 output_dir: ./out sequence_len: 512 sample_packing: false pad_to_sequence_len: false wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: adapter: lora lora_model_dir: lora_r: 8 lora_alpha: 16 lora_dropout: 0.05 # w1, w2, & v1 will hang the trainer lora_target_modules: - q_proj # attn - k_proj # attn - v_proj # attn - out_proj # attn - layer # router # - w1 # - w2 # - v1 gradient_accumulation_steps: 1 micro_batch_size: 1 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: false # don't use with fsdp_activation_checkpointing gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: saves_per_epoch: 1 debug: weight_decay: 0.0 fsdp: - full_shard - auto_wrap fsdp_config: fsdp_limit_all_gathers: true fsdp_sync_module_states: true fsdp_offload_params: false fsdp_use_orig_params: false fsdp_cpu_ram_efficient_loading: true fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_transformer_layer_cls_to_wrap: DbrxBlock fsdp_state_dict_type: FULL_STATE_DICT fsdp_activation_checkpointing: true
    [openaccess-ai-collective/axolotl] examples/code-llama/13b/lora.yml
    base_model: codellama/CodeLlama-13b-hf model_type: LlamaForCausalLM tokenizer_type: CodeLlamaTokenizer load_in_8bit: true load_in_4bit: false strict: false datasets: - path: mhenrichsen/alpaca_2k_test type: alpaca dataset_prepared_path: val_set_size: 0.05 output_dir: ./lora-out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>"
    [huggingface/accelerate] src/accelerate/utils/offload.py
    def offload_state_dict(save_dir: Union[str, os.PathLike], state_dict: Dict[str, torch.Tensor]):
        """
        Offload a state dict in a given folder.

        Args:
            save_dir (`str` or `os.PathLike`): The directory in which to offload the state dict.
            state_dict (`Dict[str, torch.Tensor]`): The dictionary of tensors to offload.
        """
        os.makedirs(save_dir, exist_ok=True)
        index = {}
        for name, parameter in state_dict.items():
            index = offload_weight(parameter, name, save_dir, index=index)

        # Update index
        save_offload_index(index, save_dir)
    [huggingface/accelerate] src/accelerate/big_modeling.py
    def disk_offload(
        model: nn.Module,
        offload_dir: Union[str, os.PathLike],
        execution_device: Optional[torch.device] = None,
        offload_buffers: bool = False,
        preload_module_classes: Optional[List[str]] = None,
    ):
        """
        Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as
        memory-mapped array in a given folder. During the forward pass, parameters will be accessed from that folder and
        put on the execution device passed as they are needed, then offloaded again.

        Args:
            model (`torch.nn.Module`): The model to offload.
            offload_dir (`str` or `os.PathLike`):
                The folder in which to offload the model weights (or where the model weights are already offloaded).
            execution_device (`torch.device`, *optional*):
                The device on which the forward pass of the model will be executed (should be a GPU). Will default to the
                model's first parameter device.
            offload_buffers (`bool`, *optional*, defaults to `False`):
                Whether or not to offload the buffers with the model parameters.
            preload_module_classes (`List[str]`, *optional*):
                A list of classes whose instances should load all their weights (even in the submodules) at the beginning
                of the forward. This should only be used for classes that have submodules which are registered but not
                called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
                `dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
        """
        if not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")):
            offload_state_dict(offload_dir, model.state_dict())
        if execution_device is None:
            execution_device = next(iter(model.parameters())).device

        weights_map = OffloadedWeightsLoader(save_folder=offload_dir)
        add_hook_to_module(model, AlignDevicesHook(io_same_device=True), append=True)
        attach_align_device_hook(
            model,
            execution_device=execution_device,
            offload=True,
            offload_buffers=offload_buffers,
            weights_map=weights_map,
            preload_module_classes=preload_module_classes,
        )
        return model
    [huggingface/transformers] src/transformers/commands/lfs.py
    class LfsUploadCommand: def __init__(self, args): self.args = args def run(self): # Immediately after invoking a custom transfer process, git-lfs # sends initiation data to the process over stdin. # This tells the process useful information about the configuration. init_msg = json.loads(sys.stdin.readline().strip()) if not (init_msg.get("event") == "init" and init_msg.get("operation") == "upload"): write_msg({"error": {"code": 32, "message": "Wrong lfs init operation"}}) sys.exit(1) # The transfer process should use the information it needs from the # initiation structure, and also perform any one-off setup tasks it # needs to do. It should then respond on stdout with a simple empty # confirmation structure, as follows: write_msg({}) # After the initiation exchange, git-lfs will send any number of # transfer requests to the stdin of the transfer process, in a serial sequence. while True: msg = read_msg() if msg is None: # When all transfers have been processed, git-lfs will send # a terminate event to the stdin of the transfer process. # On receiving this message the transfer process should # clean up and terminate. No response is expected. sys.exit(0) oid = msg["oid"] filepath = msg["path"] completion_url = msg["action"]["href"] header = msg["action"]["header"] chunk_size = int(header.pop("chunk_size")) presigned_urls: List[str] = list(header.values()) parts = [] for i, presigned_url in enumerate(presigned_urls): with FileSlice(filepath, seek_from=i * chunk_size, read_limit=chunk_size) as data: r = requests.put(presigned_url, data=data) r.raise_for_status() parts.append( { "etag": r.headers.get("etag"), "partNumber": i + 1, } ) # In order to support progress reporting while data is uploading / downloading, # the transfer process should post messages to stdout write_msg( { "event": "progress", "oid": oid, "bytesSoFar": (i + 1) * chunk_size, "bytesSinceLast": chunk_size, } ) # Not precise but that's ok. r = requests.post( completion_url, json={ "oid": oid, "parts": parts, }, ) r.raise_for_status() write_msg({"event": "complete", "oid": oid})
    [huggingface/accelerate] src/accelerate/utils/deepspeed.py
    def set_stage_and_offload(self): # zero stage - this is done as early as possible, before model is created, to allow # ``is_deepspeed_zero3_enabled`` query and getting to the early deepspeed config object # during ``zero.Init()`` which needs to know the dtype, and some other hparams. self._stage = self.get_value("zero_optimization.stage", -1) # offload self._offload = False if self.is_zero2() or self.is_zero3(): offload_devices_valid = set(["cpu", "nvme"]) offload_devices = set( [ self.get_value("zero_optimization.offload_optimizer.device"), self.get_value("zero_optimization.offload_param.device"), ] ) if len(offload_devices & offload_devices_valid) > 0: self._offload = True
    [huggingface/transformers] src/transformers/commands/lfs.py
    LFS_MULTIPART_UPLOAD_COMMAND = "lfs-multipart-upload"
    [openaccess-ai-collective/axolotl] README.md

    Advanced Setup

    Environment

    Docker

    docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest

    Or run on the current files for development:

    docker compose up -d

    [!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide's section on Docker.

    <details> <summary>Docker advanced</summary>

    A more powerful Docker command to run would be this:

    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-latest

    It additionally:

    • Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args.
    • Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args.
    • The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
    • The --privileged flag gives all capabilities to the container.
    • The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.

    More information on nvidia website

    </details>

    Conda/Pip venv

    1. Install python >=3.10

    2. Install pytorch stable https://pytorch.org/get-started/locally/

    3. Install Axolotl along with python dependencies

      pip3 install packaging
      pip3 install -e '.[flash-attn,deepspeed]'
    4. (Optional) Login to Huggingface to use gated models/datasets.

      huggingface-cli login

      Get the token at huggingface.co/settings/tokens

    Cloud GPU

    For cloud GPU providers that support docker images, use winglian/axolotl-cloud:main-latest

    Bare Metal Cloud GPU

    LambdaLabs
    <details> <summary>Click to Expand</summary>
    1. Install python
       sudo apt update
       sudo apt install -y python3.10
       sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
       sudo update-alternatives --config python # pick 3.10 if given option
       python -V # should be 3.10
    2. Install pip
       wget https://bootstrap.pypa.io/get-pip.py
       python get-pip.py
    3. Install Pytorch https://pytorch.org/get-started/locally/
    4. Follow instructions on quickstart.
    5. Run
       pip3 install protobuf==3.20.3
       pip3 install -U --ignore-installed requests Pillow psutil scipy
    6. Set path
       export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
    </details>
    GCP
    <details> <summary>Click to Expand</summary>

    Use a Deeplearning linux OS with cuda and pytorch installed. Then follow instructions on quickstart.

    Make sure to run the below to uninstall xla.

    pip uninstall -y torch_xla[tpu]
    </details>

    Windows

    Please use WSL or Docker!

    Mac

    Use the below instead of the install method in QuickStart.

    pip3 install -e '.'
    

    More info: mac.md

    Google Colab

    Please use this example notebook.

    Launching on public clouds via SkyPilot

    To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:

    pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]"  # choose your clouds
    sky check

    Get the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:

    git clone https://github.com/skypilot-org/skypilot.git
    cd skypilot/llm/axolotl
    

    Use one command to launch:

    # On-demand
    HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN

    # Managed spot (auto-recovery on preemption)
    HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET

    Dataset

    Axolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.

    See these docs for more information on how to use different dataset formats.
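
    For illustration, a couple of alpaca-style records written to a JSONL file might look like the following minimal Python sketch (the instruction/input/output field names follow the alpaca prompt template and the records are made up; adjust both to the prompt format you configure):

    import json

    # Hypothetical example records in the alpaca format
    records = [
        {"instruction": "Translate to French.", "input": "Good morning", "output": "Bonjour"},
        {"instruction": "What is 2 + 2?", "input": "", "output": "4"},
    ]

    with open("data.jsonl", "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")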

    Config

    See examples for quick start. It is recommended to duplicate and modify to your needs. The most important options are:

    • model

      base_model: ./llama-7b-hf # local or huggingface repo

      Note: The code will load the right architecture.

    • dataset

      datasets:
        # huggingface repo
        - path: vicgalle/alpaca-gpt4
          type: alpaca

        # huggingface repo with specific configuration/subset
        - path: EleutherAI/pile
          name: enron_emails
          type: completion # format from earlier
          field: text # Optional[str] default: text, field to use for completion data

        # huggingface repo with multiple named configurations/subsets
        - path: bigcode/commitpackft
          name:
            - ruby
            - python
            - typescript
          type: ... # unimplemented custom format

        # fastchat conversation
        # See 'conversation' options: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
        - path: ...
          type: sharegpt
          conversation: chatml # default: vicuna_v1.1

        # local
        - path: data.jsonl # or json
          ds_type: json # see other options below
          type: alpaca

        # dataset with splits, but no train split
        - path: knowrohit07/know_sql
          type: context_qa.load_v2
          train_on_split: validation

        # loading from s3 or gcs
        # s3 creds will be loaded from the system default and gcs only supports public access
        - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.
          ...

        # Loading Data From a Public URL
        # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
        - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
          ds_type: json # this is the default, see other options below.
    • loading

      load_in_4bit: true
      load_in_8bit: true

      bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
      fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
      tf32: true # require >=ampere

      bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
      float16: true # use instead of fp16 when you don't want AMP

      Note: Repo does not do 4-bit quantization.

    • lora

      adapter: lora # 'qlora' or leave blank for full finetune
      lora_r: 8
      lora_alpha: 16
      lora_dropout: 0.05
      lora_target_modules:
        - q_proj
        - v_proj

    All Config Options

    See these docs for all config options.

    Train

    Run

    accelerate launch -m axolotl.cli.train your_config.yml

    [!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml

    Preprocess dataset

    You can optionally pre-tokenize the dataset with the following before finetuning. This is recommended for large datasets.

    • Set dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.
    • (Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
    • (Optional): Use --debug to see preprocessed examples.
    python -m axolotl.cli.preprocess your_config.yml

    Multi-GPU

    Below are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is the recommended multi-GPU option currently because FSDP may experience loss instability.

    DeepSpeed

    Deepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU's VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated

    We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.

    deepspeed: deepspeed_configs/zero1.json
    accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
    FSDP
    • llama FSDP
    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    FSDP + QLoRA

    Axolotl supports training with FSDP and QLoRA, see these docs for more information.

    Weights & Biases Logging

    Make sure your WANDB_API_KEY environment variable is set (recommended), or log in to wandb with wandb login.

    • wandb options
    wandb_mode:
    wandb_project:
    wandb_entity:
    wandb_watch:
    wandb_name:
    wandb_log_model:
    Special Tokens

    It is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer's vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:

    special_tokens:
      bos_token: "<s>"
      eos_token: "</s>"
      unk_token: "<unk>"
    tokens: # these are delimiters
      - "<|im_start|>"
      - "<|im_end|>"

    When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer's vocabulary.
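
    Under the hood, this roughly corresponds to extending the tokenizer and resizing the model's embedding matrix, as in the following sketch that uses the transformers API directly (the model name is a placeholder):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("your-base-model")
    model = AutoModelForCausalLM.from_pretrained("your-base-model")

    # register the special tokens (bos/eos/unk) ...
    tokenizer.add_special_tokens({"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>"})
    # ... and the extra delimiter tokens
    tokenizer.add_tokens(["<|im_start|>", "<|im_end|>"])

    # the embedding matrix must grow to cover the new vocabulary entries
    model.resize_token_embeddings(len(tokenizer))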

    Inference Playground

    Axolotl allows you to load your model in an interactive terminal playground for quick experimentation. The config file is the same config file used for training.

    Pass the appropriate flag to the inference command, depending upon what kind of model was trained:

    • Pretrained LORA:
      python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
    • Full weights finetune:
      python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
    • Full weights finetune w/ a prompt from a text file:
      cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
        --base_model="./completed-model" --prompter=None --load_in_8bit=True

    • With gradio hosting:

    python -m axolotl.cli.inference examples/your_config.yml --gradio

    Please use --sample_packing False if you have it on and receive an error similar to the one below:

    RuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1

    Merge LORA to base

    The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.

    python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"

    You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with

    CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...

    although this will be very slow; using the config options above is recommended instead.
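
    For reference, a CPU-only merge amounts to roughly the following peft/transformers sketch, which mirrors what axolotl's do_merge_lora (see the search results above) does; all paths and the dtype are placeholders:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Load the base model on CPU so the merge does not touch GPU memory
    base = AutoModelForCausalLM.from_pretrained(
        "path/to/base-model", torch_dtype=torch.float16, device_map={"": "cpu"}
    )
    tokenizer = AutoTokenizer.from_pretrained("path/to/base-model")

    # Attach the trained LoRA adapter and fold it into the base weights
    model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
    merged = model.merge_and_unload(progressbar=True)

    # Save the merged weights and the tokenizer
    merged.save_pretrained("path/to/merged", safe_serialization=True)
    tokenizer.save_pretrained("path/to/merged")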

    [huggingface/transformers] src/transformers/commands/lfs.py
    class LfsEnableCommand: def __init__(self, args): self.args = args def run(self): warnings.warn( "Managing repositories through transformers-cli is deprecated. Please use `huggingface-cli` instead." ) local_path = os.path.abspath(self.args.path) if not os.path.isdir(local_path): print("This does not look like a valid git repo.") exit(1) subprocess.run( "git config lfs.customtransfer.multipart.path transformers-cli".split(), check=True, cwd=local_path ) subprocess.run( f"git config lfs.customtransfer.multipart.args {LFS_MULTIPART_UPLOAD_COMMAND}".split(), check=True, cwd=local_path, ) print("Local repo set up for largefiles")
    [huggingface/transformers] src/transformers/commands/lfs.py
    class LfsCommands(BaseTransformersCLICommand): """ Implementation of a custom transfer agent for the transfer type "multipart" for git-lfs. This lets users upload large files >5GB 🔥. Spec for LFS custom transfer agent is: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md This introduces two commands to the CLI: 1. $ transformers-cli lfs-enable-largefiles This should be executed once for each model repo that contains a model file >5GB. It's documented in the error message you get if you just try to git push a 5GB file without having enabled it before. 2. $ transformers-cli lfs-multipart-upload This command is called by lfs directly and is not meant to be called by the user. """ @staticmethod def register_subcommand(parser: ArgumentParser): enable_parser = parser.add_parser( "lfs-enable-largefiles", help=( "Deprecated: use `huggingface-cli` instead. Configure your repository to enable upload of files > 5GB." ), ) enable_parser.add_argument("path", type=str, help="Local path to repository you want to configure.") enable_parser.set_defaults(func=lambda args: LfsEnableCommand(args)) upload_parser = parser.add_parser( LFS_MULTIPART_UPLOAD_COMMAND, help=( "Deprecated: use `huggingface-cli` instead. " "Command will get called by git-lfs, do not call it directly." ), ) upload_parser.set_defaults(func=lambda args: LfsUploadCommand(args))
    [huggingface/transformers] src/transformers/models/llava/modeling_llava.py
    logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "LlavaConfig"
    [huggingface/accelerate] src/accelerate/hooks.py
    def post_forward(self, module, output): if self.offload: for name, _ in named_module_tensors( module, include_buffers=self.offload_buffers, recurse=self.place_submodules, remove_non_persistent=True, ): set_module_tensor_to_device(module, name, "meta") if type(module).__name__ == "Linear8bitLt": module.state.SCB = None module.state.CxB = None # We may have loaded tied weights into self.tied_params_map (avoiding to load them several times in e.g. submodules): remove them from # this dictionary to allow the garbage collector to do its job. for value_pointer, device in self.tied_pointers_to_remove: del self.tied_params_map[value_pointer][device] self.tied_pointers_to_remove = set() if self.io_same_device and self.input_device is not None: output = send_to_device(output, self.input_device, skip_keys=self.skip_keys) return output
    [huggingface/accelerate] src/accelerate/utils/offload.py
    def save_offload_index(index, offload_folder):
        if index is None or len(index) == 0:
            # Nothing to save
            return

        offload_index_file = os.path.join(offload_folder, "index.json")
        if os.path.isfile(offload_index_file):
            with open(offload_index_file, encoding="utf-8") as f:
                current_index = json.load(f)
        else:
            current_index = {}
        current_index.update(index)

        with open(offload_index_file, "w", encoding="utf-8") as f:
            json.dump(current_index, f, indent=2)
    [openaccess-ai-collective/axolotl] README.md

    Quickstart ⚡

    Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.

    Requirements: Python >=3.10 and Pytorch >=2.1.1.

    git clone https://github.com/OpenAccess-AI-Collective/axolotl
    cd axolotl

    pip3 install packaging ninja
    pip3 install -e '.[flash-attn,deepspeed]'

    Usage

    # preprocess datasets - optional but recommended
    CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml

    # finetune lora
    accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml

    # inference
    accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
        --lora_model_dir="./lora-out"

    # gradio
    accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
        --lora_model_dir="./lora-out" --gradio

    # remote yaml files - the yaml config can be hosted on a public URL
    # Note: the yaml config must directly link to the **raw** yaml
    accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/examples/openllama-3b/lora.yml

    [huggingface/accelerate] src/accelerate/utils/deepspeed.py
    def is_offload(self):
        return self._offload
    [huggingface/accelerate] tests/test_offload.py
    def test_offload_state_dict(self):
        model = ModelForTest()
        with TemporaryDirectory() as tmp_dir:
            offload_state_dict(tmp_dir, model.state_dict())
            index_file = os.path.join(tmp_dir, "index.json")
            assert os.path.isfile(index_file)
            # TODO: add tests on what is inside the index

            for key in ["linear1.weight", "linear1.bias", "linear2.weight", "linear2.bias"]:
                weight_file = os.path.join(tmp_dir, f"{key}.dat")
                assert os.path.isfile(weight_file)
            # TODO: add tests on the fact weights are properly loaded
    [openaccess-ai-collective/axolotl] examples/llama-2/README.md

    Overview

    This is an example of a llama-2 configuration for 7b and 13b. The yaml file contains the configuration for the 7b variant, but you can just as well use the same settings for 13b.

    The 7b variant fits on any 24GB VRAM GPU and will take up about 17 GB of VRAM during training if using qlora and 20 GB if using lora. On a RTX 4090 it trains 3 epochs of the default dataset in about 15 minutes.

    The 13b variant will fit if you change these settings to the following values: gradient_accumulation_steps: 2 and micro_batch_size: 1.

    accelerate launch -m axolotl.cli.train examples/llama-2/qlora.yml

    or

    accelerate launch -m axolotl.cli.train examples/llama-2/lora.yml

    To launch a full finetuning with 16-bit precision:

    accelerate launch -m axolotl.cli.train examples/llama-2/fft_optimized.yml
    [openaccess-ai-collective/axolotl] examples/code-llama/README.md

    Overview

    This is an example of CodeLLaMA configuration for 7b, 13b and 34b.

    The 7b variant fits on any 24GB VRAM GPU and will take up about 17 GB of VRAM during training if using qlora and 20 GB if using lora. On a RTX 4090 it trains 3 epochs of the default dataset in about 15 minutes.

    The 13b variant will fit if you change these settings to the following values: gradient_accumulation_steps: 2 and micro_batch_size: 1.

    The 34b variant does not fit on 24GB of VRAM - you will need something with 40+ GB of VRAM that also supports flash attention v2 - A6000 or A100 are good choices.

    accelerate launch scripts/finetune.py examples/code-llama/[MODEL_SIZE]/qlora.yml

    or

    accelerate launch scripts/finetune.py examples/code-llama/[MODEL_SIZE]/lora.yml
    [openaccess-ai-collective/axolotl] docs/debugging.qmd
    ---
    title: Debugging
    description: How to debug Axolotl
    ---
    
    
    This document provides some tips and tricks for debugging Axolotl.  It also provides an example configuration for debugging with VSCode.  A good debugging setup is essential to understanding how Axolotl code works behind the scenes.
    
    ## Table of Contents
    
    - [General Tips](#general-tips)
    - [Debugging with VSCode](#debugging-with-vscode)
        - [Background](#background)
        - [Configuration](#configuration)
        - [Customizing your debugger](#customizing-your-debugger)
        - [Video Tutorial](#video-tutorial)
    - [Debugging With Docker](#debugging-with-docker)
        - [Setup](#setup)
        - [Attach To Container](#attach-to-container)
        - [Video - Attaching To Docker On Remote Host](#video---attaching-to-docker-on-remote-host)
    
    ## General Tips
    
    While debugging it's helpful to simplify your test scenario as much as possible.  Here are some tips for doing so:
    
    > [!Important]
    > All of these tips are incorporated into the [example configuration](#configuration) for debugging with VSCode below.
    
    1. **Make sure you are using the latest version of axolotl**:  This project changes often and bugs get fixed fast.  Check your git branch and make sure you have pulled the latest changes from `main`.
    1. **Eliminate concurrency**: Restrict the number of processes to 1 for both training and data preprocessing:
        - Set `CUDA_VISIBLE_DEVICES` to a single GPU, ex: `export CUDA_VISIBLE_DEVICES=0`.
        - Set `dataset_processes: 1` in your axolotl config or run the training command with `--dataset_processes=1`.
    2. **Use a small dataset**: Construct or use a small dataset from HF Hub. When using a small dataset, you will often have to make sure `sample_packing: False` and `eval_sample_packing: False` to avoid errors.  If you are in a pinch and don't have time to construct a small dataset but want to use from the HF Hub, you can shard the data (this will still tokenize the entire dataset, but will only use a fraction of the data for training.  For example, to shard the dataset into 20 pieces, add the following to your axolotl config):
        ```yaml
        dataset:
            ...
            shards: 20
        ```
    3. **Use a small model**: A good example of a small model is [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
    4. **Minimize iteration time**: Make sure the training loop finishes as fast as possible, with these settings.
        - `micro_batch_size: 1`
        - `max_steps: 1`
        - `val_set_size: 0`
    5. **Clear Caches:** Axolotl caches certain steps and so does the underlying HuggingFace trainer.  You may want to clear some of these caches when debugging.
        - Data preprocessing: When debugging data preprocessing, which includes prompt template formation, you may want to delete the directory set in `dataset_prepared_path:` in your axolotl config.  If you didn't set this value, the default is `last_run_prepared`.
        - HF Hub: If you are debugging data preprocessing, you should clear the relevant HF cache [HuggingFace cache](https://huggingface.co/docs/datasets/cache), by deleting the appropriate `~/.cache/huggingface/datasets/...` folder(s).
        - **The recommended approach is to redirect all outputs and caches to a temporary folder and delete selected subfolders before each run.  This is demonstrated in the example configuration below.**
    
    
    ## Debugging with VSCode
    
    ### Background
    
    The below example shows how to configure VSCode to debug data preprocessing of the `sharegpt` format.  This is the format used when you have the following in your axolotl config:
    
    ```yaml
    datasets:
      - path: <path to your sharegpt formatted dataset> # example on HF Hub: philschmid/guanaco-sharegpt-style
        type: sharegpt
    ```

    > [!Important]
    > If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files `.vscode/launch.json` and `.vscode/tasks.json` for an example configuration.

    > [!Tip]
    > If you prefer to watch a video rather than read, you can skip to the video tutorial below (but doing both is recommended).

    ### Setup

    Make sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:

    ```bash
    pip3 install packaging
    pip3 install -e '.[flash-attn,deepspeed]'
    ```

    #### Remote Hosts

    If you are developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow the Remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.

    ### Configuration

    The easiest way to get started is to modify the `.vscode/launch.json` file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.

    For example, to mimic the command `cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_sharegpt.yml`, you would use the below configuration[^1]. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to `devtools` and set the env variable `HF_HOME` to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.

    ```jsonc
    // .vscode/launch.json
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Debug axolotl prompt - sharegpt",
                "type": "python",
                "module": "accelerate.commands.launch",
                "request": "launch",
                "args": [
                    "-m", "axolotl.cli.train", "dev_sharegpt.yml",
                    // The flags below simplify debugging by overriding the axolotl config
                    // with the debugging tips above.  Modify as needed.
                    "--dataset_processes=1",        // limits data preprocessing to one process
                    "--max_steps=1",                // limits training to just one step
                    "--batch_size=1",               // minimizes batch size
                    "--micro_batch_size=1",         // minimizes batch size
                    "--val_set_size=0",             // disables validation
                    "--sample_packing=False",       // disables sample packing which is necessary for small datasets
                    "--eval_sample_packing=False",  // disables sample packing on eval set
                    "--dataset_prepared_path=temp_debug/axolotl_outputs/data",  // send data outputs to a temp folder
                    "--output_dir=temp_debug/axolotl_outputs/model"             // send model outputs to a temp folder
                ],
                "console": "integratedTerminal",       // show output in the integrated terminal
                "cwd": "${workspaceFolder}/devtools",  // set working directory to devtools from the root of the project
                "justMyCode": true,                    // step through only axolotl code
                "env": {
                    "CUDA_VISIBLE_DEVICES": "0",  // since we aren't doing distributed training, we need to limit to one GPU
                    "HF_HOME": "${workspaceFolder}/devtools/temp_debug/.hf-cache"  // send HF cache to a temp folder
                },
                "preLaunchTask": "cleanup-for-dataprep"  // delete temp folders (see below)
            }
        ]
    }
    ```

    Additional notes about this configuration:

    - The argument `justMyCode` is set to `true` so that you step through only the axolotl code. If you want to step into dependencies, set this to `false`.
    - The `preLaunchTask`: `cleanup-for-dataprep` is defined in `.vscode/tasks.json` and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:
      - `./devtools/temp_debug/axolotl_outputs`
      - `./devtools/temp_debug/.hf-cache/datasets`

    > [!Tip]
    > You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the `tasks.json` file depending on your use case.

    Below is the `.vscode/tasks.json` file that defines the `cleanup-for-dataprep` task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task, `cleanup-for-dataprep`, is a composite task that combines them. A composite task is necessary because VSCode does not allow you to specify multiple tasks in the `preLaunchTask` argument of the `launch.json` file.

    ```jsonc
    // .vscode/tasks.json
    // this file is used by launch.json
    {
        "version": "2.0.0",
        "tasks": [
            // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder
            {
                "label": "delete-outputs",
                "type": "shell",
                "command": "rm -rf temp_debug/axolotl_outputs",
                "options": { "cwd": "${workspaceFolder}/devtools" },
                "problemMatcher": []
            },
            // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder
            {
                "label": "delete-temp-hf-dataset-cache",
                "type": "shell",
                "command": "rm -rf temp_debug/.hf-cache/datasets",
                "options": { "cwd": "${workspaceFolder}/devtools" },
                "problemMatcher": []
            },
            // this task combines the two tasks above
            {
                "label": "cleanup-for-dataprep",
                "dependsOn": ["delete-outputs", "delete-temp-hf-dataset-cache"]
            }
        ]
    }
    ```
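
    If you want to sanity-check the same run outside of the debugger, the commands below are a rough shell equivalent of the launch configuration above. This is a sketch under the assumptions that you run it from the root of the project and use the same `dev_sharegpt.yml` config and temp folders:

    ```bash
    # Rough shell equivalent of the launch.json configuration above (sketch)
    cd devtools
    # mimic the cleanup-for-dataprep preLaunchTask
    rm -rf temp_debug/axolotl_outputs temp_debug/.hf-cache/datasets
    CUDA_VISIBLE_DEVICES=0 HF_HOME="${PWD}/temp_debug/.hf-cache" \
      accelerate launch -m axolotl.cli.train dev_sharegpt.yml \
      --dataset_processes=1 --max_steps=1 --batch_size=1 --micro_batch_size=1 \
      --val_set_size=0 --sample_packing=False --eval_sample_packing=False \
      --dataset_prepared_path=temp_debug/axolotl_outputs/data \
      --output_dir=temp_debug/axolotl_outputs/model
    ```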

    ### Customizing your debugger

    Your debugging use case may differ from the example above. The easiest thing to do is to put your own axolotl config in the `devtools` folder and modify the `launch.json` file to use your config. You may also want to modify the `preLaunchTask` to delete different folders or not delete anything at all.
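
    For example, a customized launch configuration might look roughly like the sketch below. Here `my_debug_config.yml` is a hypothetical config you have placed in `devtools`; the override flags and cleanup task from the example above are omitted on purpose:

    ```jsonc
    // Sketch of a customized .vscode/launch.json ("my_debug_config.yml" is a hypothetical file name)
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Debug axolotl - my config",
                "type": "python",
                "module": "accelerate.commands.launch",
                "request": "launch",
                "args": ["-m", "axolotl.cli.train", "my_debug_config.yml"],
                "console": "integratedTerminal",
                "cwd": "${workspaceFolder}/devtools",
                "justMyCode": false,                 // step into dependencies if you need to
                "env": {"CUDA_VISIBLE_DEVICES": "0"}
                // omit "preLaunchTask" entirely if you don't want anything deleted before the run
            }
        ]
    }
    ```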

    ### Video Tutorial

    The following video tutorial walks through the above configuration and demonstrates how to debug with VSCode (click the image below to watch):

    <div style="text-align: center; line-height: 0;">

    <a href="https://youtu.be/xUUB11yeMmc" target="_blank" title="How to debug Axolotl (for fine tuning LLMs)"><img src="https://i.ytimg.com/vi/xUUB11yeMmc/maxresdefault.jpg" style="border-radius: 10px; display: block; margin: auto;" width="560" height="315" /></a>

    <figcaption style="font-size: smaller;"><a href="https://hamel.dev">Hamel Husain's</a> tutorial: <a href="https://www.youtube.com/watch?v=xUUB11yeMmc">Debugging Axolotl w/VSCode</a></figcaption> </div> <br>

    ## Debugging With Docker

    Using official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.

    ### Setup

    On the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:

    ```bash
    git clone https://github.com/OpenAccess-AI-Collective/axolotl
    cd axolotl
    ```

    > [!Tip]
    > If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.

    Next, run the desired docker image and mount the current directory. Below is a docker command you can run to do this[^2]:

    ```bash
    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
      --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
      --mount type=bind,src="${PWD}",target=/workspace/axolotl \
      -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
      winglian/axolotl:main-py3.10-cu118-2.0.1
    ```

    > [!Tip]
    > To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl's Docker CI builds.

    You will now be in the container. Next, perform an editable install of Axolotl:

    ```bash
    pip3 install packaging
    pip3 install -e '.[flash-attn,deepspeed]'
    ```

    ### Attach To Container

    Next, if you are using a remote host, connect to it with VSCode using the Remote - SSH extension. If you are working on a local host, you can skip this step.

    Next, select `Dev Containers: Attach to Running Container...` using the command palette (`CMD + SHIFT + P`) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. Any changes you make to the code will be reflected both in the container and on the host.

    Now you are ready to debug as described above (see [Debugging with VSCode](#debugging-with-vscode)).

    ### Video - Attaching To Docker On Remote Host

    Here is a short video that demonstrates how to attach to a Docker container on a remote host:

    <div style="text-align: center; line-height: 0;">

    <a href="https://youtu.be/0AuoR7QnHR0" target="_blank" title="Debugging Axolotl Part 2: Attaching to Docker on a Remote Host"><img src="https://i.ytimg.com/vi/0AuoR7QnHR0/hqdefault.jpg" style="border-radius: 10px; display: block; margin: auto;" width="560" height="315" /></a>

    <figcaption style="font-size: smaller;"><a href="https://hamel.dev">Hamel Husain's</a> tutorial: <a href="https://youtu.be/0AuoR7QnHR0">Debugging Axolotl Part 2: Attaching to Docker on a Remote Host </a></figcaption> </div> <br>

    [^1]: The config actually mimics the command `CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/sharegpt.yml`, but this is the same thing.

    [^2]: Many of these flags are recommended best practices by Nvidia when using `nvidia-container-toolkit`. You can read more about these flags here.

    [openaccess-ai-collective/axolotl] docs/fsdp_qlora.qmd
    ---
    title: "FDSP + QLoRA"
    description: Use FSDP with QLoRA to fine-tune large LLMs on consumer GPUs.
    format:
      html:
        toc: true
    ---
    
    ## Background
    
    Using FSDP with QLoRA is essential for **fine-tuning larger (70b+ parameter) LLMs on consumer GPUs.**  For example, you can use FSDP + QLoRA to train a 70b model on two 24GB GPUs[^1].
    
    Below, we describe how to use this feature in Axolotl.
    
    ## Usage
    
    To enable `QLoRA` with `FSDP`, you need to perform the following steps:
    
    > [!Tip]
    > See the [example config](#example-config) file in addition to reading these instructions.
    
    1. Set `adapter: qlora` in your axolotl config file.
    2. Enable FSDP in your axolotl config, as [described here](https://github.com/OpenAccess-AI-Collective/axolotl?tab=readme-ov-file#fsdp).
    3. Use one of the supported model types: `llama`, `mistral` or `mixtral`.
    
    ## Example Config
    
    [examples/llama-2/qlora-fsdp.yml](../examples/llama-2/qlora-fsdp.yml) contains an example of how to enable QLoRA + FSDP in axolotl.
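
    For orientation, the fragment below sketches only the QLoRA- and FSDP-related keys implied by the steps above. The base model and the FSDP values are illustrative assumptions; defer to the linked example config for the authoritative settings.
    
    ```yaml
    # Sketch of the QLoRA + FSDP pieces of an axolotl config (illustrative values;
    # see examples/llama-2/qlora-fsdp.yml for the authoritative version)
    base_model: NousResearch/Llama-2-70b-hf   # assumed llama-family 70b base model
    adapter: qlora                            # step 1: use a QLoRA adapter
    load_in_4bit: true
    
    # step 2: enable FSDP (keys as documented in the axolotl README)
    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    ```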
    
    ## References
    
    - [PR #1378](https://github.com/OpenAccess-AI-Collective/axolotl/pull/1378) enabling QLoRA in FSDP in Axolotl.
    - [Blog Post](https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html) from the [Answer.AI](https://www.answer.ai/) team describing the work that enabled QLoRA in FSDP.
    - Related HuggingFace PRs Enabling FSDP + QLoRA:
        - Accelerate [PR#2544](https://github.com/huggingface/accelerate/pull/2544)
        - Transformers [PR#29587](https://github.com/huggingface/transformers/pull/29587)
        - TRL [PR#1416](https://github.com/huggingface/trl/pull/1416)
        - PEFT [PR#1550](https://github.com/huggingface/peft/pull/1550)
    
    
    
    
    [^1]: This was enabled by [this work](https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html) from the Answer.AI team.
    
    