Phorm.ai
OpenAccess-AI-Collective/axolotl

    how to use huggingface-cli login with jupyter notebook

    To use huggingface-cli login within a Jupyter Notebook, you can execute shell commands directly from the notebook cells by prefixing the command with an exclamation mark (!). However, since huggingface-cli login prompts for the user's token interactively, you'll need to use an alternative approach to provide the token, as Jupyter Notebook cells do not support interactive input in the same way a terminal does.
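    (If your huggingface_hub version is recent enough, the CLI itself also accepts the token non-interactively via a --token flag, for example in the cell below; the downside is that the token then appears in the shell command. The token value is a placeholder.)

    !huggingface-cli login --token hf_YOUR_TOKEN_HERE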

    One way to handle this is by setting the Hugging Face token as an environment variable directly in the notebook or by using the notebook_login function from the huggingface_hub library, which is designed for use in Jupyter Notebooks and similar environments.

    Here's how you can do it using both methods:

    Method 1: Using notebook_login function

    First, ensure you have huggingface_hub installed. If not, you can install it using pip:

    !pip install huggingface_hub

    Then, use the notebook_login function:

    from huggingface_hub import notebook_login

    notebook_login()

    After executing the cell, you will be prompted to enter your Hugging Face token in the generated input field. This method securely handles the token without exposing it in the notebook.
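    If the login widget does not render in your environment (for example, in some remote or minimal notebook setups), a programmatic fallback is the login() function combined with getpass; a minimal sketch:

    from getpass import getpass
    from huggingface_hub import login

    # Prompts for the token without echoing it or storing it in the notebook source
    login(token=getpass("Hugging Face token: "))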

    Method 2: Setting the Token as an Environment Variable

    Alternatively, you can set the token as an environment variable directly in the notebook. This method is less secure since it involves placing your token in the notebook, albeit temporarily.

    First, obtain your Hugging Face token from the Hugging Face website (you can find it in your account settings under Access Tokens).

    Then, set the token as an environment variable in a notebook cell:

    import os

    # Replace YOUR_TOKEN_HERE with your actual Hugging Face token.
    # Recent versions of huggingface_hub read HF_TOKEN; older versions use HUGGING_FACE_HUB_TOKEN.
    os.environ["HF_TOKEN"] = "YOUR_TOKEN_HERE"

    Important: Be cautious with this method, especially if you're working in a shared environment or planning to share your notebook. It's recommended to clear the cell output or remove the token from the notebook once the login is no longer needed.
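    One way to reduce the exposure is to read the token at runtime and drop it when you are done; a sketch:

    import os
    from getpass import getpass

    os.environ["HF_TOKEN"] = getpass("Hugging Face token: ")  # masked prompt, token never stored in the cell

    # ... download models, push to the Hub, etc. ...

    del os.environ["HF_TOKEN"]  # remove the token from the environment once it is no longer needed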

    Both methods will authenticate your session, allowing you to interact with the Hugging Face Hub, download models, and perform other actions that require authentication.
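    Whichever method you use, you can verify that authentication worked before starting a download or training run; a small check using huggingface_hub:

    from huggingface_hub import HfApi

    # Raises an error if no valid token is found; otherwise prints the account name
    print(HfApi().whoami()["name"])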

    Search results
    [huggingface/transformers] notebooks/README.md
    <!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/transformers] src/transformers/commands/user.py
    def run(self): print( ANSI.red( "ERROR! `huggingface-cli login` uses an outdated login mechanism " "that is not compatible with the Hugging Face Hub backend anymore. " "Please use `huggingface-cli login` instead." ) )
    [huggingface/transformers] docs/README.md
    <!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/accelerate] examples/README.md
    <!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/transformers] src/transformers/commands/user.py
    class LoginCommand(BaseUserCommand): def run(self): print( ANSI.red( "ERROR! `huggingface-cli login` uses an outdated login mechanism " "that is not compatible with the Hugging Face Hub backend anymore. " "Please use `huggingface-cli login` instead." ) )
    [huggingface/transformers] examples/README.md
    <!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/peft] docs/README.md
    <!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/accelerate] docs/README.md
    <!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/peft] examples/lora_dreambooth/colab_notebook.ipynb

    !git clone https://huggingface.co/spaces/smangrul/peft-lora-sd-dreambooth

    %cd "peft-lora-sd-dreambooth" !pip install -r requirements.txt

    !python colab.py

    [huggingface/transformers] examples/research_projects/README.md
    <!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/transformers] examples/legacy/README.md
    <!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/transformers] examples/research_projects/robust-speech-event/README.md

    Setting up an AI notebook

    1. Go to the Public Cloud page and select Project Management -> Users & Roles from the menu on the left.
    2. Click + Add user. Write a user description (e.g. AI Trainer), and select an AI Training Operator user role. Click Confirm.
    3. Write down the username and password (at the top of the screen) somewhere. They will be needed during step 7.
    4. Select AI & Machine Learning -> AI Training from the menu on the left. Click + Launch a new job on the AI Training page.
    5. On the Launch a new job page:
      • In 1. Choose a region select a region closest to you.
      • In 2. Enter the Docker image select Custom image -> baaastijn/ovh_huggingface.
      • You can skip steps 3. and 4. if you will be using the Hugging Face Hub to store the models after training.
      • In 5. Configure your job select 1 GPU.
      • Validate the info and Create the job.
    6. On the AI Training Jobs screen wait until the job's status changes from Pending to Running.
    7. Click HTTP Access from the Job's details page and log in with the AI training user you've created earlier. Once logged in, you can close the page and click HTTP Access to launch a JupyterLab notebook.
    8. Awesome, now you have a free GPU-enabled Jupyter instance!

    Note: If you're an experienced Docker user, feel free to create a custom docker image with all of the needed packages like the one in step 5. The Dockerfile for it is available here: baaastijn/Dockerimages. Once you've built your image, push it to https://hub.docker.com/ and select it during the OVHcloud job creation.

    For more quick tutorials about OVHcloud AI products, check out the showcase https://vimeo.com/showcase/8903300

    [huggingface/transformers] src/transformers/commands/user.py
    def register_subcommand(parser: ArgumentParser): login_parser = parser.add_parser("login", help="Log in using the same credentials as on huggingface.co") login_parser.set_defaults(func=lambda args: LoginCommand(args)) whoami_parser = parser.add_parser("whoami", help="Find out which huggingface.co account you are logged in as.") whoami_parser.set_defaults(func=lambda args: WhoamiCommand(args)) logout_parser = parser.add_parser("logout", help="Log out") logout_parser.set_defaults(func=lambda args: LogoutCommand(args)) # new system: git-based repo system repo_parser = parser.add_parser( "repo", help="Deprecated: use `huggingface-cli` instead. Commands to interact with your huggingface.co repos.", ) repo_subparsers = repo_parser.add_subparsers( help="Deprecated: use `huggingface-cli` instead. huggingface.co repos related commands" ) repo_create_parser = repo_subparsers.add_parser( "create", help="Deprecated: use `huggingface-cli` instead. Create a new repo on huggingface.co" ) repo_create_parser.add_argument( "name", type=str, help="Name for your model's repo. Will be namespaced under your username to build the model id.", ) repo_create_parser.add_argument("--organization", type=str, help="Optional: organization namespace.") repo_create_parser.add_argument("-y", "--yes", action="store_true", help="Optional: answer Yes to the prompt") repo_create_parser.set_defaults(func=lambda args: RepoCreateCommand(args))
    [huggingface/accelerate] docker/README.md
    <!--- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/accelerate] CONTRIBUTING.md
    <!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
    [huggingface/accelerate] docs/source/basic_tutorials/notebook.md
    <!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
    [huggingface/accelerate] README.md
    <!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <br> <img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/accelerate_logo.png" width="400"/> <br> <p> <p align="center"> <!-- Uncomment when CircleCI is set up <a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a> --> <a href="https://github.com/huggingface/accelerate/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue"></a> <a href="https://huggingface.co/docs/accelerate/index.html"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online"></a> <a href="https://github.com/huggingface/accelerate/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg"></a> <a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a> </p> <h3 align="center"> <p>Run your *raw* PyTorch training script on any kind of device </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/course_banner.png"></a> </h3>
    [huggingface/accelerate] src/accelerate/commands/launch.py
    def sagemaker_launcher(sagemaker_config: SageMakerConfig, args): if not is_sagemaker_available(): raise ImportError( "Please install sagemaker to be able to launch training on Amazon SageMaker with `pip install accelerate[sagemaker]`" ) if args.module or args.no_python: raise ValueError( "SageMaker requires a python training script file and cannot be used with --module or --no_python" ) from sagemaker.huggingface import HuggingFace args, sagemaker_inputs = prepare_sagemager_args_inputs(sagemaker_config, args) huggingface_estimator = HuggingFace(**args) huggingface_estimator.fit(inputs=sagemaker_inputs) print(f"You can find your model data at: {huggingface_estimator.model_data}")
    [huggingface/peft] examples/sft/requirements_colab.txt
    git+https://github.com/huggingface/transformers git+https://github.com/huggingface/accelerate git+https://github.com/huggingface/peft git+https://github.com/huggingface/trl unsloth[colab_ampere] @ git+https://github.com/unslothai/unsloth.git datasets deepspeed PyGithub flash-attn huggingface-hub evaluate bitsandbytes einops wandb tensorboard tiktoken pandas numpy scipy matplotlib sentencepiece nltk xformers git+https://github.com/huggingface/datatrove.git hf_transfer
    [huggingface/peft] examples/hra_dreambooth/README.md
    <!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
    [huggingface/peft] examples/conditional_generation/peft_lora_seq2seq_accelerate_big_model_inference.ipynb

    from transformers import AutoModelForSeq2SeqLM from peft import PeftModel, PeftConfig import torch from datasets import load_dataset import os from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import default_data_collator, get_linear_schedule_with_warmup from tqdm import tqdm from datasets import load_dataset

    dataset_name = "twitter_complaints" text_column = "Tweet text" label_column = "text_label" batch_size = 8

    peft_model_id = "smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM" config = PeftConfig.from_pretrained(peft_model_id)

    peft_model_id = "smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM" max_memory = {0: "6GIB", 1: "0GIB", 2: "0GIB", 3: "0GIB", 4: "0GIB", "cpu": "30GB"} config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, device_map="auto", max_memory=max_memory) model = PeftModel.from_pretrained(model, peft_model_id, device_map="auto", max_memory=max_memory)

    from datasets import load_dataset

    dataset = load_dataset("ought/raft", dataset_name)

    classes = [k.replace("_", " ") for k in dataset["train"].features["Label"].names] print(classes) dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["Label"]]}, batched=True, num_proc=1, ) print(dataset) dataset["train"][0]

    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes])

    def preprocess_function(examples): inputs = examples[text_column] targets = examples[label_column] model_inputs = tokenizer(inputs, truncation=True) labels = tokenizer( targets, max_length=target_max_length, padding="max_length", truncation=True, return_tensors="pt" ) labels = labels["input_ids"] labels[labels == tokenizer.pad_token_id] = -100 model_inputs["labels"] = labels return model_inputs

    processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=True, desc="Running tokenizer on dataset", )

    train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["train"] test_dataset = processed_datasets["test"]

    def collate_fn(examples): return tokenizer.pad(examples, padding="longest", return_tensors="pt")

    train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=batch_size, pin_memory=True ) eval_dataloader = DataLoader(eval_dataset, collate_fn=collate_fn, batch_size=batch_size, pin_memory=True) test_dataloader = DataLoader(test_dataset, collate_fn=collate_fn, batch_size=batch_size, pin_memory=True)

    model.eval() i = 15 inputs = tokenizer(f'{text_column} : {dataset["test"][i]["Tweet text"]} Label : ', return_tensors="pt") print(dataset["test"][i]["Tweet text"]) print(inputs)

    with torch.no_grad(): outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=10) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))

    model.eval() eval_preds = [] for _, batch in enumerate(tqdm(eval_dataloader)): batch = {k: v.to("cuda") for k, v in batch.items() if k != "labels"} with torch.no_grad(): outputs = model.generate(**batch, max_new_tokens=10) preds = outputs.detach().cpu().numpy() eval_preds.extend(tokenizer.batch_decode(preds, skip_special_tokens=True))

    correct = 0 total = 0 for pred, true in zip(eval_preds, dataset["train"][label_column]): if pred.strip() == true.strip(): correct += 1 total += 1 accuracy = correct / total * 100 print(f"{accuracy=}") print(f"{eval_preds[:10]=}") print(f"{dataset['train'][label_column][:10]=}")

    model.eval() test_preds = []

    for _, batch in enumerate(tqdm(test_dataloader)): batch = {k: v for k, v in batch.items() if k != "labels"} with torch.no_grad(): outputs = model.generate(**batch, max_new_tokens=10) preds = outputs.detach().cpu().numpy() test_preds.extend(tokenizer.batch_decode(preds, skip_special_tokens=True)) if len(test_preds) > 100: break test_preds

    [openaccess-ai-collective/axolotl] src/axolotl/cli/__init__.py
    def check_user_token():
        # Skip check if HF_HUB_OFFLINE is set to True
        if os.getenv("HF_HUB_OFFLINE") == "1":
            LOG.info(
                "Skipping HuggingFace token verification because HF_HUB_OFFLINE is set to True. Only local files will be used."
            )
            return True

        # Verify if token is valid
        api = HfApi()
        try:
            user_info = api.whoami()
            return bool(user_info)
        except LocalTokenNotFoundError:
            LOG.warning(
                "Error verifying HuggingFace token. Remember to log in using `huggingface-cli login` and get your access token from https://huggingface.co/settings/tokens if you want to use gated models or datasets."
            )
            return False
    [huggingface/peft] examples/sft/requirements.txt
    git+https://github.com/huggingface/transformers git+https://github.com/huggingface/accelerate git+https://github.com/huggingface/peft git+https://github.com/huggingface/trl git+https://github.com/huggingface/datatrove.git unsloth[conda]@git+https://github.com/unslothai/unsloth.git deepspeed PyGithub flash-attn huggingface-hub evaluate datasets bitsandbytes einops wandb tensorboard tiktoken pandas numpy scipy matplotlib sentencepiece nltk xformers hf_transfer
    [huggingface/accelerate] docs/source/package_reference/cli.md
    <!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
    [huggingface/accelerate] examples/requirements.txt
    accelerate # used to be installed in Amazon SageMaker environment evaluate datasets==2.3.2 schedulefree huggingface_hub>=0.20.0
    [openaccess-ai-collective/axolotl] examples/colab-notebooks/colab-axolotl-example.ipynb
    # Example notebook for running Axolotl on google colab
    

    import torch

    Check that a GPU is available; a T4 (free tier) is enough to run this notebook

    assert (torch.cuda.is_available()==True)

    ## Install Axolotl and dependencies
    

    !pip install torch=="2.1.2"
    !pip install -e git+https://github.com/OpenAccess-AI-Collective/axolotl#egg=axolotl
    !pip install flash-attn=="2.5.0"
    !pip install deepspeed=="0.13.1"
    !pip install mlflow=="2.13.0"

    ## Create a YAML config file
    

    import yaml

    Your YAML string

    yaml_string = """
    base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
    model_type: LlamaForCausalLM
    tokenizer_type: LlamaTokenizer

    load_in_8bit: false
    load_in_4bit: true
    strict: false

    datasets:
      - path: mhenrichsen/alpaca_2k_test
        type: alpaca
    dataset_prepared_path:
    val_set_size: 0.05
    output_dir: ./outputs/qlora-out

    adapter: qlora
    lora_model_dir:

    sequence_len: 4096
    sample_packing: true
    eval_sample_packing: false
    pad_to_sequence_len: true

    lora_r: 32
    lora_alpha: 16
    lora_dropout: 0.05
    lora_target_modules:
    lora_target_linear: true
    lora_fan_in_fan_out:

    wandb_project:
    wandb_entity:
    wandb_watch:
    wandb_name:
    wandb_log_model:

    gradient_accumulation_steps: 4
    micro_batch_size: 2
    num_epochs: 4
    optimizer: paged_adamw_32bit
    lr_scheduler: cosine
    learning_rate: 0.0002

    train_on_inputs: false
    group_by_length: false
    bf16: auto
    fp16:
    tf32: false

    gradient_checkpointing: true
    early_stopping_patience:
    resume_from_checkpoint:
    local_rank:
    logging_steps: 1
    xformers_attention:
    flash_attention: true

    warmup_steps: 10
    evals_per_epoch: 4
    saves_per_epoch: 1
    debug:
    deepspeed:
    weight_decay: 0.0
    fsdp:
    fsdp_config:
    special_tokens:
    """

    Convert the YAML string to a Python dictionary

    yaml_dict = yaml.safe_load(yaml_string)

    Specify your file path

    file_path = 'test_axolotl.yaml'

    Write the YAML file

    with open(file_path, 'w') as file:
        yaml.dump(yaml_dict, file)

    ## Launch the training
    

    By using the !, the command will be executed as a bash command

    !accelerate launch -m axolotl.cli.train /content/test_axolotl.yaml

    ## Play with inference
    

    By using the !, the command will be executed as a bash command

    !accelerate launch -m axolotl.cli.inference /content/test_axolotl.yaml \
        --qlora_model_dir="./qlora-out" --gradio

    [huggingface/accelerate] docs/source/basic_tutorials/launch.md
    <!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
    [openaccess-ai-collective/axolotl] docs/debugging.qmd
    ---
    title: Debugging
    description: How to debug Axolotl
    ---
    
    
    This document provides some tips and tricks for debugging Axolotl.  It also provides an example configuration for debugging with VSCode.  A good debugging setup is essential to understanding how Axolotl code works behind the scenes.
    
    ## Table of Contents
    
    - [General Tips](#general-tips)
    - [Debugging with VSCode](#debugging-with-vscode)
        - [Background](#background)
        - [Configuration](#configuration)
        - [Customizing your debugger](#customizing-your-debugger)
        - [Video Tutorial](#video-tutorial)
    - [Debugging With Docker](#debugging-with-docker)
        - [Setup](#setup)
        - [Attach To Container](#attach-to-container)
        - [Video - Attaching To Docker On Remote Host](#video---attaching-to-docker-on-remote-host)
    
    ## General Tips
    
    While debugging it's helpful to simplify your test scenario as much as possible.  Here are some tips for doing so:
    
    > [!Important]
    > All of these tips are incorporated into the [example configuration](#configuration) for debugging with VSCode below.
    
    1. **Make sure you are using the latest version of axolotl**:  This project changes often and bugs get fixed fast.  Check your git branch and make sure you have pulled the latest changes from `main`.
    1. **Eliminate concurrency**: Restrict the number of processes to 1 for both training and data preprocessing:
        - Set `CUDA_VISIBLE_DEVICES` to a single GPU, ex: `export CUDA_VISIBLE_DEVICES=0`.
        - Set `dataset_processes: 1` in your axolotl config or run the training command with `--dataset_processes=1`.
    2. **Use a small dataset**: Construct or use a small dataset from HF Hub. When using a small dataset, you will often have to make sure `sample_packing: False` and `eval_sample_packing: False` to avoid errors.  If you are in a pinch and don't have time to construct a small dataset but want to use from the HF Hub, you can shard the data (this will still tokenize the entire dataset, but will only use a fraction of the data for training.  For example, to shard the dataset into 20 pieces, add the following to your axolotl config):
        ```yaml
        dataset:
            ...
            shards: 20
        ```
    3. **Use a small model**: A good example of a small model is [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
    4. **Minimize iteration time**: Make sure the training loop finishes as fast as possible, with these settings.
        - `micro_batch_size: 1`
        - `max_steps: 1`
        - `val_set_size: 0`
    5. **Clear Caches:** Axolotl caches certain steps and so does the underlying HuggingFace trainer.  You may want to clear some of these caches when debugging.
        - Data preprocessing: When debugging data preprocessing, which includes prompt template formation, you may want to delete the directory set in `dataset_prepared_path:` in your axolotl config.  If you didn't set this value, the default is `last_run_prepared`.
        - HF Hub: If you are debugging data preprocessing, you should clear the relevant HF cache [HuggingFace cache](https://huggingface.co/docs/datasets/cache), by deleting the appropriate `~/.cache/huggingface/datasets/...` folder(s).
        - **The recommended approach is to redirect all outputs and caches to a temporary folder and delete selected subfolders before each run.  This is demonstrated in the example configuration below.**
    
    
    ## Debugging with VSCode
    
    ### Background
    
    The below example shows how to configure VSCode to debug data preprocessing of the `sharegpt` format.  This is the format used when you have the following in your axolotl config:
    
    ```yaml
    datasets:
      - path: <path to your sharegpt formatted dataset> # example on HF Hub: philschmid/guanaco-sharegpt-style
        type: sharegpt
    ```
    

    [!Important] If you are already familiar with advanced VSCode debugging, you can skip the below explanation and look at the files .vscode/launch.json and .vscode/tasks.json for an example configuration.

    [!Tip] If you prefer to watch a video, rather than read, you can skip to the video tutorial below (but doing both is recommended).

    Setup

    Make sure you have an editable install of Axolotl, which ensures that changes you make to the code are reflected at runtime. Run the following commands from the root of this project:

    pip3 install packaging
    pip3 install -e '.[flash-attn,deepspeed]'

    Remote Hosts

    If you are developing on a remote host, you can easily use VSCode to debug remotely. To do so, you will need to follow this Remote - SSH guide. You can also see the video below on Docker and Remote SSH debugging.

    Configuration

    The easiest way to get started is to modify the .vscode/launch.json file in this project. This is just an example configuration, so you may need to modify or copy it to suit your needs.

    For example, to mimic the command cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_sharegpt.yml, you would use the below configuration[^1]. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to devtools and set the env variable HF_HOME to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch.

    // .vscode/launch.json
    {
      "version": "0.2.0",
      "configurations": [
        {
          "name": "Debug axolotl prompt - sharegpt",
          "type": "python",
          "module": "accelerate.commands.launch",
          "request": "launch",
          "args": [
            "-m", "axolotl.cli.train", "dev_sharegpt.yml",
            // The flags below simplify debugging by overriding the axolotl config
            // with the debugging tips above. Modify as needed.
            "--dataset_processes=1",       // limits data preprocessing to one process
            "--max_steps=1",               // limits training to just one step
            "--batch_size=1",              // minimizes batch size
            "--micro_batch_size=1",        // minimizes batch size
            "--val_set_size=0",            // disables validation
            "--sample_packing=False",      // disables sample packing which is necessary for small datasets
            "--eval_sample_packing=False", // disables sample packing on eval set
            "--dataset_prepared_path=temp_debug/axolotl_outputs/data", // send data outputs to a temp folder
            "--output_dir=temp_debug/axolotl_outputs/model" // send model outputs to a temp folder
          ],
          "console": "integratedTerminal", // show output in the integrated terminal
          "cwd": "${workspaceFolder}/devtools", // set working directory to devtools from the root of the project
          "justMyCode": true, // step through only axolotl code
          "env": {
            "CUDA_VISIBLE_DEVICES": "0", // Since we aren't doing distributed training, we need to limit to one GPU
            "HF_HOME": "${workspaceFolder}/devtools/temp_debug/.hf-cache" // send HF cache to a temp folder
          },
          "preLaunchTask": "cleanup-for-dataprep", // delete temp folders (see below)
        }
      ]
    }

    Additional notes about this configuration:

    • The argument justMyCode is set to true such that you step through only the axolotl code. If you want to step into dependencies, set this to false.
    • The preLaunchTask: cleanup-for-dataprep is defined in .vscode/tasks.json and is used to delete the following folders before debugging, which is essential to ensure that the data pre-processing code is run from scratch:
      • ./devtools/temp_debug/axolotl_outputs
      • ./devtools/temp_debug/.hf-cache/datasets

    [!Tip] You may not want to delete these folders. For example, if you are debugging model training instead of data pre-processing, you may NOT want to delete the cache or output folders. You may also need to add additional tasks to the tasks.json file depending on your use case.

    Below is the .vscode/tasks.json file that defines the cleanup-for-dataprep task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task, cleanup-for-dataprep, is a composite task that combines the two. A composite task is necessary because VSCode does not allow you to specify multiple tasks in the preLaunchTask argument of the launch.json file.

    // .vscode/tasks.json
    // this file is used by launch.json
    {
      "version": "2.0.0",
      "tasks": [
        // this task changes into the devtools directory and deletes the temp_debug/axolotl_outputs folder
        {
          "label": "delete-outputs",
          "type": "shell",
          "command": "rm -rf temp_debug/axolotl_outputs",
          "options": { "cwd": "${workspaceFolder}/devtools" },
          "problemMatcher": []
        },
        // this task changes into the devtools directory and deletes the `temp_debug/.hf-cache/datasets` folder
        {
          "label": "delete-temp-hf-dataset-cache",
          "type": "shell",
          "command": "rm -rf temp_debug/.hf-cache/datasets",
          "options": { "cwd": "${workspaceFolder}/devtools" },
          "problemMatcher": []
        },
        // this task combines the two tasks above
        {
          "label": "cleanup-for-dataprep",
          "dependsOn": ["delete-outputs", "delete-temp-hf-dataset-cache"]
        }
      ]
    }

    Customizing your debugger

    Your debugging use case may differ from the example above. The easiest thing to do is to put your own axolotl config in the devtools folder and modify the launch.json file to use your config. You may also want to modify the preLaunchTask to delete different folders or not delete anything at all.

    Video Tutorial

    The following video tutorial walks through the above configuration and demonstrates how to debug with VSCode (click the image below to watch):

    <div style="text-align: center; line-height: 0;">

    <a href="https://youtu.be/xUUB11yeMmc" target="_blank" title="How to debug Axolotl (for fine tuning LLMs)"><img src="https://i.ytimg.com/vi/xUUB11yeMmc/maxresdefault.jpg" style="border-radius: 10px; display: block; margin: auto;" width="560" height="315" /></a>

    <figcaption style="font-size: smaller;"><a href="https://hamel.dev">Hamel Husain's</a> tutorial: <a href="https://www.youtube.com/watch?v=xUUB11yeMmc">Debugging Axolotl w/VSCode</a></figcaption> </div> <br>

    Debugging With Docker

    Using official Axolotl Docker images is a great way to debug your code, and is a very popular way to use Axolotl. Attaching VSCode to Docker takes a few more steps.

    Setup

    On the host that is running axolotl (ex: if you are using a remote host), clone the axolotl repo and change your current directory to the root:

    git clone https://github.com/OpenAccess-AI-Collective/axolotl
    cd axolotl

    [!Tip] If you already have axolotl cloned on your host, make sure you have the latest changes and change into the root of the project.

    Next, run the desired docker image and mount the current directory. Below is a docker command you can run to do this:[^2]

    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-py3.10-cu118-2.0.1

    [!Tip] To understand which containers are available, see the Docker section of the README and the DockerHub repo. For details of how the Docker containers are built, see axolotl's Docker CI builds.

    You will now be in the container. Next, perform an editable install of Axolotl:

    pip3 install packaging
    pip3 install -e '.[flash-attn,deepspeed]'

    Attach To Container

    Next, if you are using a remote host, Remote into this host with VSCode. If you are using a local host, you can skip this step.

    Next, select Dev Containers: Attach to Running Container... using the command palette (CMD + SHIFT + P) in VSCode. You will be prompted to select a container to attach to. Select the container you just created. You will now be in the container with a working directory that is at the root of the project. Any changes you make to the code will be reflected both in the container and on the host.

    Now you are ready to debug as described above (see Debugging with VSCode).

    Video - Attaching To Docker On Remote Host

    Here is a short video that demonstrates how to attach to a Docker container on a remote host:

    <div style="text-align: center; line-height: 0;">

    <a href="https://youtu.be/0AuoR7QnHR0" target="_blank" title="Debugging Axolotl Part 2: Attaching to Docker on a Remote Host"><img src="https://i.ytimg.com/vi/0AuoR7QnHR0/hqdefault.jpg" style="border-radius: 10px; display: block; margin: auto;" width="560" height="315" /></a>

    <figcaption style="font-size: smaller;"><a href="https://hamel.dev">Hamel Husain's</a> tutorial: <a href="https://youtu.be/0AuoR7QnHR0">Debugging Axolotl Part 2: Attaching to Docker on a Remote Host </a></figcaption> </div> <br>

    [^1]: The config actually mimics the command CUDA_VISIBLE_DEVICES=0 python -m accelerate.commands.launch -m axolotl.cli.train devtools/sharegpt.yml, but this is the same thing.

    [^2]: Many of the below flags are recommended best practices by Nvidia when using nvidia-container-toolkit. You can read more about these flags here.

    [openaccess-ai-collective/axolotl] cicd/Dockerfile.jinja
    FROM winglian/axolotl-base:{{ BASE_TAG }}
    
    ENV TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6+PTX"
    ENV AXOLOTL_EXTRAS="{{ AXOLOTL_EXTRAS }}"
    ENV AXOLOTL_ARGS="{{ AXOLOTL_ARGS }}"
    ENV CUDA="{{ CUDA }}"
    ENV BNB_CUDA_VERSION="{{ CUDA }}"
    ENV PYTORCH_VERSION="{{ PYTORCH_VERSION }}"
    ENV GITHUB_REF="{{ GITHUB_REF }}"
    ENV GITHUB_SHA="{{ GITHUB_SHA }}"
    
    RUN apt-get update && \
        apt-get install -y --allow-change-held-packages vim curl nano libnccl2 libnccl-dev
    
    WORKDIR /workspace
    
    RUN git clone --depth=1 https://github.com/OpenAccess-AI-Collective/axolotl.git
    
    WORKDIR /workspace/axolotl
    
    RUN git fetch origin +$GITHUB_REF && \
        git checkout FETCH_HEAD
    
    # If AXOLOTL_EXTRAS is set, append it in brackets
    RUN pip install causal_conv1d
    RUN if [ "$AXOLOTL_EXTRAS" != "" ] ; then \
            pip install -e .[deepspeed,flash-attn,mamba-ssm,galore,$AXOLOTL_EXTRAS] $AXOLOTL_ARGS; \
        else \
            pip install -e .[deepspeed,flash-attn,mamba-ssm,galore] $AXOLOTL_ARGS; \
        fi
    
    # So we can test the Docker image
    RUN pip install pytest
    
    # fix so that git fetch/pull from remote works
    RUN git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*" && \
        git config --get remote.origin.fetch
    
    # helper for huggingface-login cli
    RUN git config --global credential.helper store
    
    
    [openaccess-ai-collective/axolotl] src/axolotl/cli/preprocess.py
    def do_cli(config: Union[Path, str] = Path("examples/"), **kwargs): # pylint: disable=duplicate-code print_axolotl_text_art() parsed_cfg = load_cfg(config, **kwargs) parsed_cfg.is_preprocess = True check_accelerate_default_config() check_user_token() parser = transformers.HfArgumentParser((PreprocessCliArgs)) parsed_cli_args, _ = parser.parse_args_into_dataclasses( return_remaining_strings=True ) if parsed_cfg.chat_template == "chatml": if parsed_cfg.default_system_message: LOG.info( f"ChatML set. Adding default system message: {parsed_cfg.default_system_message}" ) register_chatml_template(parsed_cfg.default_system_message) else: register_chatml_template() elif parsed_cfg.chat_template == "llama3": if parsed_cfg.default_system_message: LOG.info( f"LLaMA-3 set. Adding default system message: {parsed_cfg.default_system_message}" ) register_llama3_template(parsed_cfg.default_system_message) else: register_llama3_template() if not parsed_cfg.dataset_prepared_path: msg = ( Fore.RED + "preprocess CLI called without dataset_prepared_path set, " + f"using default path: {DEFAULT_DATASET_PREPARED_PATH}" + Fore.RESET ) LOG.warning(msg) parsed_cfg.dataset_prepared_path = DEFAULT_DATASET_PREPARED_PATH if parsed_cfg.rl: # and parsed_cfg.rl != "orpo": load_rl_datasets(cfg=parsed_cfg, cli_args=parsed_cli_args) else: load_datasets(cfg=parsed_cfg, cli_args=parsed_cli_args) if parsed_cli_args.download: model_name = parsed_cfg.base_model with init_empty_weights(): AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True) LOG.info( Fore.GREEN + f"Success! Preprocessed data path: `dataset_prepared_path: {parsed_cfg.dataset_prepared_path}`" + Fore.RESET )
    [openaccess-ai-collective/axolotl] README.md

    Advanced Setup

    Environment

    Docker

    docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest

    Or run on the current files for development:

    docker compose up -d

    [!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide's section on Docker.

    <details> <summary>Docker advanced</summary>

    A more powerful Docker command to run would be this:

    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-latest

    It additionally:

    • Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args.
    • Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args.
    • The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
    • The --privileged flag gives all capabilities to the container.
    • The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.

    More information on nvidia website

    </details>

    Conda/Pip venv

    1. Install python >=3.10

    2. Install pytorch stable https://pytorch.org/get-started/locally/

    3. Install Axolotl along with python dependencies

      pip3 install packaging
      pip3 install -e '.[flash-attn,deepspeed]'
    4. (Optional) Login to Huggingface to use gated models/datasets.

      huggingface-cli login

      Get the token at huggingface.co/settings/tokens

    Cloud GPU

    For cloud GPU providers that support docker images, use winglian/axolotl-cloud:main-latest

    Bare Metal Cloud GPU

    LambdaLabs
    <details> <summary>Click to Expand</summary>
    1. Install python
       sudo apt update
       sudo apt install -y python3.10
       sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
       sudo update-alternatives --config python # pick 3.10 if given option
       python -V # should be 3.10
    2. Install pip
       wget https://bootstrap.pypa.io/get-pip.py
       python get-pip.py
    3. Install Pytorch https://pytorch.org/get-started/locally/
    4. Follow instructions on quickstart.
    5. Run
       pip3 install protobuf==3.20.3
       pip3 install -U --ignore-installed requests Pillow psutil scipy
    6. Set path
       export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
    </details>
    GCP
    <details> <summary>Click to Expand</summary>

    Use a Deeplearning linux OS with cuda and pytorch installed. Then follow instructions on quickstart.

    Make sure to run the below to uninstall xla.

    pip uninstall -y torch_xla[tpu]
    </details>

    Windows

    Please use WSL or Docker!

    Mac

    Use the below instead of the install method in QuickStart.

    pip3 install -e '.'
    

    More info: mac.md

    Google Colab

    Please use this example notebook.

    Launching on public clouds via SkyPilot

    To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:

    pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]" # choose your clouds sky check

    Get the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:

    git clone https://github.com/skypilot-org/skypilot.git
    cd skypilot/llm/axolotl
    

    Use one command to launch:

    # On-demand
    HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN

    # Managed spot (auto-recovery on preemption)
    HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET

    Launching on public clouds via dstack

    To launch on GPU instances (both on-demand and spot instances) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.

    Write a job description in YAML as below:

    # dstack.yaml
    type: task

    image: winglian/axolotl-cloud:main-20240429-py3.11-cu121-2.2.2

    env:
      - HUGGING_FACE_HUB_TOKEN
      - WANDB_API_KEY

    commands:
      - accelerate launch -m axolotl.cli.train config.yaml

    ports:
      - 6006

    resources:
      gpu:
        memory: 24GB..
        count: 2

    Then, simply run the job with the dstack run command. Append the --spot option if you want a spot instance. The dstack run command will show you the instance with the cheapest price across multiple cloud services:

    pip install dstack
    HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot

    For further and fine-grained use cases, please refer to the official dstack documents and the detailed description of axolotl example on the official repository.

    Dataset

    Axolotl supports a variety of dataset formats. It is recommended to use a JSONL file. The schema of the JSONL depends on the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.

    See these docs for more information on how to use different dataset formats.
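    For instance, a JSONL file for the alpaca format used in several examples below would contain one record per line, along these lines (an illustrative sketch, not from a real dataset):

    {"instruction": "Summarize the following text.", "input": "Axolotl is a tool for fine-tuning language models.", "output": "Axolotl helps fine-tune LLMs."}
    {"instruction": "Translate 'good morning' to French.", "input": "", "output": "Bonjour."}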

    Config

    See the examples for a quick start. It is recommended to duplicate one and modify it to your needs. The most important options are:

    • model

      base_model: ./llama-7b-hf # local or huggingface repo

      Note: The code will load the right architecture.

    • dataset

      datasets:
        # huggingface repo
        - path: vicgalle/alpaca-gpt4
          type: alpaca

        # huggingface repo with specific configuration/subset
        - path: EleutherAI/pile
          name: enron_emails
          type: completion # format from earlier
          field: text # Optional[str] default: text, field to use for completion data

        # huggingface repo with multiple named configurations/subsets
        - path: bigcode/commitpackft
          name:
            - ruby
            - python
            - typescript
          type: ... # unimplemented custom format

        # fastchat conversation
        # See 'conversation' options: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
        - path: ...
          type: sharegpt
          conversation: chatml # default: vicuna_v1.1

        # local
        - path: data.jsonl # or json
          ds_type: json # see other options below
          type: alpaca

        # dataset with splits, but no train split
        - path: knowrohit07/know_sql
          type: context_qa.load_v2
          train_on_split: validation

        # loading from s3 or gcs
        # s3 creds will be loaded from the system default and gcs only supports public access
        - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.
          ...

        # Loading Data From a Public URL
        # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
        - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
          ds_type: json # this is the default, see other options below.
    • loading

      load_in_4bit: true
      load_in_8bit: true
      bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
      fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
      tf32: true # require >=ampere
      bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
      float16: true # use instead of fp16 when you don't want AMP

      Note: Repo does not do 4-bit quantization.

    • lora

      adapter: lora # 'qlora' or leave blank for full finetune
      lora_r: 8
      lora_alpha: 16
      lora_dropout: 0.05
      lora_target_modules:
        - q_proj
        - v_proj
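    Putting the pieces above together, a minimal LoRA config might look like the sketch below. The values are illustrative, and a real run typically also sets options such as sequence_len, micro_batch_size, and num_epochs (see the example configs):

    base_model: ./llama-7b-hf   # local path or huggingface repo

    datasets:
      - path: vicgalle/alpaca-gpt4
        type: alpaca

    load_in_8bit: true
    bf16: auto

    adapter: lora
    lora_r: 8
    lora_alpha: 16
    lora_dropout: 0.05
    lora_target_modules:
      - q_proj
      - v_proj

    output_dir: ./outputs/lora-out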

    All Config Options

    See these docs for all config options.

    Train

    Run

    accelerate launch -m axolotl.cli.train your_config.yml

    [!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml

    Preprocess dataset

    You can optionally pre-tokenize your dataset with the following command before finetuning. This is recommended for large datasets.

    • Set dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.
    • (Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
    • (Optional): Use --debug to see preprocessed examples.
    python -m axolotl.cli.preprocess your_config.yml
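    For example, the relevant config keys might look like this (the folder and repo names are illustrative):

    dataset_prepared_path: ./last_run_prepared   # local folder for the pre-tokenized dataset
    push_dataset_to_hub: hf_user/repo            # optional: also push the prepared dataset to Huggingface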

    Multi-GPU

    Below are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is currently the recommended multi-GPU option because FSDP may experience loss instability.

    DeepSpeed

    DeepSpeed is an optimization suite for multi-GPU systems that allows you to train much larger models than would typically fit into your GPU's VRAM. More information about the various optimization types for DeepSpeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated

    We provide several default DeepSpeed JSON configurations for ZeRO stages 1, 2, and 3.

    deepspeed: deepspeed_configs/zero1.json
    accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
    FSDP
    • llama FSDP
    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    FSDP + QLoRA

    Axolotl supports training with FSDP and QLoRA, see these docs for more information.
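    As a rough sketch, this combines the QLoRA adapter options with the FSDP block shown above; see the linked docs for the authoritative, complete set of options:

    adapter: qlora
    load_in_4bit: true
    fsdp:
      - full_shard
      - auto_wrap
    fsdp_config:
      fsdp_offload_params: true
      fsdp_state_dict_type: FULL_STATE_DICT
      fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer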

    Weights & Biases Logging

    Make sure your WANDB_API_KEY environment variable is set (recommended), or log in to wandb with wandb login.

    • wandb options
    wandb_mode:
    wandb_project:
    wandb_entity:
    wandb_watch:
    wandb_name:
    wandb_log_model:
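    A filled-in example might look like this (the project, entity, and run names are illustrative):

    wandb_project: my-axolotl-project
    wandb_entity: my-team
    wandb_name: llama-7b-lora-run1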
    Special Tokens

    It is important to have special tokens such as delimiters, the end-of-sequence token, and the beginning-of-sequence token in your tokenizer's vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:

    special_tokens:
      bos_token: "<s>"
      eos_token: "</s>"
      unk_token: "<unk>"
    tokens: # these are delimiters
      - "<|im_start|>"
      - "<|im_end|>"

    When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer's vocabulary.

    Inference Playground

    Axolotl allows you to load your model in an interactive terminal playground for quick experimentation. The config file is the same config file used for training.

    Pass the appropriate flag to the inference command, depending upon what kind of model was trained:

    • Pretrained LORA:
      python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
    • Full weights finetune:
      python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
    • Full weights finetune w/ a prompt from a text file:
      cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
        --base_model="./completed-model" --prompter=None --load_in_8bit=True

    • With gradio hosting

    python -m axolotl.cli.inference examples/your_config.yml --gradio

    Please use --sample_packing False if you have it enabled and receive an error similar to the one below:

    RuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1
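    Equivalently, you can disable it in your config instead of on the command line; this assumes the corresponding sample_packing key in your YAML:

    sample_packing: false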

    Merge LORA to base

    The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.

    python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"

    You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with

    CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...

    although this will be very slow; using the config options above is recommended instead.
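    As a sketch, the two config options mentioned above might be set like this (the memory limit value is illustrative):

    gpu_memory_limit: 20GiB   # cap GPU memory used while loading and merging
    lora_on_cpu: true         # keep the LoRA weights on CPU during the merge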

    [openaccess-ai-collective/axolotl] src/axolotl/cli/__init__.py
    project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) src_dir = os.path.join(project_root, "src") sys.path.insert(0, src_dir) configure_logging() LOG = logging.getLogger("axolotl.scripts") os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
    [openaccess-ai-collective/axolotl] examples/jeopardy-bot/config.yml
    base_model: huggyllama/llama-7b model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: false datasets: - path: openaccess-ai-collective/jeopardy type: jeopardy dataset_prepared_path: val_set_size: 0.02 adapter: lora_model_dir: sequence_len: 512 max_packed_sequence_len: lora_r: lora_alpha: lora_dropout: lora_target_modules: lora_fan_in_fan_out: false wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: output_dir: ./outputs/jeopardy-bot-7b gradient_accumulation_steps: 1 micro_batch_size: 1 num_epochs: 4 optimizer: adamw_bnb_8bit torchdistx_path: lr_scheduler: cosine learning_rate: 0.00003 train_on_inputs: false group_by_length: false bf16: auto tf32: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 5 xformers_attention: true flash_attention: gptq_groupsize: gptq_model_v1: warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.1 fsdp: fsdp_config: tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>"
    [openaccess-ai-collective/axolotl] scripts/cloud-entrypoint.sh
    #!/bin/bash # Export specific ENV variables to /etc/rp_environment echo "Exporting environment variables..." printenv | grep -E '^RUNPOD_|^PATH=|^_=' | sed 's/^\(.*\)=\(.*\)$/export \1="\2"/' >> /etc/rp_environment echo 'source /etc/rp_environment' >> ~/.bashrc add_keys_to_authorized() { local key_value=$1 # Create the ~/.ssh directory and set permissions mkdir -p ~/.ssh chmod 700 ~/.ssh # Create the authorized_keys file if it doesn't exist touch ~/.ssh/authorized_keys # Initialize an empty key variable local key="" # Read the key variable word by word for word in $key_value; do # Check if the word looks like the start of a key if [[ $word == ssh-* ]]; then # If there's a key being built, add it to the authorized_keys file if [[ -n $key ]]; then echo $key >> ~/.ssh/authorized_keys fi # Start a new key key=$word else # Append the word to the current key key="$key $word" fi done # Add the last key to the authorized_keys file if [[ -n $key ]]; then echo $key >> ~/.ssh/authorized_keys fi # Set the correct permissions chmod 600 ~/.ssh/authorized_keys chmod 700 -R ~/.ssh } if [[ $PUBLIC_KEY ]]; then # runpod add_keys_to_authorized "$PUBLIC_KEY" # Start the SSH service in the background service ssh start elif [[ $SSH_KEY ]]; then # latitude.sh add_keys_to_authorized "$SSH_KEY" # Start the SSH service in the background service ssh start else echo "No PUBLIC_KEY or SSH_KEY environment variable provided, not starting openSSH daemon" fi # Check if JUPYTER_PASSWORD is set and not empty if [ -n "$JUPYTER_PASSWORD" ]; then # Set JUPYTER_TOKEN to the value of JUPYTER_PASSWORD export JUPYTER_TOKEN="$JUPYTER_PASSWORD" fi if [ "$JUPYTER_DISABLE" != "1" ]; then # Run Jupyter Lab in the background jupyter lab --port=8888 --ip=* --allow-root --ServerApp.allow_origin=* & fi if [ ! -d "/workspace/data/axolotl-artifacts" ]; then mkdir -p /workspace/data/axolotl-artifacts fi if [ ! -L "/workspace/axolotl/outputs" ]; then ln -sf /workspace/data/axolotl-artifacts /workspace/axolotl/outputs fi # Execute the passed arguments (CMD) exec "$@"
    [openaccess-ai-collective/axolotl] README.md

    Axolotl

    Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.

    Features:

    • Train various Huggingface models such as llama, pythia, falcon, mpt
    • Supports fullfinetune, lora, qlora, relora, and gptq
    • Customize configurations using a simple yaml file or CLI overwrite
    • Load different dataset formats, use custom formats, or bring your own tokenized datasets
    • Integrated with xformer, flash attention, rope scaling, and multipacking
    • Works with single GPU or multiple GPUs via FSDP or Deepspeed
    • Easily run with Docker locally or on the cloud
    • Log results and optionally checkpoints to wandb or mlflow
    • And more!
    Axolotl provides a unified repository for fine-tuning a variety of AI models with ease. Go ahead and Axolotl questions!!

    Axolotl supports

    |             | fp16/fp32 | lora | qlora | gptq | gptq w/flash attn | flash attn | xformers attn |
    |-------------|:----------|:-----|-------|------|-------------------|------------|---------------|
    | llama       | ✅        | ✅   | ✅    | ✅   | ✅                | ✅         | ✅            |
    | Mistral     | ✅        | ✅   | ✅    | ✅   | ✅                | ✅         | ✅            |
    | Mixtral-MoE | ✅        | ✅   | ✅    | ❓   | ❓                | ❓         | ❓            |
    | Mixtral8X22 | ✅        | ✅   | ✅    | ❓   | ❓                | ❓         | ❓            |
    | Pythia      | ✅        | ✅   | ✅    | ❌   | ❌                | ❌         | ❓            |
    | cerebras    | ✅        | ✅   | ✅    | ❌   | ❌                | ❌         | ❓            |
    | btlm        | ✅        | ✅   | ✅    | ❌   | ❌                | ❌         | ❓            |
    | mpt         | ✅        | ❌   | ❓    | ❌   | ❌                | ❌         | ❓            |
    | falcon      | ✅        | ✅   | ✅    | ❌   | ❌                | ❌         | ❓            |
    | gpt-j       | ✅        | ✅   | ✅    | ❌   | ❌                | ❓         | ❓            |
    | XGen        | ✅        | ❓   | ✅    | ❓   | ❓                | ❓         | ✅            |
    | phi         | ✅        | ✅   | ✅    | ❓   | ❓                | ❓         | ❓            |
    | RWKV        | ✅        | ❓   | ❓    | ❓   | ❓                | ❓         | ❓            |
    | Qwen        | ✅        | ✅   | ✅    | ❓   | ❓                | ❓         | ❓            |
    | Gemma       | ✅        | ✅   | ✅    | ❓   | ❓                | ✅         | ❓            |

    ✅: supported ❌: not supported ❓: untested

    Quickstart ⚡

    Get started with Axolotl in just a few steps! This quickstart guide will walk you through setting up and running a basic fine-tuning task.

    Requirements: Python >=3.10 and Pytorch >=2.1.1.

    git clone https://github.com/OpenAccess-AI-Collective/axolotl
    cd axolotl

    pip3 install packaging ninja
    pip3 install -e '.[flash-attn,deepspeed]'

    Usage

    # preprocess datasets - optional but recommended
    CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml

    # finetune lora
    accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml

    # inference
    accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
        --lora_model_dir="./outputs/lora-out"

    # gradio
    accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
        --lora_model_dir="./outputs/lora-out" --gradio

    # remote yaml files - the yaml config can be hosted on a public URL
    # Note: the yaml config must directly link to the **raw** yaml
    accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/examples/openllama-3b/lora.yml

    Advanced Setup

    Environment

    Docker

    docker run --gpus '"all"' --rm -it winglian/axolotl:main-latest

    Or run on the current files for development:

    docker compose up -d

    [!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide's section on Docker.

    <details> <summary>Docker advanced</summary>

    A more powerful Docker command to run would be this:

    docker run --privileged --gpus '"all"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src="${PWD}",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-latest

    It additionally:

    • Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args.
    • Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args.
    • The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal.
    • The --privileged flag gives all capabilities to the container.
    • The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.

    More information on nvidia website

    </details>

    Conda/Pip venv

    1. Install python >=3.10

    2. Install pytorch stable https://pytorch.org/get-started/locally/

    3. Install Axolotl along with python dependencies

      pip3 install packaging
      pip3 install -e '.[flash-attn,deepspeed]'
    4. (Optional) Login to Huggingface to use gated models/datasets.

      huggingface-cli login

      Get the token at huggingface.co/settings/tokens

    Cloud GPU

    For cloud GPU providers that support docker images, use winglian/axolotl-cloud:main-latest

    Bare Metal Cloud GPU

    LambdaLabs
    <details> <summary>Click to Expand</summary>
    1. Install python

       sudo apt update
       sudo apt install -y python3.10
       sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
       sudo update-alternatives --config python # pick 3.10 if given option
       python -V # should be 3.10

    2. Install pip

       wget https://bootstrap.pypa.io/get-pip.py
       python get-pip.py

    3. Install Pytorch https://pytorch.org/get-started/locally/

    4. Follow instructions on quickstart.

    5. Run

       pip3 install protobuf==3.20.3
       pip3 install -U --ignore-installed requests Pillow psutil scipy

    6. Set path

       export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
    </details>
    GCP
    <details> <summary>Click to Expand</summary>

    Use a Deep Learning Linux OS image with CUDA and PyTorch preinstalled. Then follow the instructions in the quickstart.

    Make sure to run the below to uninstall xla.

    pip uninstall -y torch_xla[tpu]
    </details>

    Windows

    Please use WSL or Docker!

    Mac

    Use the below instead of the install method in QuickStart.

    pip3 install -e '.'
    

    More info: mac.md

    Google Colab

    Please use this example notebook.

    Launching on public clouds via SkyPilot

    To launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:

    pip install "skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]"  # choose your clouds
    sky check

    Get the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:

    git clone https://github.com/skypilot-org/skypilot.git
    cd skypilot/llm/axolotl
    

    Use one command to launch:

    # On-demand
    HF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN

    # Managed spot (auto-recovery on preemption)
    HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET

    [openaccess-ai-collective/axolotl] scripts/cloud-entrypoint-term.sh
    #!/bin/bash # Export specific ENV variables to /etc/rp_environment echo "Exporting environment variables..." printenv | grep -E '^RUNPOD_|^PATH=|^_=' | sed 's/^\(.*\)=\(.*\)$/export \1="\2"/' >> /etc/rp_environment conda init # this needs to come after conda init echo 'source /etc/rp_environment' >> ~/.bashrc add_keys_to_authorized() { local key_value=$1 # Create the ~/.ssh directory and set permissions mkdir -p ~/.ssh chmod 700 ~/.ssh # Create the authorized_keys file if it doesn't exist touch ~/.ssh/authorized_keys # Initialize an empty key variable local key="" # Read the key variable word by word for word in $key_value; do # Check if the word looks like the start of a key if [[ $word == ssh-* ]]; then # If there's a key being built, add it to the authorized_keys file if [[ -n $key ]]; then echo $key >> ~/.ssh/authorized_keys fi # Start a new key key=$word else # Append the word to the current key key="$key $word" fi done # Add the last key to the authorized_keys file if [[ -n $key ]]; then echo $key >> ~/.ssh/authorized_keys fi # Set the correct permissions chmod 600 ~/.ssh/authorized_keys chmod 700 -R ~/.ssh } if [[ $PUBLIC_KEY ]]; then # runpod add_keys_to_authorized "$PUBLIC_KEY" # Start the SSH service in the background service ssh start elif [[ $SSH_KEY ]]; then # latitude.sh add_keys_to_authorized "$SSH_KEY" # Start the SSH service in the background service ssh start else echo "No PUBLIC_KEY or SSH_KEY environment variable provided, not starting openSSH daemon" fi # Check if JUPYTER_PASSWORD is set and not empty if [ -n "$JUPYTER_PASSWORD" ]; then # Set JUPYTER_TOKEN to the value of JUPYTER_PASSWORD export JUPYTER_TOKEN="$JUPYTER_PASSWORD" fi if [ "$JUPYTER_DISABLE" != "1" ]; then # Run Jupyter Lab in the background jupyter lab --port=8888 --ip=* --allow-root --ServerApp.allow_origin=* & fi if [ ! -d "/workspace/data/axolotl-artifacts" ]; then mkdir -p /workspace/data/axolotl-artifacts fi if [ ! -L "/workspace/axolotl/outputs" ]; then ln -sf /workspace/data/axolotl-artifacts /workspace/axolotl/outputs fi # Execute the passed arguments (CMD) exec "$@"