
Fine-tuning (Beta)

Fine-tune large generic models for your specific purpose on Friendli Dedicated Endpoints.

Friendli Dedicated Endpoints leverages the Parameter-Efficient Fine-Tuning (PEFT) method, which reduces training costs while preserving model quality comparable to full-parameter fine-tuning. Fine-tuning can turn your model into an expert on a specific topic and help reduce hallucinations.

Table of Contents

  1. How to Select Your Base Model
  2. How to Upload Your Dataset
  3. How to Create Your Fine-tuning Job
  4. How to Monitor Progress
  5. How to Deploy the Fine-tuned Model
  6. Resources

By the end of this guide, you will understand how you can effectively fine-tune your generative AI models by using Friendli Dedicated Endpoints.

How to Select Your Base Model

Through our (1) Hugging Face Integration and (2) Weights & Biases (W&B) Integration, you can select the base model to fine-tune. Explore and find open-source models that are supported on Friendli Dedicated Endpoints here. For guidance on the necessary format and file requirements, especially when using your own models, review the FAQ section on general requirements for a model.

  • Hugging Face Model

  • Weights & Biases Model

Hugging Face Integration

Integrate your Hugging Face account to access your private repo or a gated repo. Go to User settings > Account > Hugging Face integration and save your Hugging Face access token. This access token will be used upon creating your fine-tuning jobs.

info

Check our FAQ sections on using a Hugging Face repository as a model and integrating a Hugging Face account for more detailed integration information.

Weights & Biases (W&B) Integration

Integrate your Weights & Biases account to access your model artifact. Go to User settings > Account > Weights & Biases integration and save your Weights & Biases API key, which you can obtain here. This API key will be used upon creating your fine-tuning jobs.

info

Check our FAQ section on using a W&B artifact as a model and integrating a W&B account for more detailed integration information.

How to Upload Your Dataset

Navigate to the ‘Datasets’ section within your dedicated endpoints project page to upload your fine-tuning dataset. Enter the dataset name, then either drag and drop your .jsonl training and validation files or browse for them on your computer. If your files meet the required criteria, the blue 'Upload' button will be activated, allowing you to complete the process.


Dataset Format

The dataset used for fine-tuning should satisfy the following conditions:

  1. The dataset must contain a column named "messages", which will be used for fine-tuning.
  2. Each row in the "messages" column should be compatible with the chat template of the base model. For example, the chat template in tokenizer_config.json of mistralai/Mistral-7B-Instruct-v0.2 alternates between user and assistant messages. Concretely, each row in the "messages" field should follow a format like: [{"role": "user", "content": "The 1st user's message"}, {"role": "assistant", "content": "The 1st assistant's message"}]. HuggingFaceH4/ultrachat_200k is an example of a dataset that is compatible with this chat template.

note

You can access our example dataset ‘FriendliAI/gsm8k’ on Hugging Face and explore some of our quantized generative AI models on our Hugging Face page.
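The format above can be sketched in plain Python: the snippet below builds a few chat-formatted rows, writes them to a .jsonl file (one JSON object per line), and sanity-checks the file before upload. The file name and example contents are illustrative, not requirements of the platform.

```python
import json

# Illustrative training rows: each row has a "messages" list that
# alternates between user and assistant turns.
rows = [
    {"messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
    ]},
    {"messages": [
        {"role": "user", "content": "Name a prime number."},
        {"role": "assistant", "content": "7 is a prime number."},
    ]},
]

# JSON Lines format: one serialized object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

def validate_jsonl(path):
    """Check that every line parses, has a non-empty "messages" list,
    and that roles alternate user/assistant starting with user."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            messages = row["messages"]
            assert isinstance(messages, list) and messages
            for i, msg in enumerate(messages):
                expected = "user" if i % 2 == 0 else "assistant"
                assert msg["role"] == expected
                assert isinstance(msg["content"], str)
    return True

print(validate_jsonl("train.jsonl"))  # True if the file is well-formed
```

Running a check like this locally catches malformed rows before the upload step, where the 'Upload' button only activates for files that meet the required criteria.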

How to Create Your Fine-tuning Job

Navigate to the ‘Fine-tuning’ section within your dedicated endpoints project page to launch and view your fine-tuning jobs. Click a fine-tuning job to view its training progress on the job's detail page.

To create a new fine-tuning job, follow these steps:

  1. Go to your project and click on the Fine-tuning tab.
  2. Click New job.
  3. Fill out the job configuration based on the following field descriptions:
    • Job name: Name of the fine-tuning job to create.
    • Model: Hugging Face model repository or Weights & Biases model artifact name.
    • Dataset: Your uploaded fine-tuning dataset.
    • Weights & Biases (W&B): Optional settings for the W&B integration.
      • W&B project: Your W&B project name.
    • Hyperparameters: Fine-tuning hyperparameters.
      • Learning rate: Initial learning rate for the AdamW optimizer.
      • Batch size: Total training batch size.
      • Total number of training: Configure the training duration with either the number of training epochs or the number of training steps.
        • Number of training epochs: Total number of training epochs.
        • Training steps: Total number of training steps.
      • Evaluation steps: Number of steps between model evaluations on the validation dataset.
      • LoRA rank: Rank of the LoRA parameters (optional).
      • LoRA alpha: Scaling factor that determines the influence of the low-rank matrices during fine-tuning (optional).
      • LoRA dropout: Dropout rate applied during fine-tuning (optional).
  4. Click the Create button to create a job with the entered configuration.
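The relationship between the epoch-based and step-based duration settings, and the effect of the LoRA rank and alpha values, can be sketched with some quick arithmetic. The dataset size and hyperparameter values below are illustrative, not recommendations.

```python
import math

# Illustrative values only; pick hyperparameters for your own job.
dataset_size = 8000    # number of training examples
batch_size = 16        # "Batch size": total training batch size
num_epochs = 3         # "Number of training epochs"

# One training step processes one batch, so the "Training steps"
# equivalent of an epoch count is ceil(dataset_size / batch_size) * epochs.
steps_per_epoch = math.ceil(dataset_size / batch_size)
training_steps = steps_per_epoch * num_epochs
print(training_steps)  # 1500

# In standard LoRA, the low-rank update is scaled by alpha / rank, so a
# larger alpha (or smaller rank) increases the adapter's influence on
# the base model's weights.
lora_rank, lora_alpha = 16, 32
lora_scaling = lora_alpha / lora_rank
print(lora_scaling)  # 2.0
```

This is why epochs and steps are alternatives: given the dataset and batch size, either one determines the other.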

How to Monitor Progress

After launching the fine-tuning job, you can monitor the job overview, including progress information and fine-tuning configuration.

If you have integrated your Weights & Biases (W&B) account, you can also monitor the training status in your W&B project. Read our FAQ section on using W&B with dedicated fine-tuning to learn more about monitoring your fine-tuning jobs on their platform.

How to Deploy the Fine-tuned Model

After the fine-tuning process is completed, you can immediately deploy the model by clicking the 'Deploy' button. The name of the fine-tuned LoRA adapter will be the same as your fine-tuning job name.

The steps to deploy the fine-tuned model are the same as deploying a custom model on Friendli Dedicated Endpoints. Refer to the Endpoints documentation for more detailed information on launching a model.
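Once deployed, the endpoint can be queried over HTTP. The sketch below only constructs a request body; the endpoint URL, token variable, and job name are hypothetical placeholders, and the OpenAI-style chat-completions payload shape is an assumption for illustration, not a statement of the Friendli API. The key point from above is that the fine-tuned LoRA adapter is addressed by the name of your fine-tuning job.

```python
import json

# Hypothetical payload for a chat request against the deployed endpoint.
# "my-finetuning-job" stands in for your actual fine-tuning job name,
# which is also the name of the fine-tuned LoRA adapter.
payload = {
    "model": "my-finetuning-job",
    "messages": [
        {"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}
    ],
    "max_tokens": 128,
}
body = json.dumps(payload)

# To actually send it, you would POST `body` to your endpoint URL with an
# auth header, e.g. with the `requests` library:
# requests.post(ENDPOINT_URL, data=body,
#               headers={"Authorization": f"Bearer {FRIENDLI_TOKEN}"})
print(json.loads(body)["model"])  # my-finetuning-job
```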

Resources

Beta Feature Information and Support

note

The fine-tuning feature is currently in Beta. While we strive to provide a stable and reliable experience, this feature is still under active development. As a result, you may encounter unexpected behavior or limitations. We encourage you to provide feedback to help us improve the feature before its official release.

  • Feature request & feedback

    If you have any suggestions, ideas, or feedback regarding the fine-tuning feature, please use the provided feature request & feedback link to share your thoughts. This will help us understand your needs and prioritize improvements.

  • Contact support

    If you encounter any issues or need assistance with the fine-tuning feature, please reach out to us through contact support. Our support team is here to help you resolve any problems and answer your questions.