Create a tuning job

To create a supervised tuning job, follow the steps below and provide the input parameters described in the next section.

Steps to Create a Supervised Tuning Job

  1. Access the Tuning Job Creation Interface: Log in to the provider's platform and open the tuning job creation page.
  2. Fill in the Input Parameters: Provide the required information for each parameter.
  3. Review and Submit: Carefully review your input parameters and submit the job.
  4. Monitor the Job: Track the progress of your tuning job through the platform's interface.

Understanding the Input Parameters

The following parameters appear on the tuning job creation form. Illustrative sketches of a full configuration and a dataset sanity check follow the list.

  1. Tuning Job Name: A unique name for your tuning job.
  2. Location: The geographic location where the tuning job will run (e.g., Thailand - Bangkok).
  3. Tuning Type: Specify "supervised_tuning" for this type of job.
  4. Base Model: Select a pre-trained base model from the provider's library (e.g., GPT-3, BERT).
  5. Learning Rate Multiplier: Adjust the learning rate of the optimizer.
  6. Adapter Size: Set the size of the adapter layers added to the base model.
  7. Instance Type: Choose the type of computing instances to use for training (e.g., n1-standard-4).
  8. Instance Count: Specify the number of instances to use for parallel training.
  9. Network Volume: Select the network volume where your training and validation datasets are stored.
  10. Input Tuning Dataset: Provide the path to the input training data within the network volume.
  11. Validate Input Data: Optionally upload a sample dataset to verify its format.
  12. Validation Tuning Dataset: Specify the path to the validation dataset within the network volume.
  13. Output Tuned Model: Provide the path where the tuned model will be saved.
  14. Number of Epochs: Set the number of times the model will iterate over the entire training dataset.
  15. Batch Size: Specify the number of training examples processed in each batch.
  16. Suffix: Add a suffix to the output model filename.
  17. Cutoff Tokens: Set the maximum number of tokens in the input sequence.
  18. Gradient Accumulation Step: Accumulate gradients over multiple steps before updating the model.
  19. Number of Saved Checkpoints: Specify the number of checkpoints to save during training.
  20. Saved Interval Steps: Set the interval (in steps) at which to save checkpoints.
  21. Validation Interval Steps: Set the interval (in steps) at which to evaluate the model on the validation set.
  22. Weights & Biases (WandB) API Key: Provide your WandB API key to log and visualize the training process.
  23. WandB Project: Specify the WandB project name for organizing your experiments.
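
As a quick reference, the sketch below gathers the supervised tuning inputs into a single Python dictionary. The key names simply mirror the form fields described above and are not an official GreenNode API schema; the model name, paths, and other values are placeholders.

```python
# Illustrative summary of the supervised tuning form fields.
# Key names mirror the UI labels above; this is NOT an official API schema,
# and every value (model, paths, project names) is a placeholder.
supervised_tuning_job = {
    "tuning_job_name": "my-sft-job-001",           # unique job name
    "location": "Thailand - Bangkok",              # where the job runs
    "tuning_type": "supervised_tuning",
    "base_model": "example-base-model",            # choose from the provider's library
    "learning_rate_multiplier": 1.0,
    "adapter_size": 8,
    "instance_type": "n1-standard-4",
    "instance_count": 1,
    "network_volume": "my-training-volume",        # volume holding the datasets
    "input_tuning_dataset": "datasets/train.jsonl",
    "validation_tuning_dataset": "datasets/valid.jsonl",
    "output_tuned_model": "models/my-sft-job-001/",
    "num_epochs": 3,
    "batch_size": 8,
    "suffix": "v1",                                # appended to the output model filename
    "cutoff_tokens": 2048,                         # max tokens per input sequence
    "gradient_accumulation_step": 4,
    "num_saved_checkpoints": 2,
    "saved_interval_steps": 500,
    "validation_interval_steps": 500,
    "wandb_api_key": "<your-wandb-api-key>",       # optional, enables WandB logging
    "wandb_project": "tuning-experiments",
}
```

Writing the values down in one place like this makes step 3 (Review and Submit) easier: mismatched dataset paths or an unintended epoch and batch-size combination tend to stand out.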
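
Before pointing Input Tuning Dataset and Validation Tuning Dataset at files on the network volume, it can also help to sanity-check the files locally; this complements the optional Validate Input Data step above. The sketch below assumes a JSONL file of chat-style records with role/content messages, in the spirit of the GreenNode format described in "Prepare Dataset for Model Tuning"; the exact field names are assumptions, so adapt the checks to the format you actually use.

```python
import json

ALLOWED_ROLES = {"system", "user", "assistant"}

def check_jsonl_dataset(path: str) -> int:
    """Count well-formed records in a chat-style JSONL file and report bad lines.

    Assumes each line is a JSON object with a "messages" list of
    {"role": ..., "content": ...} entries; adjust to your actual schema.
    """
    n_ok = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # skip blank lines
            try:
                messages = json.loads(line)["messages"]
                if all(m["role"] in ALLOWED_ROLES and m["content"] for m in messages):
                    n_ok += 1
                else:
                    print(f"line {lineno}: unexpected role or empty content")
            except (json.JSONDecodeError, KeyError, TypeError) as err:
                print(f"line {lineno}: malformed record ({err})")
    return n_ok

print("valid training examples:", check_jsonl_dataset("datasets/train.jsonl"))
```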

Steps to Create an RLHF Tuning Job

To create an RLHF tuning job, you'll need to provide the following information, in addition to the parameters for supervised tuning:
  1. Access the Tuning Job Creation Interface: Log in to the provider's platform and open the tuning job creation page.
  2. Fill in the Input Parameters: Provide the required information for each parameter. Additional parameters for RLHF tuning (an illustrative sketch follows this list):
    1. Reward Learning Rate Multiplier: Adjust the learning rate for the reward model.
    2. Reward Training Dataset: Specify the path to the reward training dataset within the network volume.
    3. Reward Batch Size: Specify the batch size for reward model training.
    4. Reward Gradient Accumulation Step: Accumulate gradients over multiple steps before updating the reward model.
  3. Review and Submit: Carefully review your input parameters and submit the job.
  4. Monitor the Job: Track the progress of your tuning job through the platform's interface.
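
Continuing the illustrative dictionary from the supervised section, an RLHF job keeps all of the supervised fields and adds the reward-model settings. As before, the key names only mirror the form fields, and the "rlhf_tuning" label and dataset path are assumptions.

```python
# RLHF reuses the supervised fields and adds reward-model settings.
# Key names mirror the UI labels; values and the tuning-type label are placeholders.
rlhf_tuning_job = {
    **supervised_tuning_job,                        # all supervised fields from above
    "tuning_type": "rlhf_tuning",                   # assumed label for this job type
    "reward_learning_rate_multiplier": 1.0,
    "reward_training_dataset": "datasets/reward_train.jsonl",
    "reward_batch_size": 8,
    "reward_gradient_accumulation_step": 4,
}
```

Reward-model training data typically pairs a prompt with a preferred and a rejected response; check the platform's dataset guide for the exact format it expects before uploading.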

Related Articles

  • Manage a model tuning job
  • Prepare Dataset for Model Tuning
  • Create a network volume
  • Create an endpoint