Import a model registry with custom container

The model registry is a centralized repository for storing trained models, their metadata, versions, and associated artifacts. It allows for version control, management, and organization of models developed during the training phase. This enables easy access, retrieval, and deployment of models for various purposes.

Step 1: Accessing the Model Registry

Log in to your GreenNode AI Platform account and navigate to the Model Registry Dashboard. Find and click the “Import a model registry” button.

Step 2: Import a Model Registry

  • Location & Model registry name: Select the cloud location and enter a unique name for this model registry.

  • Choose the “Custom container” option in the “Container” section.

    • Specify Image Storage URL:

      • Provide the URL of your custom container image stored in the container registry. The GreenNode AI Platform will use this URL to pull the image when the model is deployed.

    • Provide Credentials:

      • If your custom container image requires authentication to access, provide the necessary credentials (such as a username and password, or an access token) to the AI Platform.

      • Ensure that the credentials are securely stored and provided in the required format.

    • Configure Ports and Health Checks:

      • Define the access port through which the prediction requests will be received.

      • Configure the metric port for monitoring model performance metrics.

      • Specify the health-check port and path for monitoring the health status of the prediction service.

    • Network Volume Configuration (optional):

      • If the model relies on data or artifacts stored on a network volume, ensure the volume is attached to the compute instance where you're running the online prediction.

  • Click the “Import” button to complete the process.
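
To make the port configuration above concrete, here is a minimal sketch of a serving container in Python. All port numbers (8080/9090/8081) and the /healthz path are assumptions for illustration; substitute the values you actually enter in the import form.

```python
# Minimal sketch of a serving container that exposes the three kinds of
# ports described above. All port numbers and the health path are
# assumptions for illustration; use the values you enter in the form.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

PREDICT_PORT = 8080       # access port: receives prediction requests (assumed)
METRIC_PORT = 9090        # metric port: exposes performance metrics (assumed)
HEALTH_PORT = 8081        # health-check port (assumed)
HEALTH_PATH = "/healthz"  # health-check path (assumed)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Placeholder inference: read the request body and return a dummy
        # JSON prediction. Replace with real model inference.
        length = int(self.headers.get("Content-Length", 0))
        _payload = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"prediction": None}).encode())

    def log_message(self, *args):  # keep the example quiet
        pass

class MetricHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Prometheus-style plain-text metrics.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"request_count 0\n")

    def log_message(self, *args):
        pass

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer 200 only on the configured health-check path.
        ok = self.path == HEALTH_PATH
        self.send_response(200 if ok else 404)
        self.end_headers()
        self.wfile.write(b"ok" if ok else b"not found")

    def log_message(self, *args):
        pass

def serve(port, handler):
    """Run one HTTP server on a daemon thread and return it."""
    server = HTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

servers = [
    serve(PREDICT_PORT, PredictHandler),
    serve(METRIC_PORT, MetricHandler),
    serve(HEALTH_PORT, HealthHandler),
]
# A real entrypoint would now block (e.g., call serve_forever() on the
# main thread) so the container keeps running.
```

The platform probes the health-check port and path to decide whether the prediction service is ready, so the health endpoint should return 200 only when your model is actually loaded.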

Important Note Regarding Custom Images: the /network-volume directory
Currently, the GreenNode AI Platform automatically mounts a Persistent Volume Claim (PVC) to the /network-volume directory within each Kubernetes Pod. This provides persistent storage for your data. However, if you use a custom image and place your code in the /network-volume directory of that image, the PVC mount will shadow it: the code will not be visible in the /network-volume directory of the running container.

Therefore, when building a custom image, avoid placing code or important data directly in the /network-volume directory. Instead, place your code in a different directory (e.g., /app or /code) to avoid conflicts with the PVC.
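
In code, this convention can be sketched as follows. The /app location and helper names here are hypothetical; the point is that code lives in a directory baked into the image, while runtime artifacts are read from the platform-mounted volume:

```python
# Sketch of the recommended layout: application code is baked into the
# image under a directory such as /app (hypothetical), while
# /network-volume is reserved for the PVC the platform mounts at runtime.
import os

CODE_DIR = "/app"               # safe: not shadowed by the PVC mount
VOLUME_DIR = "/network-volume"  # mounted by the platform; image contents here are hidden

def artifact_path(filename):
    """Resolve a runtime artifact (e.g., model weights) on the network volume."""
    return os.path.join(VOLUME_DIR, filename)

def code_path(filename):
    """Resolve a file that ships inside the image itself."""
    return os.path.join(CODE_DIR, filename)
```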

Related Articles

  • Import a Model Registry

  • Import a model registry with pre-built container

  • Deploy a model endpoint with custom container

  • Import a Model Registry using Triton Server

  • Import a Model Registry using NVIDIA NIM