Create an Endpoint for online prediction

After training and registering the model, the online prediction component enables the deployment and serving of models to make real-time predictions or inferences on new data. This component provides endpoints or APIs that can be integrated into applications, allowing them to leverage the trained models to process new data and generate predictions.
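As a sketch of how an application might call such an endpoint once it is deployed, the snippet below builds a JSON prediction request and posts it over HTTP. The URL, auth scheme, and payload shape (`"instances"`) are illustrative assumptions; consult the endpoint details page of your deployment for the actual request format.

```python
import json
from urllib import request

def build_prediction_request(instances):
    """Wrap raw input records in the JSON body an online
    prediction endpoint typically expects (assumed shape)."""
    return {"instances": instances}

def predict(endpoint_url, instances, api_token):
    """POST a prediction request and return the parsed response.
    The endpoint_url and Bearer-token header are hypothetical."""
    body = json.dumps(build_prediction_request(instances)).encode()
    req = request.Request(
        endpoint_url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example payload for a model expecting two numeric features per record:
payload = build_prediction_request([[5.1, 3.5], [6.2, 2.9]])
```

In practice you would read the endpoint URL and token from the platform console after deployment rather than hard-coding them.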

Step 1: Access the Online Prediction section

Step 2: Create an Endpoint

  • Location & Endpoint name: Select a location and enter a unique name for this endpoint.

  • Model: Select the model to deploy to this endpoint, which you registered in the Model Registry in the previous steps.

  • Resource configuration: Specify the CPU, GPU, and RAM allocation based on your workload.

  • Replica configuration: Specify the minimum and maximum replica counts for your service so it can scale dynamically with fluctuating demand while maintaining performance and resource efficiency.

    • Minimum replica count: The minimum number of replicas kept running to ensure availability and performance under normal load.

    • Maximum replica count: The maximum number of replicas the service can scale up to without exceeding performance, resource, or cost constraints.

    • Advanced configuration: Set thresholds for CPU, RAM, and GPU utilization and for response latency, defining the maximum allowable usage of these resources before the service scales.

  • Click the "Create endpoint" button at the bottom right corner to deploy your online prediction endpoint with the specified configuration.

  • Navigate to the Monitoring section to view logs generated during the online prediction process.
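The form fields in Step 2 can be summarized as a single configuration object. The sketch below mirrors them as a Python dict; every field name and value here is an illustrative assumption, not the platform's actual API schema.

```python
# Hypothetical endpoint configuration mirroring the Step 2 form fields.
endpoint_config = {
    "name": "demo-endpoint",           # Endpoint name (example value)
    "location": "asia-southeast-1",    # hypothetical location identifier
    "model": "my-registered-model",    # model chosen from the Model Registry
    "resources": {                     # per-replica compute resources
        "cpu": 4,
        "gpu": 1,
        "ram_gb": 16,
    },
    "replicas": {"min": 1, "max": 5},  # replica configuration
    "scaling_thresholds": {            # advanced configuration
        "cpu_utilization_pct": 70,
        "ram_utilization_pct": 80,
        "gpu_utilization_pct": 75,
        "response_latency_ms": 500,
    },
}

def validate_config(cfg):
    """Basic sanity checks before submitting the configuration."""
    assert cfg["name"], "endpoint name must not be empty"
    assert cfg["replicas"]["min"] >= 1, "need at least one replica"
    assert cfg["replicas"]["min"] <= cfg["replicas"]["max"]
    return True
```

Keeping the configuration in one object like this makes it easy to review the resource and scaling settings together before clicking "Create endpoint".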
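To see how the minimum and maximum replica counts interact with the utilization thresholds, here is a minimal autoscaling sketch using the common target-tracking rule (desired = current × observed/target, clamped to the configured bounds). This is a generic illustration of how such thresholds typically drive scaling, not the platform's documented algorithm.

```python
import math

def desired_replicas(current, observed_util_pct, target_util_pct,
                     min_replicas, max_replicas):
    """Target-tracking scaling rule: adjust the replica count in
    proportion to observed vs. target utilization, then clamp the
    result to the configured minimum/maximum replica counts."""
    raw = math.ceil(current * observed_util_pct / target_util_pct)
    return max(min_replicas, min(max_replicas, raw))

# CPU at 90% against a 70% threshold with 2 replicas -> scale out to 3
print(desired_replicas(2, 90, 70, min_replicas=1, max_replicas=5))
# An idle service scales back down, but never below the minimum
print(desired_replicas(3, 10, 70, min_replicas=1, max_replicas=5))
```

The clamp is why both bounds matter: the minimum guarantees availability during quiet periods, and the maximum caps cost during traffic spikes.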
