Function calling

GreenNode MaaS supports function calling, allowing you to define a set of tools or functions that the model can reason about and invoke intelligently based on the conversation context. This enables the creation of dynamic, interactive agents capable of retrieving real-time data and producing structured, actionable outputs.  
Importantly, GreenNode's function calling API does not directly execute these functions. Instead, it returns tool invocation instructions in a format compatible with OpenAI’s function calling schema.  
The model outputs a JSON object specifying which function(s) to call and with what arguments. You can then execute those functions in your backend and pass the results back to the model in follow-up API calls—enabling a continuous, context-aware interaction loop.
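
Concretely, when the model decides to call a tool, the assistant message it returns carries a tool_calls array instead of plain text content. The sketch below shows the general shape of that message in the OpenAI-compatible format; the ID and argument values are purely illustrative placeholders.

# Illustrative shape of an assistant message containing tool-call instructions.
# The id and arguments shown here are made-up placeholder values.
assistant_message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "chatcmpl-tool-example-id",  # placeholder ID
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "arguments": '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'
        }
    }]
}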

How Function Calling Works in GreenNode MaaS

  1. Define Available Tools: Start by defining a set of tools or functions the model can access. Each tool is described using a JSON Schema, outlining its name, parameters, and structure (a hand-written schema sketch follows this list). These tools are then provided to the model at runtime along with the user's query.
  2. Model Determines Intent: GreenNode’s model analyzes the user’s input to identify intent. Depending on the context, it responds with either a natural-language reply or generates a function call by selecting a relevant tool and filling in the required arguments based on the defined schema.  
  3. Execute and Iterate: When the model suggests a function call, your system executes the function externally. The result is then passed back to the model in a follow-up API call. This creates an interactive loop, allowing the model to reason with fresh data, continue the conversation, or suggest additional actions.
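
As a sketch of step 1, a tool can also be described with a hand-written JSON Schema rather than generating one from a Pydantic model, as the full example below does. This schema is illustrative and mirrors the get_current_weather tool used in that example.

# Illustrative hand-written tool definition (step 1). The full example below
# generates an equivalent schema from a Pydantic model instead.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "The city, e.g. 'San Francisco'"},
                "state": {"type": "string", "description": "Two-letter state code, e.g. 'CA'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city", "state", "unit"]
        }
    }
}]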

Example:

import json
import os
from typing import Literal

from openai import OpenAI
from pydantic import BaseModel, Field

client = OpenAI(
    base_url="https://maas.api.greennode.ai/v1",
    api_key=os.getenv("GREENNODE_API_KEY"),
)

model = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

# Describe the tool's parameters with a Pydantic model; its JSON Schema is
# passed to the API as the tool's parameter specification.
class GetCurrentWeatherParams(BaseModel):
    city: str = Field(
        ...,
        description="The city to find the weather for, e.g. 'San Francisco'"
    )
    state: str = Field(
        ...,
        description=(
            "The two-letter abbreviation for the state that the city is in, "
            "e.g. 'CA' for California"
        )
    )
    unit: Literal['celsius', 'fahrenheit'] = Field(
        ...,
        description="The unit to fetch the temperature in"
    )

# Advertise the tool to the model in the OpenAI-compatible format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": GetCurrentWeatherParams.model_json_schema()
    }
}]

messages = [
    {
        "role": "user",
        "content": "Hi! How are you doing today?"
    },
    {
        "role": "assistant",
        "content": "I'm doing well! How can I help you?"
    },
    {
        "role": "user",
        "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?"
    }
]

# tool_choice forces the model to call get_current_weather for this request.
chat_completion = client.chat.completions.create(
    messages=messages,
    model=model,
    tools=tools,
    tool_choice={
        "type": "function",
        "function": {
            "name": "get_current_weather"
        }
    }
)

# Record the model's tool-call instructions in the conversation history.
messages.append({
    "role": "assistant",
    "tool_calls": chat_completion.choices[0].message.tool_calls
})

# Simulate a tool call
def get_current_weather(city: str, state: str, unit: str):
    return (
        f"The weather in {city}, {state} is 85 degrees {unit}. "
        "It is partly cloudy, with highs in the 90's."
    )

available_tools = {"get_current_weather": get_current_weather}

# Execute each requested tool call and append the result as a "tool" message.
completion_tool_calls = chat_completion.choices[0].message.tool_calls
for call in completion_tool_calls:
    tool_to_call = available_tools[call.function.name]
    args = json.loads(call.function.arguments)
    result = tool_to_call(**args)
    print(result)
    messages.append({
        "role": "tool",
        "content": result,
        "tool_call_id": call.id,
        "name": call.function.name
    })

print(messages)
Messages array with response:
[
    {
        'role': 'user',
        'content': 'Hi! How are you doing today?'
    },
    {
        'role': 'assistant',
        'content': "I'm doing well! How can I help you?"
    },
    {
        'role': 'user',
        'content': 'Can you tell me what the temperature will be in Dallas, in Fahrenheit?'
    },
    {
        'role': 'assistant',
        'tool_calls': [
            ChatCompletionMessageToolCall(
                id='chatcmpl-tool-3a9ce97325f2458ca86f2542d4987638',
                function=Function(
                    arguments='{"city": "Dallas", "state": "Texas", "unit": "fahrenheit"}',
                    name='get_current_weather'
                ),
                type='function'
            )
        ]
    },
    {
        'role': 'tool',
        'content': "The weather in Dallas, Texas is 85 degrees fahrenheit. It is partly cloudy, with highs in the 90's.",
        'tool_call_id': 'chatcmpl-tool-3a9ce97325f2458ca86f2542d4987638',
        'name': 'get_current_weather'
    }
]
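
The example above stops after appending the tool result to the conversation. To close the loop described in step 3, you would typically send the updated messages back to the model so it can compose a final natural-language answer. A minimal sketch, reusing client, model, and messages from the example above:

# Minimal sketch: send the tool result back so the model can answer in natural language.
# Reuses client, model, and messages from the example above.
follow_up = client.chat.completions.create(
    model=model,
    messages=messages,
)
print(follow_up.choices[0].message.content)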

