OpenAI API Calls

A. Purpose

This page demonstrates two production-ready patterns for integrating Large Language Models into automation systems using direct API calls rather than platform-specific nodes. The examples show how LLMs can be safely embedded into backend services, workflow engines, or automation pipelines where direct control, validation, and execution boundaries are required.

The first pattern uses a Standard Chat Completion for non-deterministic, human-readable output such as summaries, explanations, or light classification. The second pattern uses a Function / Tool Call to return structured intent and arguments that can be validated and executed deterministically by backend code.

These patterns are used when automation workflows require platform-agnostic LLM integration, custom validation logic, or execution outside low-code automation nodes, allowing AI to be incorporated without sacrificing reliability, auditability, or system control.

When to choose which:

  • Standard Chat Completion: When you need narrative text or loose JSON.
  • Function/Tool: When you need strict args + action execution.

B. Standard Chat Completion Call

This pattern demonstrates how LLMs can be used within automation systems to generate non-deterministic, human-readable output without directly triggering actions. The model returns free-form text that can be reviewed, logged, summarized, or passed downstream for optional use. This approach is appropriate when AI is used to assist workflows rather than control them, such as summarizing inputs, explaining results, drafting content, or performing light classification where strict schema enforcement is not required.

Output Format

  • Natural language
  • JSON-only (set "response_format": {"type": "json_object"} in the request; still no tool selection)
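
The JSON-only mode can be sketched as follows. Only the response_format field differs from the standard request; note that the Chat Completions API requires the prompt itself to mention JSON when this mode is on. The sample_content value here is illustrative, not a live API response.

```python
import json

# Same endpoint and headers as the code example below; the only change is
# "response_format", which constrains the model to emit strictly valid JSON.
payload = {
    "model": "gpt-4o-mini",
    "response_format": {"type": "json_object"},
    "messages": [
        {
            "role": "user",
            # The prompt must explicitly ask for JSON, or the API
            # rejects the request when json_object mode is enabled.
            "content": "Classify this ticket as bug, feature, or question. "
                       "Reply as JSON with keys 'label' and 'confidence'."
        }
    ],
}

# The answer still arrives in choices[0].message.content, but it is
# guaranteed to parse. Illustrative sample only:
sample_content = '{"label": "bug", "confidence": 0.92}'
parsed = json.loads(sample_content)
```

Because the content is guaranteed-parseable JSON, downstream code can read fields directly instead of scraping free-form text.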

Side Effects

  • None (you read it and decide what to do)

Use Cases

  • Summarize long documents, logs, or meetings into tight bullets.
  • Explain code, stack traces, or concepts in plain English.
  • Draft copy for emails, help text, release notes, documentation.
  • Transform text by rewriting the tone, translating, expanding, or condensing.
  • Light extraction/classification when perfect schema isn't required.

Code Example

Asks the model a question and prints the free-form answer.

Python
            
      import os
      import requests

      API_KEY = os.getenv("OPENAI_API_KEY")

      payload = {
          "model": "gpt-4o-mini",
          "messages": [
              {
                  "role": "user",
                  "content": "Explain what a schema is in one sentence."
              }
          ]
      }

      response = requests.post(
          "https://api.openai.com/v1/chat/completions",
          headers={
              "Authorization": f"Bearer {API_KEY}",
              "Content-Type": "application/json"
          },
          json=payload,
          timeout=30
      )

      # Fail loudly on HTTP errors before touching the body.
      response.raise_for_status()

      data = response.json()
      content = data["choices"][0]["message"]["content"]
      print(content)
            
          

C. Function/Tool Call

This pattern demonstrates how LLMs can be integrated into automation workflows as a controlled decision-making component. Instead of returning free-form text, the model outputs a structured payload containing a tool name and validated arguments that represent explicit intent. The automation system remains fully in control: arguments are validated against a predefined schema, and no side effects occur until backend code executes the corresponding action. This pattern is used when AI must participate in reliable automation flows such as routing, persistence, system updates, or external API execution.

Output Format

  • function_call.name + function_call.arguments (JSON). This is the legacy functions interface; newer API versions expose the same data under tool_calls.

Side Effects

  • Happen only after your code validates arguments against your schema

Use Cases

  • Execute actions via backend code such as creating tickets, querying DBs, calling third-party APIs, sending emails, and running CLI (Command Line Interface) tasks.
  • Fetch structured data with strict arguments (e.g., city, date, limit).
  • Validate inputs against a JSON schema before touching systems.
  • Orchestrate workflows where the model selects tools and passes precise arguments.
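
The validation step above can be sketched without any third-party library. WEATHER_SCHEMA mirrors the tool schema defined in the code example below; validate_args is a hypothetical helper, not part of any OpenAI SDK.

```python
import json

# Enum constraints are tuples; type constraints are Python types.
WEATHER_SCHEMA = {
    "required": ["location"],
    "properties": {
        "location": str,
        "unit": ("celsius", "fahrenheit"),
    },
}

def validate_args(raw_arguments: str, schema: dict) -> dict:
    """Parse the model's JSON arguments and raise on any schema violation."""
    args = json.loads(raw_arguments)
    for key in schema["required"]:
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, rule in schema["properties"].items():
        if key not in args:
            continue
        if isinstance(rule, tuple):            # enum constraint
            if args[key] not in rule:
                raise ValueError(f"invalid value for {key}: {args[key]!r}")
        elif not isinstance(args[key], rule):  # type constraint
            raise ValueError(f"wrong type for {key}")
    return args

# Only after this succeeds does the backend execute the real action.
args = validate_args('{"location": "Chicago", "unit": "celsius"}', WEATHER_SCHEMA)
```

A production system would typically use a full JSON Schema validator instead, but the control flow is the same: parse, validate, then act.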

Code Example

Defines a tool schema; the model responds with function_call.name and function_call.arguments instead of free-form text.

Python
            
      import os
      import json
      import requests

      API_KEY = os.getenv("OPENAI_API_KEY")

      functions = [
          {
              "name": "get_current_weather",
              "description": "Get the current weather in a given location",
              "parameters": {
                  "type": "object",
                  "properties": {
                      "location": {"type": "string"},
                      "unit": {
                          "type": "string",
                          "enum": ["celsius", "fahrenheit"]
                      }
                  },
                  "required": ["location"]
              }
          }
      ]

      payload = {
          "model": "gpt-4o-mini",
          "messages": [
              {
                  "role": "user",
                  "content": "What's the weather like in Chicago in celsius?"
              }
          ],
          "functions": functions,
          "function_call": "auto"
      }

      response = requests.post(
          "https://api.openai.com/v1/chat/completions",
          headers={
              "Authorization": f"Bearer {API_KEY}",
              "Content-Type": "application/json"
          },
          json=payload,
          timeout=30
      )

      # Fail loudly on HTTP errors before touching the body.
      response.raise_for_status()

      data = response.json()
      message = data["choices"][0]["message"]

      # function_call is absent when the model answers in plain text,
      # so guard before parsing the arguments.
      function_call = message.get("function_call") or {}
      fn_name = function_call.get("name")
      fn_args = function_call.get("arguments")

      if fn_name is None:
          print(message.get("content"))
      else:
          print("function_call.name:", fn_name)
          print("function_call.arguments:")
          print(json.dumps(json.loads(fn_args), indent=2))
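
Once the name and validated arguments are in hand, a plain dispatch table keeps execution deterministic: the model only proposes, backend code performs. This is a minimal sketch; get_current_weather here is a stub standing in for real backend logic, and fn_name/fn_args are hard-coded where the example above would supply them.

```python
import json

def get_current_weather(location: str, unit: str = "celsius") -> dict:
    """Stub handler; a real implementation would call a weather service."""
    return {"location": location, "unit": unit, "temp": 21}

# Map tool names the model may emit to the code that actually runs them.
DISPATCH = {"get_current_weather": get_current_weather}

fn_name = "get_current_weather"
fn_args = '{"location": "Chicago", "unit": "celsius"}'

handler = DISPATCH.get(fn_name)
if handler is None:
    # The model asked for a tool we never defined; refuse rather than guess.
    raise ValueError(f"model requested unknown tool: {fn_name}")

result = handler(**json.loads(fn_args))
```

Because unknown tool names are rejected and arguments were validated earlier, the only code paths that can run are the ones the backend explicitly registered.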