AI Support Triage

n8n
JavaScript
OpenAI API
Git
Full n8n workflow showing all steps.

A. System Overview

AI Support Triage was built to demonstrate how AI can be used responsibly to automate high-volume support intake without removing human oversight. The system focuses on classifying incoming requests, standardizing data, and drafting consistent first responses, allowing support teams to respond faster while maintaining control and visibility.

Highlights

  • Workflow Orchestration: n8n is used to coordinate intake, data transformation, AI processing, and downstream actions.
  • AI Processing: OpenAI's ChatGPT is used for intent classification and response drafting within clearly defined decision boundaries. The system is designed to be model-agnostic, allowing alternative LLM providers to be integrated without architectural changes.
  • Data Handling: Incoming support messages are normalized through a lightweight Python service that validates required fields, standardizes naming conventions, and prepares structured input for downstream AI classification and storage.
  • Persistence: Ticket data and AI decisions are designed to be stored in PostgreSQL for auditing and future analysis.
  • Version Control: All workflow logic and supporting code are tracked in GitHub to support iteration and maintainability.

AI Decision Schema

Before implementing AI-driven classification, the expected decisions and outputs are defined to ensure predictable behavior, auditability, and safe automation. This contract establishes what the system is allowed to decide and how those decisions are represented downstream.

Classification Outputs

  • Category: Billing, Bug, Feature Request, Account, Other
  • Urgency: Low, Medium, High
  • Confidence: Numeric score between 0 and 1

Sample Decision Output

By defining these outputs upfront, the system enforces clear boundaries on AI behavior and allows downstream automation, storage, and review processes to remain stable as the workflow evolves.

JSON
            
      {
        "category": "Billing",
        "urgency": "High",
        "confidence": 0.91
      }
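
Downstream automation can enforce this contract mechanically. The sketch below is a minimal JavaScript guard for the schema above; the function name and its use are illustrative and not part of the workflow itself.

JavaScript

      // Illustrative guard for the decision schema; not a node in the actual workflow.
      const CATEGORIES = ["Billing", "Bug", "Feature Request", "Account", "Other"];
      const URGENCIES = ["Low", "Medium", "High"];

      function isValidDecision(decision) {
        return (
          CATEGORIES.includes(decision.category) &&
          URGENCIES.includes(decision.urgency) &&
          typeof decision.confidence === "number" &&
          decision.confidence >= 0 &&
          decision.confidence <= 1
        );
      }

      // isValidDecision({ category: "Billing", urgency: "High", confidence: 0.91 }) -> true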
            
          

Process Map

External Input
  • Incoming support request submitted from any channel as structured text plus metadata.
System Boundary
  • Intake & Classification: Accepts requests via webhook, normalizes input data, and classifies intent and urgency using an LLM.
  • AI Output Processing: Captures, cleans, parses, and validates AI output into structured, machine-readable fields.
  • Response: Returns a deterministic, structured acknowledgment to the originating system.
  • Persistence: Stores finalized classification decisions and metadata for auditing and downstream use.
  1. External Request Submission
    A support request is submitted from any external system as structured text plus metadata.
  2. Webhook Intake
    The request enters the system through a webhook, establishing a secure boundary between external input and internal processing.
  3. Normalization & Classification
    Incoming data is normalized and analyzed by an LLM to determine category, urgency, and confidence.
  4. AI Output Processing
    The raw AI response is captured, cleaned, parsed, and converted into structured, machine-readable fields.
  5. Persistence
    The finalized classification decision and metadata are stored for auditing and downstream use.
  6. Deterministic Response
    A minimal, structured acknowledgment is returned to the originating system, closing the request lifecycle.

B. Workflow

Segment 1: Intake & Classification

Purpose

The purpose of this segment is to receive incoming support requests, normalize the input data, and submit the standardized message to the LLM for classification.

Segment 1 of the n8n workflow showing webhook intake, data normalization, and AI classification steps.

1. Webhook Intake

Function

Rather than tying the system to a specific UI (form, email client, or chat app), a webhook provides a channel-agnostic entry point: any system capable of sending an HTTP request can submit a support ticket. The Webhook node acts as the system's intake boundary, receiving incoming support requests as HTTP POST calls and making them available as structured data that the rest of the workflow can process.

Input

The webhook expects a JSON payload containing the user's support message in a predetermined structure.

JSON
              
      {
        "message": "My account was charged twice"
      }
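
Any HTTP-capable client can submit this payload. The sketch below shows one way to do so in JavaScript using fetch; the endpoint URL is a placeholder, since the real one is generated by the n8n Webhook node.

JavaScript

      // Placeholder endpoint; the actual URL is generated by the n8n Webhook node.
      const WEBHOOK_URL = "https://n8n.example.com/webhook/support-intake";

      async function submitTicket(message) {
        const response = await fetch(WEBHOOK_URL, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ message }),
        });
        return response.json(); // e.g. { "status": "stored" }
      }

      // submitTicket("My account was charged twice");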
              
            

Output

When the webhook receives this request, n8n wraps the incoming payload into its internal execution data structure. Because the intake contract enforces a consistent payload structure, downstream nodes can reliably access the message as body.message without conditional handling.

The webhook node does no processing or validation. Its sole responsibility is to:

  • Accept the request
  • Make the raw data available to the workflow
  • Establish a clear boundary between external input and internal processing
All normalization, validation, and AI logic occur after this step.

JSON
              
      {
        "body": {
          "message": "My account was charged twice"
        }
      }
              
            

2. Data Normalization

Function

Incoming data from external systems should never be relied on directly by downstream logic. By normalizing the input early, the workflow establishes a stable internal data contract that all subsequent nodes can depend on. The Data Normalization step standardizes incoming request data into a consistent internal format. This is handled using the Edit Fields (Set) node to explicitly map and rename fields used throughout the workflow.

Input

At this stage, the incoming support message is accessed from the webhook payload as body.message.

JSON
              
      {
        "body": {
          "message": "My account was charged twice"
        }
      }
              
            

Output

The Edit Fields (Set) node maps the incoming message to a clearly named internal field called ticket_message. This renaming ensures that all downstream nodes reference a single, predictable field name. The Data Normalization step performs no validation or decision-making. Its sole responsibility is to:

  • Extract the support message from the webhook payload
  • Rename it into a consistent internal field
  • Isolate downstream logic from changes in the incoming request format
All classification, AI reasoning, and persistence occur after this step.

JSON
              
      {
        "ticket_message": "My account was charged twice"
      }
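
The workflow performs this mapping with the Edit Fields (Set) node; the sketch below shows the equivalent logic as it might appear in an n8n Code node, purely for illustration.

JavaScript

      // Equivalent of the Edit Fields (Set) mapping: body.message -> ticket_message.
      // In an n8n Code node (Run Once for All Items), `items` holds the incoming execution data.
      return items.map((item) => ({
        json: {
          ticket_message: item.json.body.message,
        },
      }));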
              
            

3. AI Classification (LLM)

Function

Support triage requires understanding what a request is about before any action can be taken. This step introduces AI as a decision-making component, allowing the system to consistently classify incoming requests while keeping those decisions explicit and inspectable. The AI Classification (LLM) step analyzes the normalized support message and determines its intent and urgency. This is handled using the OpenAI - Message a model node, which calls GPT-4.1-mini to perform structured classification.
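
The exact prompt text lives in the OpenAI node configuration and is not reproduced here. The sketch below shows one way such a prompt could be phrased so the model returns only the fields defined in the AI Decision Schema; the wording is illustrative.

JavaScript

      // Illustrative prompt construction; the production prompt is configured in the OpenAI node.
      function buildClassificationPrompt(ticketMessage) {
        return [
          "You are a support triage classifier.",
          "Classify the message below and respond with valid JSON only, using exactly these fields:",
          '  "category": one of "Billing", "Bug", "Feature Request", "Account", "Other"',
          '  "urgency": one of "Low", "Medium", "High"',
          '  "confidence": a number between 0 and 1',
          "",
          "Message: " + ticketMessage,
        ].join("\n");
      }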

Input

At this stage, the workflow operates exclusively on the normalized internal field ticket_message, ensuring the AI is never exposed to raw or variable input structures.

JSON
              
      {
        "ticket_message": "My account was charged twice"
      }
              
            

Output

The model is prompted to return only valid JSON describing the support request. The output follows the predefined decision contract of category, urgency, and confidence score described in the AI Decision Schema section. The AI Classification step performs no routing, persistence, or response delivery. Its sole responsibility is to:

  • Analyze the support message
  • Assign a category and urgency level
  • Return a structured, machine-readable decision
This decision is then consumed by downstream steps for storage and response drafting.

JSON
              
      {
        "output": [
          {
            "id": "msg_06309f921a0d94f900694a9d15da448196ab919412ac67386d",
            "type": "message",
            "status": "completed",
            "content": [
              {
                "type": "output_text",
                "annotations": [],
                "logprobs": [],
                "text": "```json\n{\n  \"category\": \"Billing\",\n  \"urgency\": \"High\",\n  \"confidence\": 0.9\n}\n```"
              }
            ],
            "role": "assistant"
          }
        ]
      }
              
            

Segment 2: AI Output Processing

Purpose

The purpose of this segment is to capture the AI output, clean and parse it in clearly separated steps, and expose the final classification fields for downstream use. This approach preserves the original model response, ensures safe transformation of human-readable output into machine-usable data, and provides strong observability and debuggability throughout the workflow.

Segment 2 of the n8n workflow showing the capture, clean, and parse classification output steps.

4. Capture Classification Output

Function

Large Language Models return text, even when that text is formatted as JSON. Capturing the raw model response separately preserves the original AI output for inspection, debugging, and auditing before any transformation or parsing occurs. The Capture Classification Output step records the raw response returned by the AI classification node without modification. This is handled using the Edit Fields (Set) node, which extracts the model's textual output and stores it in a dedicated internal field.

Input

At this stage, the workflow consumes the response produced by the OpenAI - Message a model node. The classification result is embedded within the model's response structure as text.

JSON
              
        [
          {
            "output": [
              {
                "id": "msg_06309f921a0d94f900694a9d15da448196ab919412ac67386d",
                "type": "message",
                "status": "completed",
                "content": [
                  {
                    "type": "output_text",
                    "annotations": [],
                    "logprobs": [],
                    "text": "```json\n{\n  \"category\": \"Billing\",\n  \"urgency\": \"High\",\n  \"confidence\": 0.9\n}\n```"
                  }
                ],
                "role": "assistant"
              }
            ]
          }
        ]
              
            

Output

The Edit Fields (Set) node extracts the model's response text and assigns it to a single internal field named classification_text. At this stage, the output remains a string, as indicated by the presence of Markdown code fences, the literal "json" label, and preserved line breaks intended for human readability rather than machine parsing. This step performs no cleaning, parsing, or validation. Its sole responsibility is to:

  • Preserve the original AI-generated output exactly as returned
  • Make the model response inspectable and debuggable
  • Isolate raw AI output from downstream transformations
All formatting cleanup and JSON parsing occur in subsequent steps.

JSON
              
        [
          {
            "classification_text": "```json\n{\n  \"category\": \"Billing\",\n  \"urgency\": \"High\",\n  \"confidence\": 0.9\n}\n```"
          }
        ]
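
Given the response shape shown above, the capture reduces to a single field reference. The sketch below shows the equivalent n8n expression; the path is taken from the sample response and may differ if the OpenAI node's output format changes.

JavaScript

      // Edit Fields (Set) assignment for classification_text, based on the sample response above:
      // {{ $json.output[0].content[0].text }}
      const classification_text = $json.output[0].content[0].text;
      // classification_text is still a string: "```json\n{ ... }\n```"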
            
            

5. Clean Classification Output

Function

The AI-generated response arrives wrapped in Markdown code fences with a literal "json" label, so it must be cleaned before it can be safely parsed by machines. The Clean Classification Output step strips these formatting artifacts. This is handled using the Edit Fields (Set) node, which produces a cleaned string from the raw classification text.

Input

At this stage, the workflow consumes the raw classification output stored in classification_text. Although the content visually resembles JSON, it remains a formatted string and is not yet safe for direct parsing.

JSON
              
      [
        {
          "classification_text": "```json\n{\n  \"category\": \"Billing\",\n  \"urgency\": \"High\",\n  \"confidence\": 0.9\n}\n```"
        }
      ]
              
            

Output

The Edit Fields (Set) node removes formatting artifacts and produces a cleaned string stored in classification_clean. At this point, the output remains a string, but its contents are now valid JSON text suitable for parsing. This step performs no parsing or validation. Its sole responsibility is to:

  • Preserve the original classification content
  • Remove Markdown code fences and formatting markers
  • Prepare the data for safe conversion into structured fields

JSON

      [
        {
          "classification_clean": "{\n  \"category\": \"Billing\",\n  \"urgency\": \"High\",\n  \"confidence\": 0.9\n}"
        }
      ]
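
The cleanup itself can be a small string transformation. The sketch below strips the Markdown fences and the json label, assuming the fenced format shown in the previous step; the exact expression used in the workflow may differ.

JavaScript

      // Remove the leading ```json fence and the trailing ``` fence, then trim whitespace.
      const classification_clean = $json.classification_text
        .replace(/^```json\s*/i, "")
        .replace(/\s*```\s*$/, "")
        .trim();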
              
            

6. Parse Classification Output

Function

The cleaned AI output must be converted from a JSON-formatted string into a structured object that can be reliably accessed by downstream workflow steps.

Input

At this stage, the workflow consumes the cleaned classification output stored in classification_clean. Although the content is now valid JSON text, it remains a string and cannot yet be accessed as individual fields.

JSON
              
      [
        {
          "classification_clean": "{\n  \"category\": \"Billing\",\n  \"urgency\": \"High\",\n  \"confidence\": 0.9\n}"
        }
      ]
              
            

Output

The Edit Fields (Set) node applies JSON parsing to convert the cleaned string into a structured object stored in classification_parsed. This step performs no interpretation or transformation of values. Its sole responsibility is to:

  • Convert valid JSON text into a machine-readable object
  • Expose structured classification fields for downstream access
  • Ensure type-safe consumption of AI-generated decisions

JSON

      [
        {
          "classification_parsed": {
            "category": "Billing",
            "urgency": "High",
            "confidence": 0.9
          }
        }
      ]
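
The conversion is a standard JSON.parse call. The sketch below shows the equivalent logic; in the workflow it is applied as an Edit Fields (Set) expression.

JavaScript

      // Parse the cleaned JSON string into a structured object.
      const classification_parsed = JSON.parse($json.classification_clean);
      // classification_parsed.category -> "Billing"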
            

7. Extract Classification Fields

Function

The parsed classification object must be flattened into explicit, top-level fields so it can be easily stored, routed, and consumed by downstream systems.

Input

At this stage, the workflow consumes the structured classification object stored in classification_parsed. The classification data is now fully machine-readable but still nested within an object.

JSON
              
      [
        {
          "classification_parsed": {
            "category": "Billing",
            "urgency": "High",
            "confidence": 0.9
          }
        }
      ]
              
            

Output

The Edit Fields (Set) node extracts individual classification attributes and exposes them as top-level fields. This step performs no validation, inference, or decision-making. Its sole responsibility is to:

  • Flatten structured classification data into explicit fields
  • Make classification attributes directly accessible to downstream nodes
  • Prepare the data for persistence, routing, or response generation

JSON

      [
        {
          "category": "Billing",
          "urgency": "High",
          "confidence": 0.9
        }
      ]
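
Each attribute maps to its own top-level assignment. The sketch below shows the equivalent expressions; field names match the AI Decision Schema.

JavaScript

      // Equivalent Edit Fields (Set) assignments for the flattened fields:
      // category:   {{ $json.classification_parsed.category }}
      // urgency:    {{ $json.classification_parsed.urgency }}
      // confidence: {{ $json.classification_parsed.confidence }}
      const { category, urgency, confidence } = $json.classification_parsed;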
            

Segment 3: Persistence & Response

Purpose

The purpose of this segment is to persist the finalized classification decision to storage and return a structured response to the originating system, ensuring classifications are recorded for auditing and downstream use.

Segment 3 of the n8n workflow showing the store classification decision and respond to webhook steps.

8. Store Classification Decision

Function

The finalized classification decision must be persisted to durable storage to create an audit trail and enable downstream analysis, reporting, or automation.

Input

At this stage, the workflow consumes flattened, top-level classification fields representing the final AI-assisted decision.

JSON
              
      [
        {
          "category": "Billing",
          "urgency": "High",
          "confidence": 0.9
        }
      ]
              
            

Output

The PostgreSQL (Insert) node writes the classification decision to the database. This step performs no transformation or interpretation. Its sole responsibility is to:

  • Persist the final classification decision
  • Create a durable audit record of AI-assisted output
  • Enable analytics, monitoring, or future automation

JSON

      [
        {
          "id": "453",
          "created_at": "2025-02-30T12:12:12.018Z",
          "category": "Billing",
          "urgency": "High",
          "confidence": 0.9
        }
      ]
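
Inside the workflow this write is handled by the PostgreSQL (Insert) node. The sketch below shows how the same insert could be performed outside n8n with node-postgres; the table name support_classifications and its column set are assumptions, not the actual schema.

JavaScript

      // Assumed table and columns; the real insert is configured in the n8n PostgreSQL node.
      const { Client } = require("pg");

      async function storeDecision(decision) {
        const client = new Client(); // connection settings are read from environment variables
        await client.connect();
        const result = await client.query(
          `INSERT INTO support_classifications (category, urgency, confidence)
           VALUES ($1, $2, $3)
           RETURNING id, created_at`,
          [decision.category, decision.urgency, decision.confidence]
        );
        await client.end();
        return result.rows[0]; // e.g. { id: "453", created_at: "..." }
      }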
            

9. Respond to Webhook

Function

The workflow must return a structured, deterministic response to the originating requester, closing the request lifecycle in a predictable and transparent way.

Input

At this stage, the workflow consumes the finalized classification fields that represent the completed AI-assisted decision.

JSON
              
      [
        {
          "id": "453",
          "created_at": "2025-02-30T12:12:12.018Z",
          "category": "Billing",
          "urgency": "High",
          "confidence": 0.9
        }
      ]
              
            

Output

The Respond to Webhook node returns a minimal, structured acknowledgment to the requester. This step performs no logic, classification, or persistence. Its sole responsibility is to:

  • Confirm successful processing and storage of the request
  • Return a deterministic, machine-readable status
  • Close the request-response loop

JSON

      [
        {
            "status": "stored"
        }
      ]