Complete Reference

Workflows & Nodes

Master workflow design and node catalog

Living Documentation: This page is actively maintained and updated as new node types and features are released. Last updated: December 2024

Table of Contents
  1. Node Catalog at a Glance
  2. Node Reference
    1. AI Operation Nodes
    2. Control Flow Nodes
    3. Data Nodes
  3. Choosing the Right Node
  4. Building Workflows
  5. Additional Resources

Node Catalog at a Glance

All Available Nodes

Quick overview of all node types in Agent Builder. Click any node to jump to its detailed documentation below.

LLM Node

AI-powered text generation, analysis, and vision processing

type: llm AI Operations

Decision Tree Node

AI-powered classification and intelligent routing

type: decision_tree AI Operations

Return Response Node

Final output delivery and workflow termination

type: flat_response Control Flow

If/Else Node

Conditional branching based on boolean logic

type: if_else Control Flow

Loop Node

Batch processing and iteration over collections

type: loop Control Flow

Save Variable Node

Store data for use in downstream nodes

type: variable_assignment Data

Transform Variables Node

Combine, extract, and reshape data

type: variable_transform Data

Node Reference

Complete Node Catalog

Nodes are the building blocks of Agent Builder workflows. Each node type performs a specific function, from calling AI models to controlling execution flow and managing data. Understanding these nodes is essential to designing effective workflows.

Nodes are organized into three functional categories based on their primary purpose. Click any category below to explore the available node types.

AI Operation Nodes

Leverage AI models for intelligent processing, content generation, and decision-making.

LLM Node

AI-powered text generation and analysis with multimodal capabilities

  • Text processing & generation
  • Document & image analysis
  • RAG & web search support
  • Data analysis (CSV/Excel)

Decision Tree Node

AI classification and intelligent multi-path routing

  • Intent detection
  • Content categorization
  • Confidence-based routing
  • Multi-category classification

Quick Feature Comparison

Feature                | LLM Node            | Decision Tree Node
-----------------------|---------------------|-------------------------------------
File Attachments       | Yes (max 5 files)   | No
RAG Dataset Support    | Yes                 | No
Live Web Search        | Yes                 | No
MCP Tools              | Yes                 | No
Multiple Output Paths  | No (single output)  | Yes (per category)
Classification Methods | N/A                 | LLM, Keyword, Regex
Best For               | Open-ended AI tasks | Structured routing & classification

LLM Node

Calls a language model to generate text-based responses from prompts and file inputs. This is one of the most powerful and versatile nodes in Agent Builder.

type: llm

Primary Use Cases:

  • Document analysis and extraction
  • Question answering and information retrieval
  • Content generation and summarization
  • Natural language processing tasks

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • MCP Tools (Optional): Enable Model Context Protocol tools for this LLM call. Format: {server_id: [[tool_name, require_approval], ...]}
  • Attached Files (max 5): File variables to attach (e.g., {{input.source_data}} or input.document). CSV/Excel files enable data analysis, images enable vision
  • Terminal Node: If true, return this LLM's response directly to user without further processing (toggle)
  • Live Web Search: Enable live web search to answer questions about recent events (dropdown: Live/disabled)
  • Max Output Tokens: Maximum tokens in the response. Model-specific limits apply
  • Model: Select the language model to use (e.g., Anthropic Claude 4.5 Opus, GPT-4o-mini) - Required field
  • Persona (Optional): Select a persona to guide the LLM's behavior and tone. Defaults to persona 1 if not specified
  • Prompt: Double-click to open editor. Use {{variable}} to reference previous node outputs. Example: Process: {{input.message}} - Required field
  • RAG Datasets (Optional): Select datasets for retrieval-augmented generation to provide additional context
  • System Prompt (Optional): Double-click to open editor. Instructions that set the behavior of the model
  • Temperature: Controls randomness. Lower = more deterministic, Higher = more creative

Input/Output:

  • Input: Text prompts, variables from previous nodes, file attachments
  • Output: [node_name].response containing the generated text

Performance Tip: Use the Terminal Node toggle to return LLM responses directly to users without additional processing - this skips the need for a separate Return Response node.

Example Scenario:

Extract key information from an RFP document by providing the document as a file variable and a prompt like "Analyze this RFP and extract: 1) Project scope, 2) Budget, 3) Timeline, 4) Key requirements"
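
In configuration terms, such a node might look like the sketch below. All field names and the validation helper are illustrative assumptions, not the exact Agent Builder schema; only Model and Prompt are documented as required.

```python
# Hypothetical sketch of an LLM node's configuration; field names are
# assumptions for illustration, not the exact Agent Builder schema.
llm_node = {
    "type": "llm",
    "label": "Extract RFP Details",
    "model": "gpt-4o-mini",                      # Required
    "prompt": "Analyze this RFP and extract: "   # Required; {{variable}} refs
              "1) Project scope, 2) Budget, 3) Timeline, "
              "4) Key requirements\n\n{{input.source_data}}",
    "temperature": 0.2,                          # Lower = more deterministic
    "max_output_tokens": 1024,
    "attached_files": ["{{input.source_data}}"], # Max 5 file variables
    "terminal_node": False,                      # True returns response directly
}

def missing_required(node: dict) -> list:
    """Model and Prompt are the documented required fields."""
    return [field for field in ("model", "prompt") if not node.get(field)]

print(missing_required(llm_node))  # → []
```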

Learn More:

See the First Workflow Tutorial for a complete example of using LLM nodes to extract and analyze RFP documents.

Decision Tree Node

Classifies user input into predefined categories using AI-powered classification. Ideal for routing, intent detection, and categorization tasks.

type: decision_tree

Primary Use Cases:

  • Intent detection and request routing
  • Sentiment analysis and classification
  • Category assignment for incoming data
  • Multi-path workflow branching based on content

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • Categories: Click "Edit" to configure categories. Each category has:
    • Category Name: Name of the category (used for output handles)
    • Description: Helps the LLM understand this category
    • Keywords (for keyword method): Keywords to match for classification
    • Regex Pattern (for regex method): Regular expression pattern (e.g., ^urgent|URGENT$)
  • Classification Method: Choose classification approach - Required field
    • llm: AI-based classification using language models
    • keyword: Simple pattern matching using keywords
    • regex: Regular expression-based pattern matching
  • Confidence Threshold: Minimum confidence to accept classification (0-1). Default: 0.7
  • Default Category: Category to use if no match found - Required field
  • Node Description: Double-click to open editor. A general description of the classification task to help the LLM
  • Input Variable: Variable containing text to classify (e.g., input.message, llm_1.response) - Required field
  • LLM Model (for llm method): Model to use for LLM-based classification (e.g., Anthropic Claude 4.5 Opus)

Input/Output:

  • Input: Text input variable to classify
  • Output: Classification result (category name) and confidence score, with separate output handles for each category

Example Scenario:

Route customer support tickets by classifying them into categories like "Technical Issue", "Billing Question", "Feature Request", or "General Inquiry"
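
The keyword and regex classification methods can be sketched as follows. The scoring rule and helper names are assumptions built around the documented defaults (confidence threshold 0.7, fallback to a default category); the platform's actual logic may differ.

```python
import re

# Illustrative keyword/regex classification with a confidence threshold
# (default 0.7 per the docs) and a default category. The scoring rule
# (fraction of a category's keywords found in the text) is an assumption.
CATEGORIES = {
    "Technical Issue":  ["error", "crash"],
    "Billing Question": ["invoice", "refund"],
    "Feature Request":  ["feature", "add support"],
}
DEFAULT_CATEGORY = "General Inquiry"
CONFIDENCE_THRESHOLD = 0.7

def classify_keyword(text: str) -> tuple:
    """Score each category by the fraction of its keywords present in the text."""
    text = text.lower()
    best, best_score = DEFAULT_CATEGORY, 0.0
    for category, keywords in CATEGORIES.items():
        score = sum(kw in text for kw in keywords) / len(keywords)
        if score > best_score:
            best, best_score = category, score
    if best_score < CONFIDENCE_THRESHOLD:
        return (DEFAULT_CATEGORY, best_score)  # Below threshold: use default
    return (best, best_score)

def classify_regex(text: str, pattern: str) -> bool:
    """Regex method: route when the pattern matches anywhere in the input."""
    return re.search(pattern, text) is not None

print(classify_keyword("I hit an error and then a crash"))        # → ('Technical Issue', 1.0)
print(classify_regex("urgent: server down", r"^urgent|URGENT$"))  # → True
```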

Control Flow Nodes

Control the execution path of your workflow with conditionals, loops, and terminal responses.

Return Response Node

Final output delivery and workflow termination

  • Always ends execution
  • Template-based responses
  • JSON or text output
  • Metadata support

If/Else Node

Binary conditional branching

  • Boolean expressions
  • Two execution paths
  • Validation & error handling
  • Dynamic workflow logic

Loop Node

Batch processing and iteration

  • Array iteration
  • CSV file processing
  • Configurable limits
  • Row/column modes

Quick Feature Comparison

Feature             | Return Response   | If/Else                | Loop
--------------------|-------------------|------------------------|----------------------
Terminates Workflow | Always            | No                     | No
Output Paths        | None (terminal)   | 2 (if/else)            | 1 (complete)
Template Support    | Yes               | No                     | No
Conditional Logic   | No                | Boolean expressions    | No
Iteration Support   | No                | No                     | Yes (arrays, CSV)
Max Iterations      | N/A               | N/A                    | 1-1000 (default: 100)
Best For            | Final user output | Validation & branching | Batch processing

Return Response Node

Returns a predefined response and terminates the workflow. This node ALWAYS ends workflow execution and is used to deliver the final output to users.

type: flat_response

Primary Use Cases:

  • Final answer delivery to users
  • Workflow termination with custom messages
  • Error message delivery

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • Metadata (Optional): Additional metadata to include with the response (max 10KB). Click "Edit JSON" to configure
  • Response Template: Double-click to open editor. Use {{variable}} to include data from previous nodes. Example: Hello {{user.name}}! - Required field
  • Response Type: Format of the response (dropdown) - options: text, json

Input/Output:

  • Input: Variables from previous nodes to include in the response
  • Output: Final workflow output delivered to the user

Special Note:

This node is ALWAYS terminal - it ends workflow execution. No nodes after a Return Response node will execute.

Example Scenario:

After analyzing a document and extracting metadata, use a Return Response node with a template like: "Analysis Complete! Summary: {{summary}}, Key Points: {{key_points}}"
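
A rough sketch of how such a template's {{variable}} placeholders might be filled from upstream node outputs; the rendering rules here are assumed, not taken from the platform's implementation:

```python
import re

# Illustrative {{variable}} substitution for a Response Template; the
# lookup and fallback behavior are assumptions, not the platform's rules.
def render_template(template: str, context: dict) -> str:
    """Replace each {{name}} with its context value; leave unknown names as-is."""
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        template,
    )

context = {"summary": "6-month build, $250k budget", "key_points": "scope; timeline"}
print(render_template(
    "Analysis Complete! Summary: {{summary}}, Key Points: {{key_points}}",
    context,
))
```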

Learn More:

See the First Workflow Tutorial for examples of using Return Response nodes to format final outputs.

If/Else Node

Branches workflow execution based on a boolean condition. Use this node to create conditional logic and handle different scenarios.

type: if_else

Primary Use Cases:

  • Validation checks and error handling
  • Conditional processing based on data values
  • Dynamic workflow paths
  • Quality gates and approval flows

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • Condition: Boolean expression using variables - Required field. Examples:
    • score > 80 - Numeric comparison
    • category == 'urgent' - String equality
    • (x > 5 and y < 10) or z == 'yes' - Complex logic

Input/Output:

  • Input: Variables referenced in the condition expression
  • Output: Two execution paths - "if" handle (condition true) and "else" handle (condition false)

Syntax Note: Reference variables directly without {{}} in conditions. Use score > 80 not {{score}} > 80. Supports operators: >, <, ==, !=, and, or.

Example Scenario:

Check if a confidence score is above 0.8: confidence > 0.8. If true, proceed with automated processing via the "if" output. If false, route to human review via the "else" output.
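
Because the documented operators (>, <, ==, !=, and, or) happen to match Python syntax, a restricted eval can stand in for the condition evaluator in this illustrative sketch. Agent Builder's real evaluator is not documented here, and eval should never be given untrusted input:

```python
# Illustrative only: evaluate an If/Else condition against workflow variables.
# A restricted eval stands in for the platform's (undocumented) evaluator;
# never eval untrusted input in production code.
def evaluate_condition(condition: str, variables: dict) -> bool:
    # Variables are referenced directly, without {{ }} braces.
    return bool(eval(condition, {"__builtins__": {}}, dict(variables)))

variables = {"score": 85, "category": "urgent", "confidence": 0.92}
print(evaluate_condition("score > 80", variables))            # → True
print(evaluate_condition("category == 'urgent'", variables))  # → True

# Route to the "if" or "else" output handle based on the result:
branch = "if" if evaluate_condition("confidence > 0.8", variables) else "else"
print(branch)  # → if
```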

Loop Node

Iterates over data collections, executing downstream nodes for each item. Essential for batch processing and multi-item workflows. Supports both array iteration and CSV file processing.

type: loop

Primary Use Cases:

  • Batch processing of multiple documents
  • CSV file processing row-by-row
  • Multi-item analysis and aggregation
  • Repetitive operations on collections

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • CSV Delimiter: Column separator for CSV data (default: ",")
  • CSV Has Headers: Whether first row contains column headers (toggle)
  • CSV Iteration Mode: How to iterate over the CSV (dropdown) - options: rows, columns, specific row (row_ref), or specific column (col_ref)
  • CSV Row/Column Reference: For the specific row/column modes: a column name, row index, or variable like {{llm_1.column}}
  • Max Iterations: Maximum items to process (1-1000). Default: 100
  • Source Type: Type of data to parse (dropdown). The default 'auto' detects the type from the content
  • Source Variable: Variable containing data to iterate (e.g., input.file, llm_1.response) - Required field

Input/Output:

  • Input: Array, collection, or CSV file to iterate over
  • Output: "complete" handle triggers after all iterations finish, with aggregated results available

Performance Tip: Set max_iterations carefully to balance thoroughness and execution time. Default is 100 items. For large datasets, consider filtering or sampling data before the loop.

Example Scenario:

Process a CSV file containing 50 customer reviews by looping over each row and extracting sentiment, key themes, and overall rating. The loop will process each row up to the max_iterations limit (100).
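
Row-mode iteration with the max_iterations cap could be sketched like this (function and parameter names are illustrative, not the node's internals):

```python
import csv
import io

# Sketch of row-mode CSV iteration with the documented max_iterations cap
# (default 100); names and structure are illustrative.
def iterate_csv_rows(csv_text, delimiter=",", has_headers=True, max_iterations=100):
    """Yield each row (as a dict when headers are present), up to the cap."""
    stream = io.StringIO(csv_text)
    reader = (csv.DictReader(stream, delimiter=delimiter) if has_headers
              else csv.reader(stream, delimiter=delimiter))
    for index, row in enumerate(reader):
        if index >= max_iterations:
            break
        yield row

data = "review,rating\nGreat product,5\nToo slow,2\nDoes the job,4\n"
for row in iterate_csv_rows(data, max_iterations=2):
    print(row["review"], "->", row["rating"])
```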

Data Nodes

Manage, transform, and pass data between nodes in your workflow.

Save Variable Node

Store node output for downstream use

  • Simple assignment
  • Named variables
  • Workflow-wide access
  • Data persistence

Transform Variables Node

Combine, extract, and reshape data

  • Template interpolation
  • JSON extraction
  • Array operations
  • Type casting

Quick Feature Comparison

Feature             | Save Variable       | Transform Variables
--------------------|---------------------|-----------------------------------
Primary Purpose     | Simple storage      | Data manipulation
Operations          | Assignment only     | Template, extract, append, concat
Template Support    | No                  | Yes ({{variable}} syntax)
JSON Extraction     | No                  | Yes (JSON path)
Array Operations    | No                  | Yes (append, concat)
Type Casting        | No                  | Yes (string, number, boolean, etc.)
Multiple Operations | One per node        | Multiple transformations per node
Best For            | Storing raw outputs | Data formatting & reshaping

Save Variable Node

Saves node output to a named variable for use in downstream nodes. Essential for passing data between workflow stages.

type: variable_assignment

Primary Use Cases:

  • Intermediate result storage
  • Data passing between workflow stages
  • Creating reusable data references
  • Building structured data objects

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • Source: Node output to save (e.g., classifier_1.category, llm_1.response) - Required field
  • Variable Name: Name for the new variable (letters, numbers, underscores only). Used to reference this value in downstream nodes - Required field

Input/Output:

  • Input: Data from previous node's output
  • Output: Stored variable accessible throughout the workflow using {{variable_name}} syntax

Example Scenario:

After an LLM node extracts metadata from a document, use a Save Variable node to store it as document_metadata, then reference it later with {{document_metadata}}
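
The documented naming rule (letters, numbers, and underscores only) and a simple shared-context store can be sketched as follows; the store is an illustration, not the platform's storage mechanism:

```python
import re

# Sketch of the documented Variable Name rule (letters, numbers, and
# underscores only) plus a simple shared-context store. The store itself
# is illustrative, not the platform's actual mechanism.
VALID_NAME = re.compile(r"^[A-Za-z0-9_]+$")

def save_variable(context: dict, name: str, value):
    if not VALID_NAME.match(name):
        raise ValueError(f"Invalid variable name: {name!r}")
    context[name] = value  # Referenced downstream as {{name}}

context = {}
save_variable(context, "document_metadata", {"title": "RFP-042", "pages": 12})
print(context["document_metadata"]["title"])  # → RFP-042
```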

Learn More:

See the First Workflow Tutorial for practical examples of storing and referencing variables in multi-stage workflows.

Transform Variables Node

Combines, extracts, or transforms data using template syntax. Perfect for data formatting, string manipulation, and JSON construction.

type: variable_transform

Primary Use Cases:

  • String concatenation and formatting
  • JSON object construction
  • Data type conversions
  • Template-based data transformation

Configuration & Usage

Key Parameters:

  • Label: Display name for this node in the workflow canvas
  • Transformations: Click "Edit" to configure transformation operations - Required field. Each transformation has:
    • Operation: Choose transformation type (dropdown) - Required
      • template: Use {{variable}} syntax for variable interpolation
      • append: Add value to end of target array
      • extract_json: Extract data from JSON using JSON path
      • concat_array: Concatenate multiple arrays together
    • Template (for template op): Template string using {{variable}} syntax
    • Value to Append (for append op): Value to add to array
    • Target Array (for append op): Array to append value to
    • Source Variable (for extract_json op): Variable containing JSON data
    • JSON Path (for extract_json op): Path to extract from JSON
    • Default Value (for extract_json op): Fallback if path not found
    • Arrays to Concatenate (for concat_array op): Arrays to combine
    • Type Cast (Optional): Convert output type - options: string, number, boolean, object, array
    • Variable Name: Name for the transformed result - Required

Input/Output:

  • Input: Variables from previous nodes referenced in transformations
  • Output: New variables created by each transformation, accessible as {{variable_name}}

Template Syntax: Use {{variable}} syntax in templates to reference any previous node output. Chain multiple transformations in a single node for complex data reshaping.

Example Scenarios:

  • Template: Create formatted string: Summary: {{llm_1.response}}, Status: Complete
  • Extract JSON: Extract $.user.email from JSON response to get user email
  • Append: Add new item to existing results array for aggregation
  • Concat Array: Combine multiple result arrays into single output
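
The extract_json and append operations above can be sketched like this. Only simple dot paths such as $.user.email are handled here; the platform's JSON-path support is likely richer, and the helper names are assumptions.

```python
import json

# Illustrative sketches of two transformation operations: extract_json with
# a JSON path and default value, and append. Handles only simple dot paths
# like $.user.email; real JSON-path support is likely richer.
def extract_json(source: str, path: str, default=None):
    """Walk a $.a.b path through parsed JSON, returning default if any step fails."""
    value = json.loads(source)
    for key in path.lstrip("$.").split("."):
        if not isinstance(value, dict) or key not in value:
            return default
        value = value[key]
    return value

def append(target: list, value) -> list:
    """Return a new array with value added at the end."""
    return target + [value]

payload = '{"user": {"email": "ada@example.com"}}'
print(extract_json(payload, "$.user.email"))             # → ada@example.com
print(extract_json(payload, "$.user.phone", "unknown"))  # → unknown
print(append(["scope", "budget"], "timeline"))           # → ['scope', 'budget', 'timeline']
```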

Choosing the Right Node

Node Selection Decision Guide

Not sure which node to use? This decision guide helps you quickly identify the right node for your specific task.

If you need to...                                    | Use this node           | Why
-----------------------------------------------------|-------------------------|------------------------------------------------------------------------------------------
Generate text, analyze documents, or process with AI | LLM Node                | Most versatile AI node with support for vision, RAG, file attachments, and web search
Classify input into categories and route accordingly | Decision Tree Node      | AI-powered multi-category classification with confidence scoring and multiple routing paths
Return final results to the user                     | Return Response Node    | Always terminates workflow and delivers formatted output
Create conditional logic based on values             | If/Else Node            | Binary branching with boolean expressions for validation and decision-making
Process multiple items or iterate over a collection  | Loop Node               | Batch processing for arrays, CSV files, and collections with configurable iteration modes
Store data to reference later in the workflow        | Save Variable Node      | Simple variable assignment for passing data between workflow stages
Combine, format, or reshape data                     | Transform Variables Node | Powerful data manipulation with template syntax, JSON extraction, and array operations

Pro Tip: Most workflows use a combination of nodes. Start with your core processing needs (usually an LLM or Decision Tree), then add control flow and data nodes as needed.

Building Workflows

Workflow Design Patterns

Note: Refer to the Node Reference above for detailed documentation on each node type.

Effective workflows combine multiple nodes to create powerful automated processes. Understanding how to select and connect nodes is key to building successful workflows.

Node Selection Strategies

When designing your workflow, consider these questions:

  • What type of processing do you need? Use AI Operation nodes for intelligent analysis, Control Flow nodes for logic, and Data nodes for storage
  • Do you need conditional logic? Use If/Else nodes for binary decisions, Decision Tree nodes for multi-category classification
  • Are you processing multiple items? Use Loop nodes for batch processing and iteration
  • How will data flow between stages? Use Save Variable nodes to pass data and Transform Variables nodes to reshape it

Workflow Design Process

  1. Identify Need
  2. Select Nodes
  3. Connect Flow
  4. Configure
  5. Test & Refine

Common Workflow Patterns

Sequential Processing

Chain nodes together for step-by-step operations

  • Linear execution flow
  • Data passes between stages
  • Example: Input → LLM → Save → Response

Conditional Branching

Create different execution paths based on conditions

  • If/Else for binary decisions
  • Decision Tree for multi-path routing
  • Example: Classify → Route A or B

Iterative Processing

Process collections of items systematically

  • Loop nodes for batch operations
  • Process each item individually
  • Example: Loop → LLM → Aggregate

Multi-Stage Analysis

Combine multiple AI operations for complex tasks

  • Multiple LLM nodes
  • Variable storage between stages
  • Example: Extract → Analyze → Summarize
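
The sequential and multi-stage patterns can be sketched as stage functions sharing a context dict; the stage functions below are placeholders standing in for real nodes, not Agent Builder internals:

```python
# Minimal sketch of the sequential pattern: each stage reads from and writes
# to a shared context dict, standing in for node outputs. The stage functions
# are placeholders, not real Agent Builder nodes.
def extract(ctx):
    ctx["facts"] = f"facts from {ctx['input']}"
    return ctx

def analyze(ctx):
    ctx["analysis"] = f"analysis of ({ctx['facts']})"
    return ctx

def summarize(ctx):
    ctx["summary"] = f"summary of ({ctx['analysis']})"
    return ctx

def run_sequential(stages, ctx):
    for stage in stages:  # Linear execution; data passes between stages via ctx
        ctx = stage(ctx)
    return ctx

result = run_sequential([extract, analyze, summarize], {"input": "document.pdf"})
print(result["summary"])
```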

Best Practices

Start Simple: Build your workflow incrementally. Begin with the core functionality and add complexity only as needed. Test each stage before connecting to the next.
Naming Convention: Use descriptive, meaningful variable names that clearly indicate their purpose. Good: customer_email, extracted_summary. Avoid: var1, temp.
Error Handling: Consider edge cases and add error handling with If/Else nodes. Validate data before expensive operations and provide fallback paths for unexpected inputs.

Performance Optimization

Minimize Variables: Only save data you'll actually reuse. Unnecessary variable assignments add complexity without benefit.
Early Filtering: Place conditional checks and validation early in your workflow to avoid unnecessary processing of invalid data.
Prepare Data First: Use Transform Variables nodes to format and prepare data before expensive LLM calls. Clean, well-formatted inputs produce better results.

Next Steps:

Ready to build your first workflow? Check out the First Workflow Tutorial for step-by-step guidance on creating an RFP analysis workflow.


Additional Resources



Copyright © 2025 Ask Sage Inc. All Rights Reserved.