
Blueprint use cases

This topic provides a collection of Blueprint use cases and ready-to-use YAML templates designed to accelerate your API integration workflows. Each template includes a complete YAML configuration that can be adapted for your specific API.

Variable reference syntax

Use the following syntax patterns to inject dynamic data into your configuration:

| Context | Syntax | Description | Example |
|---|---|---|---|
| Interface Parameter | {param_name} | References a value provided by user input. | ?status={status_filter} |
| Date Range Fields | {param.start_date} | References values from a date picker field. | from={dates.start_date} |
| Internal Variable | {{%variable_name%}} | References data stored from a preceding step. | /users/{{%user_id%}} |
| Loop Item | {{%item_name%}} | References the current scalar value in a loop. | /orders/{{%order_id%}} |
| External Data | {ext.variable_name} | References source data from the parent River. | {ext.incoming_ids} |
| External Dict Property | {{%{ext.dict.property}%}} | References a specific property in an external dictionary. | {{%{ext.config.region}%}} |
| Base URL | {{%BASE_URL%}} | References the connector's root service URL. | {{%BASE_URL%}}/users |
important

The {ext.} syntax can only be used in the first step of a workflow.

Template selection

Use the following table to identify the standardized template that best aligns with your integration requirements.

| Integration scenario | Recommended template |
|---|---|
| Simple API connectivity with token-based authentication. | Basic REST API |
| Enterprise-grade connectivity (for example, Salesforce or HubSpot with OAuth 2.0). | OAuth2 with pagination |
| Modern APIs requiring opaque tokens for data navigation. | Cursor-based pagination |
| Processing data payloads passed from an upstream River. | External variables loop |
| Retrieving hierarchical data (for example, accounts and transactions). | Parent-child pattern |
| Managing custom authentication handshakes or session IDs. | Sequential token generation |

Basic REST API template

The Basic REST API template provides a baseline configuration for retrieving data from a service using standard HTTP methods and Bearer Token authentication.

Usage

Use this template for the following integration scenarios:

  • Token-based security: Connect to APIs that require personal access tokens or static API tokens.
  • Single-resource extraction: Fetch data from a specific endpoint. For example, /users or /invoices.
  • Time-bounded retrieval: Perform incremental syncs or filtered queries. For example, get all records from the last 30 days.

Capabilities

  • Bearer authentication: Securely manages encrypted credentials for API access.
  • Date filtering: Supports dynamic date range parameters for precise data targeting.
  • JSON extraction: Automatically parses response payloads using JSONPath transformation layers.

Configuration

Copy and customize the following YAML configuration to define your interface parameters and connection settings.

interface_parameters:
  section:
    source:
      - name: "api_credentials"
        type: "authentication"
        auth_type: "bearer"
        fields:
          - name: "bearer_token"
            type: "string"
            is_encrypted: true
      - name: "date_range"
        type: "date_range"
        period_type: "date"
        format: "YYYY-mm-DD"
        fields:
          - name: "start_date"
            value: ""
          - name: "end_date"
            value: ""

connector:
  name: "My REST Connector"
  base_url: "https://api.example.com/v1"
  default_headers:
    Content-Type: "application/json"
  default_retry_strategy:
    "429":
      max_attempts: 5
      retry_interval: 60
    "500":
      max_attempts: 3
      retry_interval: 10
  variables_metadata:
    final_output_file:
      format: "json"
      storage_name: "results_dir"
  variables_storages:
    - name: "results_dir"
      type: "file_system"

steps:
  - name: "Get Users"
    description: "Fetch a list of users"
    type: "rest"
    http_method: "GET"
    endpoint: "{{%BASE_URL%}}/users"
    query_params:
      start_date: "{date_range.start_date}"
      end_date: "{date_range.end_date}"
    variables_output:
      - response_location: "data"
        variable_name: "final_output_file"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.data"
note

The type property in the connector configuration is optional and defaults to rest, so you can omit this field from your configuration.

Customization

Modify the following components to align the template with your target API.

| Component | Description | Action |
|---|---|---|
| Base URL | The root address of the API service. | Replace base_url with your provider’s URL. |
| Endpoint | The specific resource path to query. | Update the endpoint string. For example, /customers. |
| JSON Path | The location of the data in the response. | Adjust json_path to match the API response schema. |
| Query Parameters | The filters passed to the API. | Update parameter keys to match the API documentation. |
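As an illustration, the step below applies all four customizations for a hypothetical /customers endpoint. The resource path, parameter names, and JSON path are assumptions for this example, not part of any real API:

```yaml
steps:
  - name: "Get Customers"
    description: "Fetch customers created in the selected window"
    type: "rest"
    http_method: "GET"
    endpoint: "{{%BASE_URL%}}/customers"          # updated resource path
    query_params:
      created_after: "{date_range.start_date}"    # keys renamed to match the API docs
      created_before: "{date_range.end_date}"
    variables_output:
      - response_location: "data"
        variable_name: "final_output_file"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.customers"              # adjusted to the response schema
```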

OAuth2 with pagination template

The OAuth2 with pagination template is designed for high-volume data extraction from enterprise-grade APIs. It includes built-in logic for secure credential exchange and iterative data fetching.

Usage

Use this template for the following integration scenarios:

  • Enterprise-grade security: Connect to platforms like Salesforce, HubSpot, or Microsoft that require OAuth2 Client Credentials.
  • Large dataset extraction: Efficiently retrieve records from APIs that limit the number of results per request.
  • Automated batch processing: Fetch data in sequential "pages" until all available records are processed.

Capabilities

  • OAuth2 Authentication: Handles the Client Credentials flow, including automatic token exchange.
  • Offset-Based Pagination: Automatically increments query parameters to navigate through large result sets.
  • Smart Break Conditions: Stops the execution loop once a specified JSON path returns empty, preventing infinite loops.

Configuration

Copy and customize the following YAML configuration to define your OAuth2 parameters and pagination logic.

interface_parameters:
  section:
    source:
      - name: "api_credentials"
        type: "authentication"
        auth_type: "oauth2"
        oauth2_settings:
          grant_type: "client_credentials"
          token_url: "https://api.oauth-example.com/token"
          is_basic_auth: false
        fields:
          - name: "client_id"
            type: "string"
          - name: "client_secret"
            type: "string"
            is_encrypted: true

connector:
  name: "OAuth2 Connector"
  base_url: "https://api.oauth-example.com"
  default_headers:
    Accept: "application/json"
  default_retry_strategy:
    "429":
      max_attempts: 5
      retry_interval: 60
    "500":
      max_attempts: 3
      retry_interval: 10
  variables_metadata:
    final_output_file:
      format: "json"
      storage_name: "results_dir"
  variables_storages:
    - name: "results_dir"
      type: "file_system"

steps:
  - name: "List Products"
    description: "Get products with pagination"
    type: "rest"
    http_method: "GET"
    endpoint: "{{%BASE_URL%}}/products"
    pagination:
      type: "offset"
      location: "qs"
      parameters:
        - name: "offset"
          value: 0
          increment_by: 50
        - name: "limit"
          value: 50
      break_conditions:
        - name: "End of results"
          condition:
            type: "empty_json_path"
            key_json_path: "$.items"
    variables_output:
      - response_location: "data"
        variable_name: "final_output_file"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.items"

OAuth2 key mapping

When an OAuth2 provider returns non-standard field names in its token response, you must use the key_map property. This ensures the HTTP client or REST API connector can successfully parse the token response and manage session persistence.

Example: Non-standard token response

If an API returns the following JSON payload instead of the standard access_token and expires_in fields:

{
  "auth_token": "abc123",
  "expires": 3600
}
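A key_map sketch for the payload above might pair each expected OAuth2 field with the provider's non-standard name. The exact placement and direction of the mapping are assumptions here; verify them against your connector's reference before use:

```yaml
oauth2_settings:
  grant_type: "client_credentials"
  token_url: "https://api.oauth-example.com/token"
  key_map:
    access_token: "auth_token"   # assumption: provider returns "auth_token" instead of "access_token"
    expires_in: "expires"        # assumption: provider returns "expires" instead of "expires_in"
```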

Customization

Modify the following components to align the template with your target enterprise API requirements.

| Component | Description | Action |
|---|---|---|
| Token URL | The authorization endpoint provided by your OAuth2 service. | Replace token_url with your provider’s specific URI. |
| Auth Strategy | The method used to pass credentials during the token request. | Set is_basic_auth to true if your provider requires a Basic Auth header. |
| Page Size | The maximum number of records returned in a single response. | Adjust increment_by and limit to match the API’s maximum allowed page size. |
| Key JSON Path | The specific location of the data array in the API response. | Update key_json_path to match the items array in the response schema. |
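For example, adapting the template for a provider that passes credentials in a Basic Auth header might produce the following sketch. Whether your provider actually requires Basic Auth is an assumption to confirm against its documentation:

```yaml
oauth2_settings:
  grant_type: "client_credentials"
  token_url: "https://login.salesforce.com/services/oauth2/token"
  is_basic_auth: true   # assumption: provider expects credentials in a Basic Auth header
```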

Common OAuth2 token URLs

Use the following reference table to locate the token endpoints for common enterprise service providers.

| Provider | Token URL |
|---|---|
| Salesforce | https://login.salesforce.com/services/oauth2/token |
| HubSpot | https://api.hubapi.com/oauth/v1/token |
| Microsoft | https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token |
| Google | https://oauth2.googleapis.com/token |
note

For Microsoft integrations, replace {tenant} with your specific Azure AD Directory ID or common for multi-tenant applications.

Cursor-based pagination template

The Cursor-based pagination template is optimized for modern APIs that use opaque tokens or cursors to navigate datasets. This method is more reliable than page-based numbering for rapidly changing data streams.

Usage

Use this template for the following integration scenarios:

  • Modern API integration: Connect to platforms like Slack, Stripe, or GitHub that use cursors for data consistency.
  • Dynamic dataset navigation: Fetch records where the underlying data might change during the request cycle. For example, event logs.
  • Streaming data extraction: Process event-based data that requires a next_page_token to retrieve the subsequent batch.

Capabilities

  • API Key Header Security: Manages secure, header-based authentication using encrypted keys.
  • Automated Token Extraction: Automatically captures the cursor from the response payload to use in the next request.
  • Sequential Stream Handling: Maintains the continuity of the data stream until the "End of results" condition is met.

Cursor pagination location strategies

The location property defines how the cursor token is transmitted to the target API. Select the location that matches your API provider's technical requirements.

| Location | Use case | Implementation example |
|---|---|---|
| qs | The cursor is passed as a standard URL query parameter. | ?cursor=abc123&limit=100 |
| url | The API response provides a fully qualified URL for the subsequent page. | "next": "https://api.example.com/items?cursor=abc123" |
| body | The cursor is included within the JSON payload (typically for POST requests). | { "cursor": "abc123", "limit": 100 } |
important

Limitation: Pagination parameters must be sent via Query String or Body. Request headers are not supported for input. To retrieve cursors, check the response headers using cursor_header_key.
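For APIs that return the cursor in a response header rather than the body, the pagination block might be sketched as follows. The X-Next-Cursor header name is hypothetical, and the exact position of cursor_header_key in the schema is an assumption:

```yaml
pagination:
  type: "cursor"
  location: "qs"
  parameters:
    - name: "cursor"
      value: ""
      cursor_header_key: "X-Next-Cursor"   # hypothetical response header carrying the cursor
    - name: "limit"
      value: 100
```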

Configuration

Copy and customize the following YAML configuration to define your API key parameters and cursor logic.

interface_parameters:
  section:
    source:
      - name: "api_credentials"
        type: "authentication"
        auth_type: "api_key"
        location: "header"
        fields:
          - name: "key_name"
            type: "string"
            value: "X-API-Key"
          - name: "key_value"
            type: "string"
            is_encrypted: true

connector:
  name: "Cursor API"
  base_url: "https://api.cursor-example.com/v2"
  default_headers: {}
  default_retry_strategy:
    "429":
      max_attempts: 5
      retry_interval: 60
    "500":
      max_attempts: 3
      retry_interval: 10
  variables_metadata:
    final_output_file:
      format: "json"
      storage_name: "results_dir"
  variables_storages:
    - name: "results_dir"
      type: "file_system"

steps:
  - name: "Fetch Events"
    description: "Get events stream"
    type: "rest"
    http_method: "GET"
    endpoint: "{{%BASE_URL%}}/events"
    pagination:
      type: "cursor"
      location: "qs"
      parameters:
        - name: "cursor"
          value: ""
          cursor_json_path: "$.meta.next_cursor"
        - name: "limit"
          value: 100
      break_conditions:
        - name: "No more pages"
          condition:
            type: "empty_json_path"
            key_json_path: "$.meta.next_cursor"
    variables_output:
      - response_location: "data"
        variable_name: "final_output_file"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.data"

Customization

Modify the following components to align the template with your specific cursor-based API requirements.

| Component | Description | Action |
|---|---|---|
| API Key Name | The header key expected by the provider. | Update key_name. For example, Authorization or X-API-Key. |
| Cursor JSON Path | The response location of the next token. | Set cursor_json_path to match the API's metadata field. For example, $.meta.next_token. |
| Cursor Parameter | The query string key for the token. | Change the parameter name to match the API. For example, starting_after. |
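For instance, adapting the cursor block for a Stripe-style API might look like the following sketch. The parameter names are illustrative; confirm them against the provider's documentation:

```yaml
pagination:
  type: "cursor"
  location: "qs"
  parameters:
    - name: "starting_after"               # Stripe-style cursor parameter
      value: ""
      cursor_json_path: "$.data[-1].id"    # ID of the last returned item drives the next page
    - name: "limit"
      value: 100
```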

Common cursor patterns

Refer to this table for the specific parameter and response path requirements of popular APIs.

| Provider | Cursor Parameter | Cursor Path in Response |
|---|---|---|
| Slack | cursor | $.response_metadata.next_cursor |
| Stripe | starting_after | $.data[-1].id |
| GitHub | Link header | RFC 8288 Link header |
| Salesforce | N/A | $.nextRecordsUrl |
note

For GitHub, the cursor is typically found in the HTTP Link header rather than the JSON body. Ensure your configuration is set to parse headers if using the GitHub pattern.

External variables loop template

The External variables loop template is designed for data enrichment workflows where a list of values, such as IDs, is passed from an upstream source to trigger a series of downstream API calls.

Usage

Use this template for the following integration scenarios:

  • River-to-River enrichment: Use a list of identifiers from one process to fetch detailed records in another.
  • Upstream data processing: Iterate through specific IDs provided by a source data file or database.
  • Batch record updates: Perform individual GET or POST requests for every item in a dataset.

Capabilities

  • External Variable Mapping: Utilizes the {ext.} syntax to ingest data from parent processes.
  • Iterative Loop Logic: Automatically cycles through arrays to execute nested REST steps for each record.
  • Fault Tolerance: Includes ignore_errors logic to ensure the entire batch doesn't fail if a single record encounters an issue.
  • Intelligent Retries: Built-in strategies to handle rate-limiting (429) and server errors (500).

Configuration

Copy and customize the following YAML configuration to implement multi-step enrichment logic.

interface_parameters:
  section:
    source:
      - name: "api_credentials"
        type: "authentication"
        auth_type: "api_key"
        location: "header"
        fields:
          - name: "key_name"
            type: "string"
            value: "Authorization"
          - name: "key_value"
            type: "string"
            is_encrypted: true

connector:
  name: "External Variables Connector"
  base_url: "https://api.example.com/v1"
  default_headers:
    Content-Type: "application/json"
  default_retry_strategy:
    "429":
      max_attempts: 5
      retry_interval: 60
    "500":
      max_attempts: 3
      retry_interval: 10
  variables_metadata:
    final_output_file:
      format: "json"
      storage_name: "results_dir"
  variables_storages:
    - name: "results_dir"
      type: "file_system"

steps:
  - name: "Loop Through External Data"
    description: "Iterate over IDs from source river"
    type: "loop"
    loop:
      type: "data"
      variable_name: "{ext.source_ids}"
      item_name: "item_id"
      add_to_results: true
      ignore_errors: true
    steps:
      - name: "Fetch Item Details"
        description: "Get details for each item from source"
        type: "rest"
        http_method: "GET"
        endpoint: "{{%BASE_URL%}}/items/{{%item_id%}}"
        retry_strategy:
          "429":
            max_attempts: 3
            retry_interval: 10
          "500":
            max_attempts: 3
            retry_interval: 10
        variables_output:
          - response_location: "data"
            variable_name: "final_output_file"
            variable_format: "json"
            overwrite_storage: false
            transformation_layers:
              - type: "extract_json"
                from_type: "json"
                json_path: "$[*]"

Loop item property access

Unlike external dictionaries, loop items are accessed as scalar values. The connector does not support dot notation (for example, {{%item.property%}}) for internal loop variables.

Correct pattern
# Step 1: Extract the specific values you need
variables_output:
  - transformation_layers:
      - type: "extract_json"
        json_path: "$.accounts[*].id"  # Extract just the IDs

# Step 2: Loop uses the simple value
loop:
  variable_name: "account_ids"
  item_name: "account_id"  # Each item IS the ID

# Step 3: Reference the simple value
endpoint: "{{%BASE_URL%}}/accounts/{{%account_id%}}"
Incorrect pattern
# This will NOT work
loop:
  variable_name: "accounts"  # Full account objects
  item_name: "account"
endpoint: "{{%BASE_URL%}}/accounts/{{%account.id%}}"  # Dot notation fails
note

Dot notation ({{%{ext.dict.property}%}}) is only supported when accessing properties within external dictionaries provided by the source River.

Configuring loop storage behavior

When extracting data within a loop, use the overwrite_storage property to control how results are accumulated in the output file.

Storage settings

| Setting | Behavior | Use case |
|---|---|---|
| overwrite_storage: false | Appends data from each iteration to the output file. | Use when collecting data from multiple sequential API calls. |
| overwrite_storage: true | Replaces existing data with the results of the current iteration. | Use when only the data from the final iteration is required. |
important

Set the overwrite_storage property to false when configuring loops to ensure data from every iteration is preserved. Use the true setting only if your specific use case requires only the results from the final iteration.

Customization

Modify the following components to align the loop logic with your upstream data source.

| Component | Description | Action |
|---|---|---|
| External Source | The variable name containing your input array. | Replace {ext.source_ids} with your specific source variable. |
| Item Alias | The reference name for the current item in the loop. | Define a descriptive item_name for use in the endpoint. |
| Resource Endpoint | The specific API path for the sub-request. | Update the endpoint to include the {{%item_id%}} variable. |
| Error Handling | Determines workflow behavior upon request failure. | Set ignore_errors to false to stop the loop on the first failure. |
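For example, a strict variant of the loop that halts on the first failure might be sketched as follows. The upstream variable order_ids and item alias order_id are hypothetical names for illustration:

```yaml
loop:
  type: "data"
  variable_name: "{ext.order_ids}"   # hypothetical upstream variable
  item_name: "order_id"
  add_to_results: true
  ignore_errors: false               # stop the loop on the first failed request
```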

POST request with JSON body

The POST Request with JSON Body template is designed for sending data to an API endpoint using a standard JSON payload. This configuration is commonly used for record creation and triggering external system actions.

Usage

Use this template for the following integration scenarios:

  • Record Creation: Generate new entries in external systems via API.
  • Webhook Integration: Send data payloads to active webhooks.
  • System Triggers: Initiate automated actions on external platforms.
  • Endpoint Testing: Validate API connectivity and response handling.

Capabilities

  • POST HTTP Method: Supports standard data submission with JSON bodies.
  • Parameter Injection: Embeds interface parameters directly into the request body.
  • Status Code Validation: Verifies successful execution based on specific HTTP response codes.
  • JSON Extraction: Isolates specific data from the response for downstream use.

Configuration

Copy and customize the following YAML configuration to define your POST request parameters.

interface_parameters:
  section:
    source:
      - name: project_id
        type: string
        value: ""

connector:
  name: HttpBin API
  type: rest
  base_url: "https://httpbun.com"
  default_headers:
    Content-Type: "application/json"
  variables_metadata:
    final_output_file:
      format: json
      storage_name: results_dir
  variables_storages:
    - name: results_dir
      type: file_system

steps:
  - name: PostToAnything
    description: "Send POST request to /anything endpoint"
    type: rest
    http_method: POST
    expected_status_codes: [200]
    endpoint: "{{%BASE_URL%}}/any"
    body: '{"message":"Hello {project_id}","test_data":{"string_field":"{project_id}","number_field":42,"boolean_field":true,"array_field":[1,2,3,4,5],"object_field":{"nested_key":"nested value","another_key":123}}}'
    variables_output:
      - response_location: data
        variable_name: final_output_file
        variable_format: json
        transformation_layers:
          - type: extract_json
            from_type: json
            json_path: $.json

Customization

Modify the following components to align the template with your specific POST request requirements.

| Component | Description | Action |
|---|---|---|
| Body | The JSON payload structure. | Replace with your specific API payload requirements. |
| project_id | Dynamic user-defined value. | Use interface parameter names to inject values into the body. |
| Status Codes | The range of acceptable success codes. | Add expected status codes. For example, [200, 201]. |
| JSON Path | The target node for response extraction. | Adjust to isolate the relevant portion of the API response. |
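A minimal customized variant might look like the following sketch. The /records endpoint and payload fields are hypothetical placeholders, not a real API contract:

```yaml
steps:
  - name: CreateRecord
    description: "Create a record in the target system"
    type: rest
    http_method: POST
    expected_status_codes: [200, 201]   # accept both common success codes
    endpoint: "{{%BASE_URL%}}/records"
    body: '{"name":"{project_id}","source":"blueprint"}'
```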

Critical requirements

The {ext.} syntax is restricted to the first step of your workflow. If external data is required in subsequent steps, you must extract it in the initial step and store it as an internal variable.

Correct usage: The {ext.} syntax is used in the first step of the workflow.

steps:
  - name: "Process External IDs"  # First step - {ext.} allowed
    type: "loop"
    loop:
      variable_name: "{ext.source_ids}"
      item_name: "item_id"

Incorrect usage: Using {ext.} in any step following the initial step will result in a runtime failure.

steps:
  - name: "Get Token"  # First step
    type: "rest"
    # ...

  - name: "Process External IDs"  # Second step - {ext.} NOT allowed
    type: "loop"
    loop:
      variable_name: "{ext.source_ids}"  # FAILURE: Variable cannot be resolved

POST request with YAML object body

The POST request with YAML object body template allows you to define structured data using YAML notation. This format is automatically converted to JSON during execution and is ideal for managing complex, nested payloads.

Usage

Use this template for the following integration scenarios:

  • Complex nested payloads: Simplify the management of multi-layered data structures.
  • Array-heavy requests: Configure API calls that require lists of objects, such as chat messages or line items.
  • Configuration readability: Use YAML when you prefer structured indentation over dense JSON strings.

Capabilities

  • Bearer Token Authentication: Securely manages encrypted tokens for API access.
  • Native YAML Formatting: Define the request body as a structured YAML object instead of an escaped JSON string.
  • Resilience Logic: Implements a retry strategy to handle transient 500-level server errors.
  • Full Data Extraction: Captures the complete API response for downstream processing.

Configuration

Copy and customize the following YAML configuration to define your structured POST request.

interface_parameters:
  section:
    source:
      - name: Auth
        type: authentication
        auth_type: bearer
        fields:
          - name: bearer_token
            type: string
            is_encrypted: true

connector:
  name: OpenAI
  base_url: "https://api.openai.com"
  default_headers:
    Content-Type: "application/json"
  variables_metadata:
    final_output_file:
      format: json
      storage_name: results_dir
  variables_storages:
    - name: results_dir
      type: file_system

steps:
  - name: Chat Completion
    description: Send chat completion request to OpenAI
    type: rest
    http_method: POST
    endpoint: "{{%BASE_URL%}}/v1/chat/completions"
    body:
      model: gpt-4o
      messages:
        - role: developer
          content: You are a helpful assistant.
        - role: user
          content: Hello!
    retry_strategy:
      "500":
        max_attempts: 5
        retry_interval: 10
    variables_output:
      - response_location: data
        variable_name: final_output_file
        variable_format: json
        transformation_layers:
          - type: extract_json
            from_type: json
            json_path: $

Body format comparison

Choose the format that best fits your payload complexity and readability requirements.

| Format | Recommended Use Case | Syntax Example |
|---|---|---|
| YAML Object | Complex nested structures or arrays of objects. | body: followed by indented YAML lines. |
| JSON String | Simple payloads or high-frequency parameter injection. | body: '{"key": "{param}"}' |

Key points

  • YAML object body: Define the body as a structured YAML object; the system converts it to valid JSON at runtime.
  • Arrays in YAML: Use the - prefix for array items (for example, the messages list in the configuration above).
  • No quotes needed: YAML objects do not require the same character escaping as standard JSON strings.
  • Parameter injection: Use the {param_name} syntax within string values to maintain dynamic functionality.
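To make the comparison concrete, the two snippets below express the same payload in each format; per the conversion behavior described above, both should resolve to identical JSON at runtime:

```yaml
# YAML object form
body:
  model: gpt-4o
  messages:
    - role: user
      content: "Hello {project_id}"

# Equivalent JSON string form
body: '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello {project_id}"}]}'
```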

Advanced use case: Parent-child data fetching

The Parent-Child Data Fetching template enables the retrieval of hierarchical data structures. This workflow first extracts a collection of parent identifiers and then executes iterative requests to capture granular detail records for each parent.

Usage

Use this template for the following integration scenarios:

  • Organizational Mapping: Fetch a list of departments (parent) and then retrieve all associated employees (child).
  • Project Management: Sync project headers (parent) and then extract all related tasks or milestones (child).
  • Financial Auditing: Retrieve account summaries (parent) followed by individual transaction line items (child).
  • Hierarchical Synchronization: Any scenario where detail records are nested under a unique resource ID.

Capabilities

  • Multi-Step Orchestration: Chains request steps together to build a complete data profile.
  • Response Mapping: Uses JSONPath to isolate specific keys from the parent response to drive the downstream loop.
  • Persistent Storage: Appends data from multiple iterations into a single output file using non-overwriting storage logic.
  • Advanced Pagination: Supports different pagination types (Page vs. Offset) across the parent and child steps.

Configuration

Copy and customize the following YAML configuration to implement hierarchical data enrichment.

interface_parameters:
  section:
    source:
      - name: "domain"
        type: "string"
        value: ""
      - name: "api_credentials"
        type: "authentication"
        auth_type: "basic_http"
        fields:
          - name: "username"
            type: "string"
          - name: "password"
            type: "string"
            is_encrypted: true

connector:
  name: "Parent-Child Connector"
  base_url: "https://{domain}.api.example.com/v2"
  default_headers:
    Content-Type: "application/json"
    Accept: "application/json"
  default_retry_strategy:
    "429":
      max_attempts: 5
      retry_interval: 60
    "500":
      max_attempts: 3
      retry_interval: 10
  variables_metadata:
    final_output_file:
      format: "json"
      storage_name: "results_dir"
  variables_storages:
    - name: "results_dir"
      type: "file_system"

steps:
  # Step 1: Get all parent record IDs
  - name: "Get Project IDs"
    description: "Fetch list of all project IDs"
    type: "rest"
    http_method: "GET"
    endpoint: "{{%BASE_URL%}}/projects"
    pagination:
      type: "page"
      location: "qs"
      parameters:
        - name: "page"
          value: 1
          increment_by: 1
        - name: "per_page"
          value: 100
      break_conditions:
        - name: "No more projects"
          condition:
            type: "empty_json_path"
            key_json_path: "$.projects"
    variables_output:
      - response_location: "data"
        variable_name: "project_ids"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.projects[*].id"

  # Step 2: Loop through each project and get tasks
  - name: "Get Tasks for Each Project"
    description: "Fetch all tasks for each project"
    type: "loop"
    loop:
      type: "data"
      variable_name: "project_ids"
      item_name: "project_id"
      add_to_results: true
      ignore_errors: true
    steps:
      - name: "Fetch Project Tasks"
        description: "Get all tasks for this project"
        type: "rest"
        http_method: "GET"
        endpoint: "{{%BASE_URL%}}/projects/{{%project_id%}}/tasks"
        pagination:
          type: "offset"
          location: "qs"
          parameters:
            - name: "offset"
              value: 0
              increment_by: 50
            - name: "limit"
              value: 50
          break_conditions:
            - name: "No more tasks"
              condition:
                type: "page_size_break"
                page_size_param_name: "limit"
                items_json_path: "$.tasks"
        variables_output:
          - response_location: "data"
            variable_name: "final_output_file"
            variable_format: "json"
            overwrite_storage: false
            transformation_layers:
              - type: "extract_json"
                from_type: "json"
                json_path: "$.tasks[*]"

Key points

Follow these requirements to ensure successful hierarchical data retrieval:

  • Targeted ID extraction: In Step 1, use a specific JSONPath, for example, $.projects[*].id to extract only the identifiers required for the loop.
  • Variable referencing: Ensure the variable_name in the loop configuration exactly matches the variable_name defined in the parent step's output.
  • Scalar item logic: Define the item_name as the identifier itself, for example, project_id rather than the whole object to simplify endpoint pathing.
  • Data accumulation: Set overwrite_storage: false in the child step to prevent subsequent iterations from deleting data collected in earlier loops.
  • Error resilience: Set ignore_errors: true to ensure the overall process continues even if specific parent records return an error for their child records.

Advanced use case: Sequential token generation

The Sequential token generation template is designed for APIs that require a dynamic session token or temporary credential before allowing data access. This template orchestrates a multi-step handshake where the output of the initial request defines the security context for all subsequent calls.

Usage

Use this template for the following integration scenarios:

  • Session-based authentication: Connect to legacy or custom APIs that issue a SessionID or AuthToken via a specific login endpoint.
  • Multi-step handshakes: Execute a "POST to authorize" followed by a "GET to retrieve" workflow.
  • Temporary credential management: Handle APIs that provide short-lived tokens that must be refreshed per execution.

Capabilities

  • Dynamic Credential Capture: Automatically extracts a token from a response payload and stores it as a runtime variable.
  • Header Injection: Dynamically maps the generated token into the HTTP headers of subsequent requests.
  • Stateful Orchestration: Maintains the session context throughout the entire process life cycle.

Configuration

Copy and customize the following YAML configuration to implement session-based data extraction.

interface_parameters:
  section:
    source:
      - name: "api_credentials"
        type: "authentication"
        auth_type: "basic_http"
        fields:
          - name: "username"
            type: "string"
          - name: "password"
            type: "string"
            is_encrypted: true

connector:
  name: "Session-Based Connector"
  base_url: "https://api.example.com"
  default_headers:
    Content-Type: "application/json"
  default_retry_strategy:
    "429":
      max_attempts: 5
      retry_interval: 60
    "500":
      max_attempts: 3
      retry_interval: 10
  variables_metadata:
    final_output_file:
      format: "json"
      storage_name: "results_dir"
  variables_storages:
    - name: "results_dir"
      type: "file_system"

steps:
  # Step 1: Get session token
  - name: "Generate Session Token"
    description: "Authenticate and get session token"
    type: "rest"
    http_method: "POST"
    endpoint: "{{%BASE_URL%}}/auth/session"
    body:
      grant_type: "client_credentials"
    variables_output:
      - response_location: "data"
        variable_name: "session_token"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.session.token"

  # Step 2: Use session token to fetch data
  - name: "Fetch Data with Session"
    description: "Get data using the session token"
    type: "rest"
    http_method: "GET"
    endpoint: "{{%BASE_URL%}}/data"
    headers:
      X-Session-Token: "{{%session_token%}}"
    pagination:
      type: "page"
      location: "qs"
      parameters:
        - name: "page"
          value: 1
          increment_by: 1
        - name: "size"
          value: 100
      break_conditions:
        - name: "No more data"
          condition:
            type: "empty_json_path"
            key_json_path: "$.records"
    variables_output:
      - response_location: "data"
        variable_name: "final_output_file"
        variable_format: "json"
        transformation_layers:
          - type: "extract_json"
            from_type: "json"
            json_path: "$.records[*]"

Key points

Follow these requirements to ensure successful token propagation between steps:

  • Isolate the token value: In the first step, use a precise json_path, for example, $.session.token to extract only the token string rather than the entire JSON object.
  • Variable naming: Assign a descriptive variable_name, for example, session_token to ensure the value is stored in the execution context.
  • Syntax for injection: Reference the token in subsequent steps using the {{%variable_name%}} syntax within headers, query parameters, or endpoint paths.
  • Lifecycle persistence: Remember that data extracted in any step is stored in process memory and remains accessible for all following steps in the sequence.
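As a variation on the injection syntax described above, the stored token can also be passed as a query parameter instead of a header. The session_token parameter name below is a hypothetical example:

```yaml
- name: "Fetch Data with Session"
  type: "rest"
  http_method: "GET"
  endpoint: "{{%BASE_URL%}}/data"
  query_params:
    session_token: "{{%session_token%}}"   # hypothetical query parameter name
```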

Pre-deployment checklist

Before deploying your connector, perform the following verification steps to ensure security, performance, and data integrity.

  1. Security and authentication:
    • Data encryption: Verify that all sensitive fields (for example, client_secret, password, key_value) are configured with is_encrypted: true.
    • Credential validation: For OAuth2 flows, confirm that the token URL and grant type align with your provider's documentation.
  2. Pagination logic:
    • Break conditions: Ensure at least one break condition is configured to prevent infinite loops.
    • Volume limits: Verify that the page size matches the API’s maximum allowed limit.
    • Offset accuracy: For offset-based pagination, ensure the increment_by value is equal to the page limit.
  3. Resilience and error handling:
    • Retry strategies: Configure specific retry logic for transient errors, including 429 (rate limit), 500 (internal server error), and 502/503 (gateway issues).
    • Loop tolerance: Set ignore_errors to true for non-critical loops to prevent a single record failure from stopping the entire process.
  4. Data extraction accuracy:
    • JSONPath validation: Test your JSONPath expressions against live API response payloads to ensure accurate extraction.
    • Storage logic: For iterative loops, confirm overwrite_storage is set to false to prevent data loss between cycles.
  5. Variable management:
    • Naming conventions: Use clear, meaningful variable names to improve process maintainability.
    • External variable syntax: Ensure the {ext.} prefix is used exclusively in the first step of the workflow.
    • Loop integrity: Verify that the defined item_name matches the variable reference used in your endpoint paths.