
OP AI LLM Intelligence - Partner connector

info

Partner connectors developed by our Tech Partners and published on the Boomi platform provide seamless integration solutions for customers. Customers will initiate support for these connectors through the Boomi Support Portal, where tickets will be triaged to the respective partner. The partner or their designated third party is responsible for ongoing support, including troubleshooting, bug fixes, and resolving related issues.

The OP AI LLM Intelligence — Partner Connector integrates with a range of LLMs (Large Language Models), providing text embedding, chat completion, and model management functionality from within Boomi processes.

note

The documentation for this connector is provided by a partner.

Connector configuration

To configure the connector, set up the following two components:

  • OP AI LLM Intelligence connection: The connection contains all connection settings.
  • OP AI LLM Intelligence operation: Represents an action used to interact with the provider.

Prerequisites

The connector requires the following:

  • Access to an LLM Provider
  • A runtime running Java 8 or Java 11

Tracked Properties

This connector has the following tracked properties that you can set or reference in various step parameters:

  • Request ID: The unique identifier for each request made through the connector.

  • Input Tokens: The number of tokens in the request sent to the LLM API for processing.

  • Input Cached Tokens: The number of tokens served from cache to optimize performance.

  • Output Tokens: The number of tokens in the response received from the LLM API.

  • Output Audio Tokens: The number of audio-specific tokens in the output response.

  • Output Reasoning Tokens: The number of tokens used for reasoning processes (for reasoning models like o1).

  • Finish Reason: The reason the response terminated, such as completed, stopped, or error.

  • Tool Call ID: A unique identifier for each call made to a specific tool or function within the connector.

  • Tool Call Name: The name of the tool or function that was executed during the request.

  • Response Fingerprint: A hash or unique identifier generated based on the output tokens, useful for comparing responses.

  • Reasoning: An explanation or summary of the logic that led to generating the output tokens.

  • AWS Session ID: Session identifier for AWS Bedrock Agent interactions.

  • Content Type: The MIME type of the response content.
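Most of these tracked properties correspond directly to fields in the provider's API response. As an illustration only (the exact payload varies by provider and is not specified by this connector's documentation), the sketch below maps a typical OpenAI-style chat completion response to the tracked property names above; the field names (`usage.prompt_tokens`, `system_fingerprint`, etc.) are assumptions based on the OpenAI Chat Completions format:

```python
# Hypothetical mapping from an OpenAI-style chat completion response
# to this connector's tracked properties. Field names follow the
# OpenAI Chat Completions payload; other LLM providers differ.
response = {
    "id": "chatcmpl-abc123",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Hello!"},
        }
    ],
    "usage": {
        "prompt_tokens": 57,
        "completion_tokens": 17,
        "prompt_tokens_details": {"cached_tokens": 0},
        "completion_tokens_details": {
            "reasoning_tokens": 0,
            "audio_tokens": 0,
        },
    },
}

usage = response["usage"]
tracked = {
    # Unique identifier for the request.
    "Request ID": response["id"],
    # Token counts, split by input/output and by detail category.
    "Input Tokens": usage["prompt_tokens"],
    "Input Cached Tokens": usage["prompt_tokens_details"]["cached_tokens"],
    "Output Tokens": usage["completion_tokens"],
    "Output Reasoning Tokens": usage["completion_tokens_details"]["reasoning_tokens"],
    "Output Audio Tokens": usage["completion_tokens_details"]["audio_tokens"],
    # Why generation ended, and the backend fingerprint of the response.
    "Finish Reason": response["choices"][0]["finish_reason"],
    "Response Fingerprint": response["system_fingerprint"],
}

print(tracked["Input Tokens"], tracked["Output Tokens"], tracked["Finish Reason"])
```

In a Boomi process these values are populated by the connector itself; the mapping above only shows where in a typical response payload each tracked property originates, which is useful when reconciling connector output against provider logs or billing data.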