
OpenAI operation

The OpenAI connector's operations let you discover and invoke AI models on OpenAI. You can use them to generate text, create images, generate embeddings, or produce structured outputs.

Browsing

When you import an operation or browse objects, the connector presents the available object types based on the selected operation:

  • Generate Operation: Lists available sub-actions (Text, Images, Embeddings, Structured Output)
  • Converse Operation: Lists the Converse object type

Each object type has corresponding request and response profiles that are imported during the browse process.

Generate operation

Use the Generate operation to invoke OpenAI models for one-time content-generation requests. This operation supports four sub-actions (object types):

  • Generate Text
  • Generate Image
  • Generate Embedding
  • Generate Structured Output

Generate text

Produces AI-generated text (answers, summaries, creative content, code, or responses) from text prompts using GPT models.

Before you begin

  • Ensure your OpenAI account has access to the desired model.
  • Prepare JSON that matches the text generation schema.
  • Choose an appropriate model (for example, gpt-4o, gpt-4o-mini, gpt-4-turbo, or gpt-3.5-turbo).

Configure

  1. Create a new Generate operation or click Import Operation.
  2. Select Action Type: Generate Text.
  3. Click Next to import Request and Response profiles.
  4. Configure mapping for your input prompt and model selection.
Request Schema (GENERATE TEXT)
{
  "model": "gpt-4o-mini",
  "input": "Explain quantum computing in simple terms",
  "instructions": "You are a helpful science educator. Explain concepts clearly.",
  "temperature": 0.7,
  "max_output_tokens": 500,
  "top_p": 1.0
}
Response Schema (GENERATE TEXT)
{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1699000000,
  "model": "gpt-4o-mini",
  "status": "completed",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Quantum computing is a type of computing that uses..."
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150,
    "total_tokens": 175
  }
}
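When the response reaches your process, the generated text sits inside the nested output array. A minimal sketch, assuming a payload shaped like the sample response above (field names come from that sample; adjust them if your imported profile differs):

```python
# Sketch: pull the generated text out of a Generate Text response.
response = {
    "id": "resp_abc123",
    "object": "response",
    "status": "completed",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {
                    "type": "output_text",
                    "text": "Quantum computing is a type of computing that uses...",
                }
            ],
        }
    ],
    "usage": {"input_tokens": 25, "output_tokens": 150, "total_tokens": 175},
}

def extract_text(resp):
    """Concatenate all output_text fragments from a completed response."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for chunk in item.get("content", []):
                if chunk.get("type") == "output_text":
                    parts.append(chunk["text"])
    return "".join(parts)

print(extract_text(response))
```

Concatenating the fragments is deliberate: a single assistant message may carry more than one output_text chunk.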

Generate image

Creates images from text prompts using OpenAI's image generation models.

Before you begin

  • Ensure your account has access to image generation models.
  • Choose an image-capable model (e.g., gpt-image-1, dall-e-2).
  • Prepare JSON that matches the image generation schema.

Configure

  1. Create a new Generate operation or click Import Operation.
  2. Select Action Type: Generate Image.
  3. Click Next to import Request and Response profiles.
  4. Configure mapping for your image prompt and parameters.
Request Schema (GENERATE IMAGE)
{
  "prompt": "A futuristic city skyline at sunset with flying cars",
  "model": "gpt-image-1",
  "n": 1,
  "size": "1024x1024",
  "quality": "high",
  "output_format": "png",
  "background": "auto"
}
Response Schema (GENERATE IMAGE)
{
  "created": 1699000000,
  "data": [
    {
      "b64_json": "iVBORw0KGgoAAAANSUhEUgAA..."
    }
  ],
  "background": "opaque",
  "output_format": "png",
  "quality": "high",
  "size": "1024x1024",
  "usage": {
    "input_tokens": 50,
    "output_tokens": 1000,
    "total_tokens": 1050
  }
}
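The image arrives base64-encoded in the b64_json field of the sample response above, so a downstream step must decode it before the bytes can be stored. A sketch (the payload here is stand-in data, not real image bytes):

```python
# Sketch: decode the base64 image payload from a Generate Image response
# and write it to a PNG file.
import base64

# Stand-in response; a real b64_json value encodes the actual image bytes.
response = {
    "data": [{"b64_json": base64.b64encode(b"\x89PNG fake image bytes").decode()}],
    "output_format": "png",
}

image_bytes = base64.b64decode(response["data"][0]["b64_json"])
with open("generated.png", "wb") as f:
    f.write(image_bytes)
```

If you requested n greater than 1, iterate over every entry in data rather than taking only the first.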

Generate embedding

Returns numerical vector embeddings for input text, useful for semantic search, clustering, and similarity comparisons.

Before you begin

  • Ensure your account has access to embedding models.
  • Choose an embedding model (for example, text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002).
  • Prepare JSON that matches the embedding schema.

Configure

  1. Create a new Generate operation or click Import Operation.
  2. Select Action Type: Generate Embedding.
  3. Click Next to import Request and Response profiles.
  4. Configure mapping for your input text.
Request Schema (GENERATE EMBEDDING)
{
  "model": "text-embedding-3-small",
  "input": ["The quick brown fox jumps over the lazy dog"],
  "encoding_format": "float",
  "dimensions": 1536
}
Response Schema (GENERATE EMBEDDING)
{
  "object": "embedding",
  "index": 0,
  "embedding": [
    -0.006929283,
    -0.005336422,
    0.024265893,
    ...
  ]
}
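An embedding is just a vector of floats, so similarity comparisons reduce to simple arithmetic. A minimal sketch of cosine similarity between two embedding vectors (short 3-element vectors stand in for the 1536-value vectors text-embedding-3-small actually returns):

```python
# Sketch: cosine similarity between two embedding vectors, no external libraries.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [-0.006929283, -0.005336422, 0.024265893]
v2 = [-0.006929283, -0.005336422, 0.024265893]
print(cosine_similarity(v1, v2))  # identical vectors give a value of (approximately) 1.0
```

For semantic search you would compute this score between a query embedding and each stored document embedding, then rank by score.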

Generate structured output

Generates responses conforming to a user-defined JSON schema, ensuring consistent and parseable output formats.

Before you begin

  • Define your desired output JSON schema.
  • Prepare the complete text parameter JSON including format configuration.
    Expected format:
{
  "format": {
    "type": "json_schema",
    "name": "name of the schema",
    "schema": {
      // your JSON Schema goes here
    },
    "strict": true
  }
}
  • Choose a compatible model and input.
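To make the expected format concrete, here is a sketch of a complete text parameter built in Python and serialized for the Text Parameter (JSON) field. The product schema is hypothetical; substitute your own JSON Schema under the "schema" key:

```python
# Sketch: build the text parameter JSON for Generate Structured Output.
# The product_info schema below is a made-up example, not part of the connector.
import json

text_parameter = {
    "format": {
        "type": "json_schema",
        "name": "product_info",  # hypothetical schema name
        "schema": {
            "type": "object",
            "properties": {
                "product_name": {"type": "string"},
                "price": {"type": "number"},
                "category": {"type": "string"},
                "in_stock": {"type": "boolean"},
            },
            "required": ["product_name", "price", "category", "in_stock"],
            "additionalProperties": False,
        },
        "strict": True,
    }
}

print(json.dumps(text_parameter, indent=2))
```

With "strict" set to true, the model is constrained to emit output that validates against the schema, which is what makes the response reliably parseable downstream.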

Configure

  1. Create a new Generate operation or click Import Operation.
  2. Select Action Type: Generate Structured Output.
  3. In the Text Parameter (JSON) field, paste your complete text parameter JSON.
  4. Click Next to import dynamically generated Request and Response profiles.
  5. Configure mapping for your model, input, and other parameters.
note

The connector automatically injects the text parameter containing your schema based on your design-time configuration, so you do not need to pass it at runtime in the process.

Request Schema (GENERATE STRUCTURED OUTPUT)
{
  "model": "gpt-4o-mini",
  "input": [
    {
      "role": "user",
      "content": "Extract product information from: iPhone 15 Pro - $999, Electronics, Available now"
    }
  ],
  "instructions": "Extract structured product information from the text.",
  "temperature": 0
}
Response Schema (GENERATE STRUCTURED OUTPUT)
{
  "id": "resp_xyz789",
  "object": "response",
  "created_at": 1699000000,
  "model": "gpt-4o-mini",
  "status": "completed",
  "structured_output": [
    {
      "product_name": "iPhone 15 Pro",
      "price": 999,
      "category": "electronics",
      "in_stock": true
    }
  ],
  "usage": {
    "input_tokens": 50,
    "output_tokens": 30,
    "total_tokens": 80
  }
}

Converse operation

Use the Converse operation to conduct multi-turn conversations with OpenAI models. Unlike Generate, which handles one-time content-generation requests, Converse maintains context across requests within a conversation session. Converse supports one object type:

  • Converse

The Converse operation supports two modes:

  1. Stateful (default) - The connector automatically manages conversation context. If no conversation ID is provided in the request, a new conversation is created by the connector and the ID is injected into the request payload. Subsequent requests with the same conversation ID maintain context.
  2. Stateless - Each request is treated independently. The connector passes the payload through as-is without managing conversation state. A conversation ID can be explicitly supplied by the user if needed.
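The stateful behavior described above can be sketched as a small payload-preparation step: inject a fresh conversation ID only when the payload lacks one and stateful mode is on. This is an illustration of the documented behavior, not the connector's actual code; the conv_ prefix mirrors the sample payloads and the uuid-based generator is an assumption:

```python
# Sketch of stateful-mode handling: inject a conversation ID when missing.
import uuid

def prepare_payload(payload, stateful=True):
    """Return the payload, adding a generated conversation ID in stateful mode."""
    if stateful and "conversation" not in payload:
        return dict(payload, conversation=f"conv_{uuid.uuid4().hex[:8]}")
    return payload  # stateless mode, or ID already supplied: pass through as-is

first = prepare_payload({"model": "gpt-4o-mini", "input": []})
follow_up = prepare_payload(
    {"model": "gpt-4o-mini", "input": [], "conversation": first["conversation"]}
)
assert first["conversation"] == follow_up["conversation"]  # context is preserved
```

Reusing the same conversation ID on later requests is what keeps the model's context across turns.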

Converse

Conducts multi-turn conversations with OpenAI GPT models, maintaining session context for various use cases.

Before you begin

  1. Ensure your OpenAI account has access to the desired model.
  2. Prepare JSON that matches the conversation schema.
  3. Choose an appropriate model (for example, gpt-5, gpt-4, or gpt-4o).
  4. Determine whether you need stateful or stateless conversation mode.

Configure

  1. Create a new Converse operation or click Import Operation.
  2. Select Action Type: Converse.
  3. Configure the Enable Stateful Conversation property:
    • true (default) - Conversation context is managed automatically by the connector
    • false - Each request is treated as a standalone interaction
  4. Click Next to import Request and Response profiles.
  5. Configure mapping for your input messages and model selection.

Operation properties

Enable Stateful Conversation (Boolean) - When enabled, the connector maintains conversation context across requests by automatically generating a Conversation ID if one is not provided. When disabled, each request is treated as a standalone interaction unless a Conversation ID is explicitly supplied.

Request Schema (CONVERSE)
{
  "model": "gpt-4o-mini",
  "input": [
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "What is the capital of France?"
        }
      ]
    }
  ],
  "instructions": "You are a helpful assistant. Provide concise and accurate answers.",
  "temperature": 0.7,
  "max_output_tokens": 500,
  "top_p": 1.0
}
note

The conversation field is optional in the request payload. In stateful mode, if the conversation field is not provided, the connector automatically creates a new conversation and injects the conversation ID into the request. If the conversation field is already present, it is passed through as-is. In stateless mode, the payload is always passed through without modification.

Request Schema (CONVERSE — with existing conversation)
{
  "model": "gpt-4o-mini",
  "input": [
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "And what is its population?"
        }
      ]
    }
  ],
  "conversation": "conv_abc123",
  "instructions": "You are a helpful assistant. Provide concise and accurate answers.",
  "temperature": 0.7,
  "max_output_tokens": 500
}
Response Schema (CONVERSE)
{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1699000000,
  "model": "gpt-4o-mini",
  "status": "completed",
  "conversation": {
    "id": "conv_abc123"
  },
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "The capital of France is Paris."
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 30,
    "output_tokens": 15,
    "total_tokens": 45
  }
}
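Chaining a follow-up turn means reading the conversation ID from the previous response and placing it in the next request. A sketch using the sample payloads above (field names mirror those samples):

```python
# Sketch: carry the conversation ID from one Converse response into the next request.
previous_response = {
    "id": "resp_abc123",
    "conversation": {"id": "conv_abc123"},
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {"type": "output_text", "text": "The capital of France is Paris."}
            ],
        }
    ],
}

follow_up_request = {
    "model": "gpt-4o-mini",
    "conversation": previous_response["conversation"]["id"],  # reuse the session
    "input": [
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "And what is its population?"}
            ],
        }
    ],
}
```

Note the asymmetry in the samples: the response carries the conversation as an object ({"id": ...}), while the request expects the bare ID string.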

Generate operation import wizard

Step 1: Select operation type

  1. In your Boomi process, add a connector step.
  2. Select OpenAI connector.
  3. Choose your configured connection.
  4. Click Import Operation for a new operation.

Step 2: Select action type

For the Generate operation, select from:

  • Generate Text
  • Generate Image
  • Generate Embedding
  • Generate Structured Output

Step 3: Configure structured output (if selected)

  1. If Structured Output is selected, the Output Schema Format (JSON) field appears.
  2. Paste your complete text parameter JSON with format and schema.
  3. The connector validates the JSON structure.

Step 4: Import profiles

  1. Click Next to proceed.
  2. The connector imports Request and Response profiles.
  3. For Structured Output, the Response profile is dynamically generated from your schema.
  4. Review imported profiles and click Finish.

Converse operation import wizard

Step 1: Select operation type

  1. In your Boomi process, add a connector step.
  2. Select OpenAI connector.
  3. Choose your configured connection.
  4. Click Import Operation for a new operation.

Step 2: Select action type

  • Select Converse as the operation type.

Step 3: Configure conversation mode

  1. The Enable Stateful Conversation property is displayed (default: true).
  2. Set to true if you want the connector to automatically manage conversation context (create conversation IDs if not supplied).
  3. Set to false if you want each request handled independently (stateless mode).

Step 4: Import profiles

  1. Click Next to proceed.
  2. The connector imports the Converse Request and Response profiles.
  3. Review imported profiles and click Finish.

Troubleshooting

Operation issues

  • Empty response ("Invalid model specified"): Verify that the model ID is correct.
  • Truncated response (max_output_tokens set too low): Increase max_output_tokens.
  • Schema validation failure ("Invalid Structured Output schema"): Verify the JSON schema syntax.
  • Unexpected output format (wrong object type selected): Re-import the operation with the correct type.

Structured output issues

  • Text Parameter error (missing or invalid JSON): Verify that the format includes type, name, and schema.
  • Schema not applied (schema not saved at design time): Re-import the operation with the schema.
  • Response not parsed (invalid schema structure): Verify that the JSON Schema is valid.

Archiving tab

See the topic Connector operation’s Archiving tab for more information.

Tracking tab

See the topic Connector operation’s Tracking tab for more information.

Caching tab

See the topic Connector operation’s Caching tab for more information.
