OpenAI (Tech Preview) connector
The OpenAI connector provides a seamless, secure, and robust integration point between the Boomi integration platform and the OpenAI API. It enables Boomi developers to build processes that leverage generative AI capabilities without managing the complexities of REST API calls, authentication, and request handling from scratch. By invoking OpenAI API endpoints with JSON payloads, a process can:
- Generate Text
- Generate Images
- Generate Embeddings
- Generate Structured Outputs
The connector provides streamlined configuration aligned with Boomi best practices for enterprise AI integrations.
For more information, refer to the OpenAI API documentation.
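As an illustration of what the connector abstracts away, the sketch below builds the kind of JSON request body a Generate Text action sends to the OpenAI API. The model name and parameter values are examples, not connector defaults:

```python
import json

# Illustrative Generate Text request body, shaped like an OpenAI
# Responses API payload. Model and values are examples, not defaults.
payload = {
    "model": "gpt-4o-mini",            # any model your account can access
    "input": "Summarize this support ticket in one sentence.",
    "max_output_tokens": 150,          # cap response length (cost control)
    "temperature": 0.2,                # lower values = more deterministic
}

body = json.dumps(payload)             # the JSON document sent over HTTPS
print(body)
```

The connector constructs and sends a body like this for you, adding the `Authorization: Bearer <API key>` header from the connection settings.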
Benefits
- Simplified access to OpenAI's complete model family through a single connector
- Support for multiple Action types: Generate Text, Images, Embeddings, and Structured Output
- Secure API Key (Bearer Token) authentication
- Dynamic schema generation for Structured Output based on user-defined JSON schemas
- Consistent request/response handling with JSON profiles aligned to OpenAI API schemas
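The dynamic-schema benefit can be pictured with a small example: a user-defined JSON schema like the hypothetical invoice extractor below is what the connector turns into a Structured Output response profile:

```python
import json

# Hypothetical user-defined JSON schema for a Structured Output action.
# The connector generates a matching response profile from a schema like this,
# and the model is constrained to return JSON that conforms to it.
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor":   {"type": "string"},
        "total":    {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["vendor", "total", "currency"],
    "additionalProperties": False,
}

print(json.dumps(invoice_schema, indent=2))
```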
Connector configuration
- OpenAI connection
- OpenAI operations
After you build the connection and operation, place the connector step in your process and map the request/response as required by your use case.
Prerequisites
- An OpenAI account with API access
- A valid OpenAI API Key (Secret Key) from your OpenAI account
- Appropriate usage limits and billing configured in your OpenAI account
- Access to the desired models (some models require specific API access approval)
Supported versions and SDKs
- Boomi Connector SDK: 2.25.0
- Java: 8
- OpenAI API Version: v1 (latest)
Business use cases
- Generate high-quality product descriptions, marketing copy, emails, chat responses, or support message drafts from business input text.
- Automatically summarize long documents (contracts, reports, emails, tickets) into concise, actionable insights.
- Create product visuals, marketing banners, concept designs, and rapid UI/UX mockups using AI-generated images from text prompts.
- Enable semantic search and similarity matching across documents, knowledge bases, tickets, or customer records beyond keyword-based search.
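The semantic-search use case boils down to comparing embedding vectors. The sketch below uses tiny illustrative vectors (real ones from the Generate Embeddings action have hundreds or thousands of dimensions) and a cosine-similarity helper to pick the closest document:

```python
import math

# Cosine similarity between two equal-length vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-dimensional embeddings; real vectors come from the API.
query = [0.1, 0.3, 0.5]
docs = {
    "ticket_42": [0.1, 0.29, 0.52],   # semantically close to the query
    "ticket_99": [0.9, -0.2, 0.1],    # semantically distant
}

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # ticket_42 is nearest to the query in vector space
```

In a Boomi process, you would embed the query and the candidate texts via the Generate Embeddings action and store or compare the vectors downstream.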
Known Limitations
- Latency: The performance of OpenAI operations depends on OpenAI service latency, the selected model, and request complexity. Response times can range from milliseconds to several seconds.
- Rate Limiting: All requests are subject to OpenAI API rate limits. If a process exceeds these limits, the API returns an HTTP 429 Too Many Requests error, which the connector reports as an exception.
- Usage Limits: Requests are subject to OpenAI account-level usage limits, such as tokens per request, tokens per minute, and requests per minute. Limits vary by model.
- Memory: The connector processes request and response payloads in memory. Very large payloads (for example, large prompts, embeddings, or Base64-encoded images) can impact Boomi Runtime memory usage.
- Payload Size: Payloads are limited by the OpenAI API payload size and the maximum token limits of the selected model.
- Streaming: Streaming responses are aggregated into a single response payload. Very large streamed outputs may require Atom resource tuning.
- Image Generation: Generating multiple images in a single request can increase payload size and memory consumption on the runtime.
- Model Availability: Model availability, behavior, and limits are controlled by OpenAI and may change over time. Deprecated models may no longer be available.
Best Practices
Performance Optimization
- Use appropriate models for your use case (smaller models for simple tasks).
- Set a reasonable `max_output_tokens` value to control response length.
- Batch embedding requests when processing multiple texts.
- Use lower `temperature` values for deterministic outputs.
Cost Management
- Monitor token usage via response metadata.
- Use `gpt-4o-mini` for development and testing.
- Implement caching for repeated queries.
- Set usage limits in your OpenAI account.
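Monitoring token usage is straightforward because OpenAI responses include a `usage` object. The sketch below parses it from a hypothetical response document (exact field names vary slightly by endpoint, e.g. `prompt_tokens`/`completion_tokens` for chat completions):

```python
import json

# Hypothetical response document; the "usage" object mirrors what the
# OpenAI API returns alongside the generated content.
response_doc = json.loads("""
{
  "model": "gpt-4o-mini",
  "usage": {"prompt_tokens": 112, "completion_tokens": 48, "total_tokens": 160}
}
""")

usage = response_doc["usage"]
print(f'total tokens billed: {usage["total_tokens"]}')
```

In a process, you could map these fields to a document property or a reporting store to track spend per integration.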
Security
- Use the `user` parameter for abuse monitoring.
- Implement input validation before sending to the API.
- Log request/response metadata (not content) for auditing.
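A minimal sketch of metadata-only audit logging, assuming a response document with `id`, `model`, and `usage` fields (the helper name and shape are illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Log only metadata for auditing; never the prompt or completion text.
def audit_log(response_doc):
    meta = {
        "request_id": response_doc.get("id"),
        "model": response_doc.get("model"),
        "total_tokens": response_doc.get("usage", {}).get("total_tokens"),
    }
    logging.info("openai call: %s", meta)
    return meta

meta = audit_log({
    "id": "resp_123",
    "model": "gpt-4o-mini",
    "usage": {"total_tokens": 160},
})
```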
Error Handling
- Always implement retry logic for transient errors.
- Use exponential backoff for rate-limit errors.
- Log errors with request IDs for troubleshooting.
- Set appropriate timeouts for your use case.
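The retry-with-exponential-backoff pattern above can be sketched as follows. `call_api` is a stand-in for the actual API invocation, and a `RuntimeError` stands in for the HTTP 429 exception the connector raises:

```python
import random
import time

# Retry a callable with exponential backoff plus a small random jitter.
def with_backoff(call_api, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        try:
            return call_api()
        except RuntimeError:                 # stand-in for a 429 error
            if attempt == max_retries - 1:
                raise                        # give up after the last attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated flaky call: fails twice with a rate-limit error, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
print(result)  # "ok" after two retries
```

Within Boomi, the same effect can be achieved with a Try/Catch shape and a looping branch that increases a wait time on each retry.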
Tracked properties
This connector has no predefined tracked properties. See the topic Adding tracked fields to a connector operation to learn how to add a custom tracked field.