Amazon Bedrock (Tech Preview) connector
The Amazon Bedrock connector provides a seamless, secure, and robust integration point between the Boomi integration platform and the Amazon Bedrock API. It allows you to build processes that leverage powerful generative AI capabilities without managing the complexities of REST API calls, request signing, and authentication from scratch.
The Amazon Bedrock connector enables Boomi processes to interact with supported generative AI models on Amazon Bedrock by invoking Bedrock Runtime with model-specific JSON payloads. You can:
- Generate text
- Create images
- Produce vector embeddings
- Converse (run conversational exchanges using message history)
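Each model family defines its own request schema for Bedrock Runtime. As an illustrative sketch only (the payload shapes below follow published Amazon Titan Text and Anthropic Claude Messages schemas and are not connector output; always check the model's parameter reference), model-specific JSON bodies look like this:

```python
import json

# Illustrative request bodies only -- each model provider defines its own
# schema; verify fields against the Bedrock model parameter reference.

# Amazon Titan Text style payload (text generation)
titan_body = json.dumps({
    "inputText": "Summarize the attached support ticket in two sentences.",
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
})

# Anthropic Claude Messages style payload (conversational exchange)
claude_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Draft a polite reply to this customer email."}
    ],
})

print(json.loads(titan_body)["textGenerationConfig"]["maxTokenCount"])  # 512
```

In a Boomi process, a payload like this is typically built with a Map step against the operation's JSON request profile before it reaches the connector step.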
The connector dynamically discovers models available to your AWS account and region, and provides streamlined configuration aligned with Boomi best practices.
For more information, refer to What is Amazon Bedrock? in the Amazon Bedrock documentation.
Benefits
- Simplified access to multiple Bedrock model families (foundation and custom) through a single connector
- Dynamic browsing of available models filtered by output modality (Text, Image, Embedding)
- Secure AWS SigV4 request signing with support for Access Keys and IAM Roles Anywhere
- Consistent request/response handling with JSON profiles aligned to model schemas
- Per-document invocation model for easy scaling and process control
Prerequisites
- Ensure that your Runtime environment is installed and running before you use the connector.
- An AWS account with access to Amazon Bedrock in the target region
- Appropriate IAM permissions to list models and invoke Bedrock Runtime
- One of the following credential options:
  - AWS Access Key ID and Secret Access Key
  - IAM Roles Anywhere profile with client certificate and private key
Connector configuration
To configure the connector, create these reusable components and then add them to your process:
- Amazon Bedrock connection
- Amazon Bedrock operations (Generate, Converse)
After you build the connection and operation, place the connector step in your process and map the request/response as required by your use case.
Supported versions and SDKs
- Boomi Connector SDK: 2.25.0
- Java: 8
- Uses Boomi AWS utilities for AWS SigV4 signing
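The connector performs SigV4 signing through Boomi AWS utilities, so no signing code is needed in your process. For reference, the AWS-documented SigV4 key-derivation chain that underlies any such signing looks like this (a minimal sketch with a dummy secret, not connector code):

```python
import hmac
import hashlib

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key via the documented HMAC chain:
    date -> region -> service -> "aws4_request"."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Sign a placeholder string-to-sign for a bedrock call (dummy credentials).
key = sigv4_signing_key("EXAMPLE_SECRET", "20250101", "us-east-1", "bedrock")
signature = hmac.new(key, b"string-to-sign-goes-here", hashlib.sha256).hexdigest()
print(len(signature))  # 64 hex characters
```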
Connector use case video
This video provides a step-by-step overview of generating a Shopify order summary using the Amazon Bedrock connector.
Business Use Cases Supported
- Generate high-quality product descriptions, marketing copy, emails, chat responses, or support message drafts from business input text.
- Automatically summarize long documents (contracts, reports, emails, tickets) into concise insights using text-generation models.
- Create product visuals, marketing banners, concept designs, and rapid UI/UX prototypes using AI-generated images.
- Produce semantic embeddings from documents, products, or knowledge-base content to enable intelligent search, similarity matching, and recommendations.
- Build intelligent chatbots and virtual assistants that maintain conversation context for HR, IT helpdesk, support, onboarding, and workflow guidance.
- Enable interactive data-driven assistants for multi-turn interactions in CRM, ERP, and enterprise applications to help users query, explore information, and take guided actions.
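For the embedding-driven use cases above, downstream similarity matching is commonly done with cosine similarity over the vectors the model returns. A minimal sketch (the vectors are hypothetical stand-ins for real Bedrock embeddings, which typically have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

doc_vec = [0.12, 0.87, 0.05]    # hypothetical embedding of a document
query_vec = [0.10, 0.90, 0.02]  # hypothetical embedding of a search query
print(round(cosine_similarity(doc_vec, query_vec), 3))
```

Ranking documents by this score against a query embedding is the core of semantic search and recommendation flows.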
Known Limitations
- Latency: The performance of the Invoke Model operation is primarily dependent on the latency of the Amazon Bedrock service and the complexity of the request. Response times can range from milliseconds to several seconds.
- Rate Limiting: All calls are subject to AWS Bedrock's API rate limits. If a process exceeds these limits, the API will return an HTTP 429 Too Many Requests error. The connector will report this as an exception.
- Resource Limits: Subject to AWS Bedrock service quotas (e.g., tokens per request, requests per second per model).
- Memory: The connector processes request and response payloads in memory. Very large payloads (megabytes) could impact the memory usage of the Boomi Atom.
- Payload Size: Limited by Bedrock API payload size limits for both request and response bodies.
- Streaming: Streaming outputs are handled as a single response payload; very large outputs may require Atom resource tuning.
- Application inference profiles: Not supported.
- Provisioned throughput models: Not supported.
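Because throttled calls surface as exceptions, callers often wrap invocations in a retry with exponential backoff. A minimal sketch of that pattern (the `ThrottlingError` class and helper names are hypothetical, simulating the HTTP 429 case; in a Boomi process you would typically use the process's built-in retry options instead):

```python
import time

class ThrottlingError(Exception):
    """Stands in for the HTTP 429 Too Many Requests exception."""

def invoke_with_retry(call, max_attempts=4, base_delay=0.01):
    """Retry a throttled call with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ThrottlingError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))

# Simulate a call that is throttled twice before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ThrottlingError("429 Too Many Requests")
    return "model output"

result = invoke_with_retry(flaky_call)
print(result)  # model output
```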