Amazon Bedrock (Tech Preview) operation
The Amazon Bedrock operation lets you discover and invoke AI models on Amazon Bedrock. You can use it to generate content, run conversations, or retrieve embeddings using either static or dynamic model discovery modes.
Select the operation type
When you create a new connector operation, you can choose one of the following options:
- Generate – Invoke a model to create text, images, or embeddings.
- Converse – Run conversational exchanges using message history.
Choose Generate for one-shot inference requests, or Converse to maintain chat-style interactions with message context.
Choose the Model Discovery Mode
The connector supports two model discovery modes that determine how models are selected and invoked.
- Dynamic Model Discovery Mode (default) – Retrieves the list of available models directly from AWS Bedrock.
- Static Model Discovery Mode – Allows you to manually specify the model ID, ARN, or Inference Profile ID.
Using dynamic model discovery mode
Use the dynamic mode to retrieve the list of available models from Amazon Bedrock; the connector currently supports the Amazon, Anthropic, and Meta model providers. This mode is useful when you want to explore or select from multiple models without knowing their identifiers in advance.
- Click Import Operation to add or update the Runtime and Connection details.
- Select Dynamic from the Model Discovery Mode options.
- Review the list of available models from AWS Bedrock and select the desired model.
The following are the three model invocation types:
a. On-demand model invocation
This refers to the ability to access specific Amazon Bedrock models hosted within a region as needed. This mode allows you to invoke models dynamically based on your requirements.
For more information, refer to the Amazon Bedrock Supported Models Documentation.
b. Cross-region inference
This allows you to invoke a model endpoint hosted in a different AWS region. Amazon Bedrock provides pre-built cross-region inference endpoints, enabling higher overall throughput — both in Tokens Per Minute (TPM) and Requests Per Minute (RPM) — for supported models.
For more information, refer to the Amazon Bedrock Inference Documentation.
c. Custom deployment
This is a feature that allows you to deploy your customized Amazon Bedrock models for on-demand inference. When you create a custom model deployment, you receive a dedicated endpoint with a unique Amazon Resource Name (ARN) that serves as the model identifier for inference requests.
For more information, refer to the Amazon Bedrock Custom Model Deployment Documentation.
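For reference, a Custom Model Deployment ARN follows the general Bedrock ARN shape sketched below; the region, account ID, and deployment segment are placeholders, and the exact value for your deployment is displayed in the console:

```
arn:aws:bedrock:us-east-1:111122223333:custom-model-deployment/my-deployment-id
```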
- Continue based on your operation type selection:
If you selected the Generate operation, select one of the following sub-actions under Generate:
- Generate Text
This action produces model-generated text (for example, answers, summaries, or chat responses) from text prompts.
Prerequisites
- Ensure your IAM permissions allow invoking Bedrock Runtime in the selected region.
- Choose a text-capable model (for example, Anthropic Claude, Amazon Nova Text, or Amazon Titan Text).
Configuration
- The Bedrock connector lists text-capable models only.
- The wizard imports Request and Response profiles for the chosen model family.
- Tracking direction is output documents; the connector streams your request from the inbound document.
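The exact request fields depend on the profile imported for your model family. As a rough sketch, a request document for an Amazon Titan Text model (for example, amazon.titan-text-express-v1) follows the Titan Text InvokeModel format; the prompt and generation settings below are placeholders:

```json
{
  "inputText": "Summarize the following release notes in two sentences: ...",
  "textGenerationConfig": {
    "maxTokenCount": 512,
    "temperature": 0.7,
    "topP": 0.9,
    "stopSequences": []
  }
}
```

Anthropic and Meta models use different native request shapes, so always build the document against the profile the wizard imports.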
- Generate Image
This action creates images from prompts using an image-capable model.
Prerequisites
- Ensure you have permissions to invoke Bedrock Runtime.
- Choose an image-capable model, which appears with its Inference Type prefix followed by the model name. For example: (ON-DEMAND) amazon.titan-image-generator-v1
Configuration
- After you select Generate Image as the action type, select an image-capable model.
- The wizard imports Request and Response profiles for the image model family.
- Tracking direction is output documents.
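As with text generation, the request shape depends on the model family. A rough sketch for the Amazon Titan Image Generator (amazon.titan-image-generator-v1), following its documented InvokeModel format with placeholder values:

```json
{
  "taskType": "TEXT_IMAGE",
  "textToImageParams": {
    "text": "A watercolor illustration of a lighthouse at sunset"
  },
  "imageGenerationConfig": {
    "numberOfImages": 1,
    "height": 512,
    "width": 512,
    "cfgScale": 8.0
  }
}
```

The response returns the generated images as Base64-encoded strings.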
- Generate Embedding
This action returns numerical vector embeddings for input text.
Prerequisites
- Ensure you have permissions to invoke Bedrock Runtime.
- Choose an embedding-capable model, which appears with its Inference Type prefix followed by the model name. For example: (ON-DEMAND) amazon.titan-embed-text-v1
Configuration
- After you select Generate Embedding as the action type, select an embedding-capable model.
- The wizard imports the embedding Request profile; the response is the model-native JSON array of floats.
- Tracking direction is output documents.
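As a rough sketch, a request for Amazon Titan Text Embeddings (amazon.titan-embed-text-v1) in its documented InvokeModel format; the input text is a placeholder:

```json
{
  "inputText": "The quick brown fox jumps over the lazy dog"
}
```

The model-native response wraps the vector in a JSON object, for example (vector truncated; real embeddings have hundreds of dimensions):

```json
{
  "embedding": [0.133, -0.482, 0.091],
  "inputTextTokenCount": 9
}
```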
If you selected the Converse operation:
- Converse
This operation runs conversational exchanges where you send message history and receive the next assistant response.
Prerequisites
- Ensure you have permissions to invoke Bedrock Runtime.
- Choose a text-capable chat model (for example, Anthropic Claude or other supported chat models).
- Prepare a JSON request that includes a messages array with roles and content.
Configuration
- When you select a chat-capable model object, the operation uses the model’s TEXT input/output schemas for chat-style requests.
- Tracking direction is output documents.
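A rough sketch of a request document with message history, following the Amazon Bedrock Converse API shape; the messages and inference settings are placeholders, and the fields exposed by your imported profile may differ by model:

```json
{
  "system": [
    { "text": "You are a concise support assistant." }
  ],
  "messages": [
    { "role": "user", "content": [{ "text": "How do I reset my password?" }] },
    { "role": "assistant", "content": [{ "text": "Open Settings and choose Reset Password." }] },
    { "role": "user", "content": [{ "text": "What if I lost access to my email?" }] }
  ],
  "inferenceConfig": {
    "maxTokens": 512,
    "temperature": 0.5
  }
}
```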
Limitations
The Converse API in Amazon Bedrock offers a more consistent interface for interacting with models, but it does not provide complete standardization across all models due to Amazon Bedrock limitations.
- Partial Feature Support: Not all models support every parameter or feature within the Converse request. Including unsupported parameters may result in errors such as 400 Bad Request (see the sketch after this list).
- Varying Feature Availability: Feature support still differs by model, so not all capabilities are universally available.
- Payload Reusability: You cannot use a single request payload across all models; adjust the payload accordingly for each model.
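For example, the Converse API's additionalModelRequestFields passes model-specific parameters that the common fields do not cover; the top_k parameter below is Anthropic-specific and would be rejected by models that do not recognize it:

```json
{
  "messages": [
    { "role": "user", "content": [{ "text": "Name three prime numbers." }] }
  ],
  "additionalModelRequestFields": {
    "top_k": 200
  }
}
```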
Developer considerations
Follow these best practices to ensure reliable and robust integrations with the Converse operation:
- Model Capability Checks: Check model capabilities before sending requests to avoid unsupported features.
- Feature Validation: Validate parameters and features to ensure compatibility with the selected model.
- Parameter Handling: Account for differences in parameter support and handling between models.
For more information, refer to the Converse - Amazon Bedrock documentation.
Browse-Time Invokable Identifiers
When you browse and select a model object, the connector adds an overridable field that resolves the invokable modelId/ARN at runtime. The field label depends on the Inference Type of the selected object.
| Inference Type | Identifier Field | Example |
|---|---|---|
| On-Demand Foundation Model | Model ID | amazon.titan-text-express-v1 |
| On-Demand Custom Model | Custom Model Deployment Name | CustomModelDeployment-MyModel |
| Inference Profile | Inference Profile ID | us.amazon.nova-micro-v1:0 |
For more information, refer to the InvokeModel - Amazon Bedrock documentation.
You can override this field using a Dynamic Operation Property at runtime. If the identifier is missing or invalid, the request fails with a model not found error.
Using static model discovery mode
Use the static model discovery mode to manually specify a model identifier. This mode is ideal when you know the exact model you want to invoke. It also lets you work with unsupported or custom models that are not part of the connector’s predefined model list, and acts as a fallback if the dynamic model definitions are unavailable, deprecated, or cause validation failures.
- Click Import Operation to add or update the Runtime and Connection details.
- Select Static from the Model Discovery Mode options.
- Enter the Model ID, Custom Model Deployment ARN, or Inference Profile ID in the Model Identifier field.
- Click Next and then click Finish to generate the schema.
You can retrieve model identifiers by selecting Custom Models under Tune in the Amazon Bedrock console.
Finding Custom Model Deployment ARN and Inference Profile ID
- Open your web browser and navigate to https://console.aws.amazon.com.
- Sign in with your AWS account credentials (IAM user or root account).
- Ensure you have the necessary permissions to access Amazon Bedrock.
- In the AWS Management Console, search for Bedrock in the services search bar at the top and select Amazon Bedrock from the results, or directly navigate to https://console.aws.amazon.com/bedrock.
From the console, you can locate the following identifiers:
- Custom Model Deployment ARN
- Inference Profile ID
- In the Amazon Bedrock console, click Tune in the left sidebar and select Custom models to view the list of custom models, which includes details such as:
- Custom model name
- Model Status
- Customized model
- Type
- Inference set up
- Share status
- Creation time
- Select the custom model whose deployments you want to view.
You will find the Deployment ARN in the CustomModelDeployment section.
For more details, refer to the Amazon Bedrock Custom Model Deployment ARN documentation.
- In the Amazon Bedrock console left navigation pane, under the Infer section, click Cross-Region inference.
This opens the Cross-Region inference dashboard, where you can view the list of available system-defined inference profiles. These are pre-configured profiles that Amazon Bedrock provides for cross-region routing. Each profile shows:
- Model name (e.g., Claude, Llama, etc.)
- Supported regions
- Profile identifier/ID
- Copy the Profile ID. For example, for global Claude Sonnet 4.5, the ID is:
global.anthropic.claude-sonnet-4-5-20250929-v1:0
For more details, refer to the Amazon Bedrock Cross-Region Inference Profiles documentation.
Generate the schema
After you select the discovery mode and model, click Next, then Finish to generate the request and response profiles for your operation.