
Testing and troubleshooting an agent

After you create an agent, you can test it in the Test Agent window before deploying it. The agent’s responses include a Show Trace section where you can see details on the agent’s response, reasoning, and the tools it uses. These details can help you fine-tune your agent and troubleshoot issues with responses and behavior.

The agent trace is only available in the Test Agent window. You must deactivate a deployed agent before you can test the agent and view the agent trace.

[Image: Test Agent window with the agent trace]

For deployed agents, you can view session logs containing details of the agent's behavior, performance, and issues for troubleshooting. Read Tracing sessions of Boomi Agent Garden agents to learn more.

[Animation: opening session logs from the log button in the chat]

Testing your agent

tip

Test your agent iteratively. Test your agent before and after you add tasks and guardrails. Testing iteratively helps you easily identify which configuration is causing an issue and which configurations are working correctly.

  1. In Agent Garden > Agents, open your agent and converse with it in the Test Agent window.

    note

    Testing your agent may count against any usage limits.

  2. Select Show Trace to view details about the agent's reasoning, tool responses, latency, and more to troubleshoot and fine-tune agent behavior.

    [Screenshot: agent trace link]

    You can copy tool response code for Action steps.

Agent trace field reference

Thinking step fields

A thinking step shows the agent's reasoning.

  • rationale: Describes agent reasoning during the step.
  • latencyMs: Total time taken by the LLM to generate and complete its response (in milliseconds).
  • inputTokens: Number of tokens sent to the LLM as input for a reasoning step.
  • outputTokens: Number of tokens generated by the LLM in a reasoning step.
  • ttft: Time elapsed from the LLM request submission until the first set of tokens is received (in milliseconds).

[Image: thinking step showing the detail of LLM reasoning]
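As a rough illustration of how you might work with a copied thinking step, the sketch below assumes the fields above arrive as a JSON-like record. The exact trace format and the values shown are hypothetical, not documented output.

```python
# Hypothetical thinking-step entry using the field names from the
# reference above; all values are illustrative only.
thinking_step = {
    "rationale": "The user asked for an order status, so call the order lookup tool.",
    "latencyMs": 1240,
    "inputTokens": 356,
    "outputTokens": 48,
    "ttft": 410,
}

# A rough throughput estimate: output tokens per second of LLM time.
tokens_per_second = thinking_step["outputTokens"] / (thinking_step["latencyMs"] / 1000)
print(f"Throughput: {tokens_per_second:.1f} output tokens/s")
```

Comparing ttft against latencyMs in this way can help you tell whether a slow step is dominated by queueing before the first token or by generation time.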

Action step fields

An action step shows tool usage and response data.

  • toolName: Name of the tool, created in Agent Designer, that was invoked.
  • toolId: Unique identifier for the tool.
  • toolType: Category of tool used. Valid values: MCP, API, Application, Integration, DataHubQuery, Prompt.
  • requiresApproval: Boolean indicating whether user approval is required before tool execution.
  • input: Parameters passed to the tool (e.g., latitude, longitude).
  • response: Raw response data returned by the tool after execution.
  • latencyMs: Time taken by the tool call to run (in milliseconds).
  • success: Boolean indicating whether the tool execution completed successfully.
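Since you can copy tool response code for Action steps, one way to triage a long trace is to scan the copied steps for failed or slow tool calls. This is a sketch under the assumption that the fields above come through as JSON-like records; the tool names and values are made up for illustration.

```python
# Hypothetical Action-step entries using the field names from the
# reference above; tool names and values are illustrative only.
action_steps = [
    {"toolName": "GetWeather", "toolType": "API", "latencyMs": 320, "success": True},
    {"toolName": "QueryOrders", "toolType": "DataHubQuery", "latencyMs": 4100, "success": False},
]

def problem_tools(steps, slow_ms=2000):
    """Return the names of tools that failed or exceeded the latency threshold."""
    return [s["toolName"] for s in steps if not s["success"] or s["latencyMs"] > slow_ms]

print(problem_tools(action_steps))
```

A tool that appears here repeatedly is a good candidate for the configuration checks in the Troubleshooting tips below.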

Invocation metrics fields

  • count: Number of times the LLM has been called.
  • inputTokenCount: Number of tokens in the input.
  • outputTokenCount: Number of tokens in the output.
  • averageLatency: Average time in milliseconds to process the LLM request.
  • ttft: Time elapsed from the LLM request submission until the first set of tokens is received (in milliseconds).
  • durationMs: Total time to complete the agent invocation, from start to finish (in milliseconds).
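The invocation metrics let you separate LLM time from everything else in an invocation. As a sketch, assuming the fields above (with hypothetical values), count multiplied by averageLatency approximates total LLM time, and the remainder of durationMs is spent on tools and orchestration:

```python
# Hypothetical invocation-metrics entry; all values are illustrative.
metrics = {
    "count": 3,
    "inputTokenCount": 900,
    "outputTokenCount": 210,
    "averageLatency": 1500,
    "ttft": 380,
    "durationMs": 5200,
}

# Approximate total LLM time across all calls in the invocation.
llm_time_ms = metrics["count"] * metrics["averageLatency"]
# Time spent outside the LLM (tool calls, orchestration, etc.).
other_time_ms = metrics["durationMs"] - llm_time_ms
print(llm_time_ms, other_time_ms)
```

If other_time_ms dominates, look at the latencyMs of individual Action steps rather than at the model.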

Guardrail fields

Topic policy

  • topicPolicy: Describes how the LLM applied topic-based filtering.
  • name: The name of the policy from the Guardrails tab.
  • type: The type of restriction (e.g., DENY).
  • Action: The action taken (e.g., BLOCKED).

Word policy

  • wordPolicy: Describes word-based filtering in which a blocked word in the user's input prevents the agent from responding.
  • customWords: Displays the number and list of blocked words you configured in the guardrail. Match is the blocked word and Action is the action the agent took ("BLOCKED").
  • managedWordLists: Displays the number and list of blocked words applied by default for all agents. Match is the blocked word, Action is the action the agent took ("BLOCKED"), and Type is the category of the word (e.g., PROFANITY, INSULTS).

Sensitive information policy

  • sensitiveInformationPolicy: Displays the number and list of RegEx matches for patterns you configured to prevent the agent from processing or producing sensitive information.
  • Name: The name of the policy from the Guardrails tab.
  • Match: The word or phrase that matched.
  • regex: The pattern it matched.
  • Action: The action the agent took ("BLOCKED").
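To see how a RegEx guardrail pattern relates to the Match it reports, here is a sketch using Python's standard re module. The SSN pattern and prompt are hypothetical examples, not patterns shipped with the product.

```python
import re

# Hypothetical sensitive-information pattern: a US Social Security
# number. A guardrail configured with this regex would report the
# matched text as Match and the action as BLOCKED.
ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

prompt = "My SSN is 123-45-6789, can you update my account?"
match = ssn_pattern.search(prompt)
print(match.group() if match else "no match")
```

Testing your patterns against sample prompts like this, before adding them to the Guardrails tab, helps avoid patterns that either miss sensitive values or block harmless text.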

Content policy

  • contentPolicy: Default content filters that apply to all agents for the following categories: HATE, SEXUAL, VIOLENCE, INSULTS, MISCONDUCT, PROMPT_ATTACK. These filters prevent agents from behaving inappropriately or unsafely.
  • Type: The category of filter that was triggered.
  • Confidence: A numerical score between 1 and 100 indicating how strongly the prompt triggered the filter.
  • Filter Strength: The strength at which the filter is configured (the only value is "HIGH").
  • Action: The action the agent took ("BLOCKED").
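When several guardrail policies appear in one trace, it can help to collect every BLOCKED entry in one pass. The sketch below assumes the policy fields above arrive as JSON-like records; the trace shape and values are hypothetical, not a documented export format.

```python
# Hypothetical guardrail trace combining the policy fields above;
# the structure and values are illustrative only.
guardrail_trace = {
    "topicPolicy": [{"name": "OffTopic", "type": "DENY", "Action": "BLOCKED"}],
    "contentPolicy": [
        {"Type": "PROMPT_ATTACK", "Confidence": 92, "Action": "BLOCKED"},
        {"Type": "INSULTS", "Confidence": 12, "Action": "NONE"},
    ],
}

def blocked_entries(trace):
    """Return (policy, entry) pairs for every entry with a BLOCKED action."""
    return [
        (policy, entry)
        for policy, entries in trace.items()
        for entry in entries
        if entry.get("Action") == "BLOCKED"
    ]

for policy, entry in blocked_entries(guardrail_trace):
    print(policy, entry)
```

Seeing all blocks together makes it easier to decide which single policy to relax when an agent is over-restricted.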

Troubleshooting tips

Instructions

  • Be specific and detailed: You may need to adjust your instructions or add additional tasks so that the Large Language Model (LLM) understands how to behave in certain situations. It may not have enough information or context to act appropriately. This can cause incorrect reasoning to show in the trace.

  • Include timelines and action triggers: Tell the agent when to do an action. This can correct issues where the agent is not following instructions in the way you want it to. For example, “After you get information from the database about X, confirm with the user that they want to do X.” “Before you do X, ask the user for the X parameter to make the API call using the X tool.”

Read Writing tasks and instructions for instruction best practices.

Tools

  • Make changes to tool configuration: Your tool configuration may need adjustment to work correctly. The trace can indicate if the agent is having trouble using a specific tool during a tool step. Review Building an agent for more information.

  • Ensure your tool is linked to the correct task: Your tool needs to be attached to the same task where it is relevant. You can attach a tool to multiple tasks. You may need to add additional instructions in the task that tell the agent when to use the tool for that particular outcome. For example, “Use the X tool to query the database and get information about X.”

API tools

  • Remove any extra spaces surrounding parameters: Extra spaces can cause an error when the agent calls the API.

  • Test API authentication: Test the API endpoint using Postman or a similar tool. Ensure the API call is successful and you have entered the correct credentials.

  • Check for duplication: Do not repeat the base URL in the API tool's endpoint path. The API tool joins the base URL and the endpoint path to create the API call. For example, if you enter the base URL and then enter the base URL plus the endpoint path as the path, the tool calls baseURLbaseURL/endpoint path, causing an error.

    [Image: example of base URL duplication]
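The duplication problem is easy to see with plain string concatenation. The sketch below uses a hypothetical base URL and path; it is an illustration of the joining behavior described above, not the tool's actual implementation.

```python
# Hypothetical base URL and paths; the tool joins base URL + endpoint path.
base_url = "https://api.example.com"
good_path = "/v1/orders"                        # correct: path only
bad_path = "https://api.example.com/v1/orders"  # wrong: repeats the base URL

good_call = base_url + good_path
bad_call = base_url + bad_path  # base URL appears twice, producing a malformed URL

print(good_call)
print(bad_call)
```

Entering only the path portion in the endpoint field avoids the malformed second form.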

Guardrails

  • Adjust guardrails: Evaluate and adjust guardrails so they do not limit the agent from accomplishing the task. Guardrails cause the agent to respond with the blocked message you configured (for example, "I'm sorry but I'm only able to provide an order status and customer support contact information."). The trace can indicate when and how the LLM triggered the guardrail while following instructions.

[Image: word policy matching a word with a BLOCKED action type]

Troubleshooting agent performance

  • Consider instruction clarity: Evaluate instructions and ensure that they do not prompt conflicting actions, contradict the LLM's reasoning and logic, or cause unnecessary actions.

  • Quick Inference: For simple agents, such as agents that perform sentiment analysis, summarization, and data formatting, turn on Quick Inference in the agent Profile configuration.
