Performance and resource considerations for advanced properties
The Advanced tab includes properties that control how the runtime allocates threads, memory, disk, and execution queues. Misconfigured values can lead to resource exhaustion, queued or discarded executions, or degraded throughput. The sections below describe the resource implications of each property.
Execution control
- Maximum Simultaneous Executions per Node (com.boomi.container.maxRunningExecutions)
This property sets the maximum number of concurrent executions that can run on a single node. It applies to both thread-based and JVM-based (forked) execution modes.
The default values are 50 for JVM-based executions and 100 for thread-based executions. This property applies to all execution modes: low latency, bridge, and general.
When the limit is reached, new execution requests enter a queue rather than starting immediately. Use the Maximum Queued Executions per Node and Timeout for Queued Executions per Node properties to control queue behavior. If this property is not set, queuing is disabled and the number of concurrent executions is unlimited, which can exhaust available memory and CPU under high load.
Resource tradeoffs. Increasing this value allows more concurrent work but increases memory and CPU consumption proportionally. Setting an excessively high value on a node without sufficient resources can degrade performance for all running executions.
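As a hedged sketch (assuming the usual Boomi convention that container properties live in conf/container.properties under the runtime installation directory; the value shown is illustrative, not a recommendation):

```properties
# Cap concurrent executions on this node. If this property is unset,
# concurrency is unbounded and queuing is disabled.
com.boomi.container.maxRunningExecutions=50
```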
- Maximum Queued Executions per Node (com.boomi.container.maxQueuedExecutions) and Maximum Queued Forked Executions per Node (com.boomi.container.forker.maxQueuedForkedExecutions)
These properties set the maximum number of executions that can wait in the queue when the simultaneous execution limit is reached. Boomi does not enforce a maximum value.
Queued executions wait for up to the duration defined by Timeout for Queued Executions per Node (com.boomi.container.queuedExecutionTimeout), which defaults to 59 seconds. When the queue is full and a new execution cannot be accepted, the execution is discarded and the following warning is logged:
Submitted process <execution ID> discarded due to system load: execution queue maximum size reached - queue size: <maxQueuedExecutions>
A ContainerOverloadedException is also thrown with the message:
Submitted process <execution id>: execution queue reached the maximum size - queue size <maxQueuedExecutions>. Process discarded due to system load.
Resource tradeoffs. Larger queue sizes have minimal direct resource impact, but a large backlog of queued executions can indicate that the node is chronically overloaded and needs either additional nodes or a lower execution limit.
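The admission behavior described above can be sketched in Python (hypothetical names and values; not Boomi source code):

```python
from collections import deque

MAX_RUNNING = 50  # com.boomi.container.maxRunningExecutions (illustrative)
MAX_QUEUED = 25   # com.boomi.container.maxQueuedExecutions (illustrative)

running = set()
queue = deque()

def submit(execution_id):
    """Return 'running', 'queued', or 'discarded' for a new execution."""
    if len(running) < MAX_RUNNING:
        running.add(execution_id)          # below the execution limit: start now
        return "running"
    if len(queue) < MAX_QUEUED:
        queue.append(execution_id)         # limit reached: wait in the queue
        return "queued"
    # Queue full: the runtime logs the warning above and throws
    # ContainerOverloadedException; the execution is discarded.
    return "discarded"
```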
Forked execution
- Maximum Simultaneous Forked Executions per Node (com.boomi.container.forker.maxRunningForkedExecutions) and Maximum Simultaneous Forked JVMs per Node (com.boomi.container.forker.maxRunningForkedJVMs)
These two properties work together to limit concurrent forked executions on a node.
Maximum Simultaneous Forked Executions per Node counts only forked process executions. Maximum Simultaneous Forked JVMs per Node counts all JVMs on the node, including process executions, execution workers, and browse workers.
The effective execution limit on a node is the smaller of these two calculations:
- Maximum Simultaneous Forked JVMs per Node minus currently running workers
- Maximum Simultaneous Forked Executions per Node
To limit processes running in bridge or low latency mode separately from general-mode executions, use these two properties in combination. Setting Maximum Simultaneous Forked JVMs per Node lower than the sum of all possible JVMs creates a cap that applies across all execution types on the node.
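The effective-limit rule above can be sketched as a small Python function (names are hypothetical):

```python
def effective_execution_limit(max_forked_jvms: int,
                              running_workers: int,
                              max_forked_executions: int) -> int:
    """Smaller of (max forked JVMs minus running workers) and max forked executions."""
    # maxRunningForkedJVMs counts all JVMs on the node, so subtract the
    # JVMs already consumed by execution workers and browse workers.
    jvm_headroom = max(max_forked_jvms - running_workers, 0)
    return min(jvm_headroom, max_forked_executions)
```

For example, with 20 JVMs allowed, 15 workers running, and a forked execution limit of 10, only 5 forked executions can start.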
- Maximum Execution Threads per Forked Execution (com.boomi.container.forker.maxUserThreads) and Maximum Execution Threads per Execution Worker (com.boomi.container.worker.maxUserThreads)
These properties set the maximum number of threads a single forked execution or Atom worker process can create. Boomi does not impose an upper limit; the practical ceiling is set by JVM and operating system limits.
These threads are allocated exclusively for process execution and are not shared with other runtime processes.
When the limit is reached, a SecurityException is thrown with the message:
Exceeded thread limit of <maxUserThreads value>
Resource tradeoffs. Higher thread limits increase the potential for parallel work within a single execution but consume more memory and OS thread handles per JVM.
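A minimal illustrative fragment, assuming these properties are set in the runtime's container.properties file; the values are examples, not recommendations:

```properties
# Per-process thread caps. Higher values allow more parallelism inside one
# execution but cost memory and OS thread handles per JVM.
com.boomi.container.forker.maxUserThreads=100
com.boomi.container.worker.maxUserThreads=100
```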
- Maximum Forked Execution Time in Cloud (com.boomi.container.maxExecutionTime)
This property sets the maximum duration a forked execution process is allowed to run. Boomi does not enforce a maximum value for this property.
When a process exceeds the limit, the following are logged:
Attempting to cancel execution <executionId>, started at <Start Date Time>
Aborting <Process>: Process exceeded maximum execution time limit
Resource tradeoffs. Increasing this value reduces the risk of terminating legitimate long-running processes but increases the risk of threads or JVMs remaining occupied by hung or stalled executions. A high value combined with high concurrency can leave insufficient resources available for new executions.
Flow control
- Maximum Flow Control Units (com.boomi.container.flowControl.maxUnitCount)
This property sets an upper bound on the number of flow control units, overriding whatever value is configured in the Flow Control step. Boomi does not enforce a maximum value.
When a process specifies a higher chunk count than this property allows, the runtime silently uses this property's value instead and logs:
Limiting given chunkCount <flow shape chunk count> to maximum <maxChunkCount>
No error is thrown.
Resource tradeoffs. Higher values allow more concurrent threads or JVMs from Flow Control steps, which increases memory and CPU consumption. Ensure that sufficient heap and CPU are available before raising this value.
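The clamping behavior amounts to a simple minimum plus a logged warning; a Python sketch (hypothetical names, not Boomi source):

```python
import logging

def resolve_chunk_count(requested: int, max_chunk_count: int) -> int:
    """Clamp the Flow Control step's chunk count to the container maximum."""
    if requested > max_chunk_count:
        # The runtime logs a message but raises no error.
        logging.warning("Limiting given chunkCount %d to maximum %d",
                        requested, max_chunk_count)
        return max_chunk_count
    return requested
```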
Listener
- Maximum Number of Active Batches per Listener (com.boomi.container.connector.sdkListener.maxBatchLimit)
This property sets the maximum number of active listener batches a single listener instance can create for SDK connectors. A listener batch is a set of documents being processed by the listener at a given time. Boomi does not enforce a maximum value.
When the limit is reached, an IllegalStateException is thrown:
exceeded active batch limit: <maxBatchLimit>
Resource tradeoffs. Increasing this value has negligible memory impact. Each batch holds a batch ID and a reference to a data store; the actual batch data is stored on disk rather than in heap.
Memory and document mapping
- Maximum Document Elements Cache Size (com.boomi.container.transform.maxCacheSize)
This property sets the maximum number of document elements (data nodes) to keep in memory per parsed document when Low Memory Mode is active. The default is 10,000.
When a document is parsed for mapping, a tree of data nodes is created corresponding to each profile element and its associated data. If the number of nodes in a parsed document exceeds this threshold, additional nodes are written to disk rather than held in memory.
Lowering this value reduces heap usage during mapping at the cost of increased disk I/O. Raising it keeps more data in memory, reducing disk access but increasing memory pressure. For large documents, tune this value in combination with heap allocation to avoid excessive disk thrashing.
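For illustration, the documented default could be stated explicitly in container.properties (the file location is an assumption based on common Boomi convention):

```properties
# Low Memory Mode: keep at most this many parsed data nodes in heap per
# document; nodes beyond the threshold are written to disk. 10000 is the
# documented default; lower it to trade heap for disk I/O.
com.boomi.container.transform.maxCacheSize=10000
```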
- Clean Up Custom Script Engine Data on Completion (com.boomi.container.resetScriptEngineData)
When set to true (the default), this property causes the scripting engine to release any key/value data retained from the current execution as soon as the execution completes.
If disabled, the scripting engine retains that data in memory until the start of the next execution, at which point it is cleared. This increases the memory footprint between executions and can cause unexpected behavior if stale data is inadvertently accessible in the interim.
Recommendation. Leave this property set to true unless you have a specific reason to retain script engine data between executions and have accounted for the additional memory usage.
Purge settings
- Compress History after x Days (com.boomi.container.compressDays)
This property sets the number of days after which logs, processed documents, and temporary data are automatically compressed on disk.
Disabling compression (by not setting a value) causes uncompressed data to accumulate on disk, increasing storage consumption over time. Compression reduces disk footprint with minimal runtime performance impact.
- Purge Schedule for Components, Logs, Processed Documents, and Temporary Data
The following properties control how long data is retained before being purged:
| Property | Scope |
| --- | --- |
| com.boomi.container.component.purgeDays | Component data (default: no purging, value 0) |
| com.boomi.container.logs.PurgeDays | Log data |
| com.boomi.container.data.PurgeDays | Processed document data |
| com.boomi.container.temp.PurgeDays | Temporary data |

The log, processed document, and temporary data properties default to the account-level property com.boomi.container.purgeDays, which defaults to 30 days.
Resource tradeoffs. Shortening retention periods can cause an initial spike in CPU and disk I/O as the purge process catches up to the new targets. After the initial purge completes, performance returns to normal. Plan purge schedule changes during off-peak periods when possible.
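A sketch of the fallback hierarchy as container.properties entries (the file location is assumed, and the override values are invented for illustration):

```properties
# Account-level default retention; logs, processed documents, and temporary
# data fall back to this value (30 days) when not set individually.
com.boomi.container.purgeDays=30
# Illustrative per-category overrides:
com.boomi.container.logs.PurgeDays=14
com.boomi.container.data.PurgeDays=7
com.boomi.container.temp.PurgeDays=7
```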
- Purge Manager Threads (com.boomi.container.purge.numPurgeThreads)
This property sets the number of threads used for purging logs and data. The value cannot be set to 0.
- Default for runtimes: 1
- Default when run from the Runtime Maintenance Server: 10 (configured via the --num-purge-threads command-line argument)
Setting a value greater than 1 creates a thread pool of that size. Multiple concurrent purge threads increase total thread count and can increase memory and CPU usage when multiple threads are actively purging.
Note for clusters and runtime clouds. In a cluster or cloud, all nodes purge files from their local working data directory. The head node is additionally responsible for compressing and purging files from the shared file system, and it must complete file share purges before purging its own local working directory. If the head node falls behind on file share purges, local working directory purges are deferred until a head node switch occurs, which can exhaust disk space and degrade performance on the local working directory. Monitor head node purge activity in high-throughput environments.
EDI control ID cache
- Control ID Cache Idle Timeout (com.boomi.container.controlid.idleTimeoutSec) and Control ID Cache Time to Live (com.boomi.container.controlid.ttlTimeoutSec)
These properties improve the performance of inbound EDI document handling in multi-node runtime clouds by caching control IDs in memory rather than fetching them from storage on each request.
| Property | Maximum value | Description |
| --- | --- | --- |
| com.boomi.container.controlid.idleTimeoutSec | 300 seconds | Time a control ID object remains in the cache without being accessed before it expires |
| com.boomi.container.controlid.ttlTimeoutSec | 300 seconds | Maximum time a control ID object remains in the cache regardless of access |
| com.boomi.container.controlid.maxids | 20 | Maximum number of control IDs held in the cache at one time |

Resource tradeoffs. Maintaining control IDs in cache improves performance at the expense of memory. Both timeout properties determine how long unused control ID objects remain in cache; increasing them beyond the defaults keeps more objects in memory for longer. The maximum values are enforced by Boomi and cannot be exceeded.
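An illustrative container.properties fragment pinning each property at its Boomi-enforced maximum (the file location is an assumption based on common Boomi convention):

```properties
# EDI control ID cache tuning. Boomi enforces maximums of 300 seconds for
# both timeouts and 20 for the cached ID count; higher values are rejected.
com.boomi.container.controlid.idleTimeoutSec=300
com.boomi.container.controlid.ttlTimeoutSec=300
com.boomi.container.controlid.maxids=20
```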