Atom Queues - Legacy
Atom Queues are now Legacy. We recommend using Event Streams for your event-based and messaging use cases.
Atom Queues are a lightweight messaging solution embedded directly within integration runtimes that provides basic eventing capabilities between Boomi integration processes. Messages are managed by a local message broker called the Shared Queue Server and are stored locally for guaranteed delivery. Atom Queues support Point-to-Point and Publish/Subscribe messaging patterns between Boomi integration processes only; they are not accessible to external clients.
While Atom Queues are easy to deploy, they lack many features of an enterprise queuing system, such as message filtering, prioritization, external client access, and variable persistence guarantees. If Atom Queues do not meet your requirements, consider Boomi Event Streams, a third-party cloud messaging solution, or a locally installed messaging solution connected through the JMS or Kafka connectors, for example.
How Atom Queues Work
The following describes the main concepts of Atom Queues.
- Reusable queue components are configured at the account level. Each queue component specifies the configuration of a message queue, including its name and the messaging model with which the message queue can be used, either Point-to-Point or Publish/Subscribe.
- A runtime’s shared queue server creates a message queue upon invocation of a Boomi Atom Queue connector Get, Send, or Listen operation that specifies a queue component.
- Messages consist of references to documents rather than actual document content. Each message references one or more documents, and the document metadata carried in a message includes dynamic document properties. A conceptual sketch of this model follows this list.
- Messages persist on a message queue until one of the following occurs:
  - The message is consumed.
  - The documents that the message references are purged.
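To make the message model concrete, the following is a minimal sketch in Java of a queue whose messages carry only document references and dynamic document properties, never document content. The names used here (AtomQueueModel, DocumentReference, and so on) are illustrative assumptions, not Boomi's internal implementation.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Illustrative model only: these class and method names are hypothetical
// and do not correspond to Boomi's internal Shared Queue Server classes.
public class AtomQueueModel {

    // A message holds references to documents (IDs plus dynamic document
    // properties), not the document content itself.
    record DocumentReference(String documentId, Map<String, String> dynamicDocumentProperties) {}
    record Message(List<DocumentReference> documentReferences) {}

    private final Queue<Message> queue = new ArrayDeque<>();

    // Send: enqueue a message; it persists until consumed or its documents are purged.
    public void send(Message message) {
        queue.add(message);
    }

    // Get/Listen: consuming a message removes it from the queue.
    public Message consume() {
        return queue.poll();
    }

    // Purging a referenced document also removes the messages that reference it.
    public void purgeDocument(String documentId) {
        queue.removeIf(m -> m.documentReferences().stream()
                .anyMatch(ref -> ref.documentId().equals(documentId)));
    }

    public static void main(String[] args) {
        AtomQueueModel q = new AtomQueueModel();
        q.send(new Message(List.of(new DocumentReference("doc-1", Map.of("priority", "high")))));
        System.out.println("Consumed: " + q.consume());

        q.send(new Message(List.of(new DocumentReference("doc-2", Map.of()))));
        q.purgeDocument("doc-2"); // message disappears along with its purged document
        System.out.println("Queue empty after purge: " + (q.consume() == null));
    }
}
```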
-
Benefits
The benefits of having a native message queuing implementation can be categorized as follows:
- Asynchronous communication — Processes writing and reading data execute independently of each other. Requests received in real time are batched. Enqueuing requests is more efficient than spawning an individual execution for each real-time message or writing messages to disk for later batch processing in a scheduled process.
- Decoupling — Producers and consumers of messages are independent and can evolve separately at their own rate. Workflow is encapsulated into logical, reusable units — that is, separate processes — which are organized, maintained, and deployed independently.
- Multiple recipients — Messages can be sent to multiple recipients independently. Likewise, their receipt can be monitored and retried independently.
- Redundancy — Message queues can persist messages until they are fully processed.
- Resiliency — Because producers and consumers are decoupled, failures are not linked, which mitigates the risk posed by unreliable applications. A producer can continue to enqueue messages while consumers are temporarily disabled.
Typical usage scenarios
Following are typical integration scenarios for Atom Queues:
- Scenario 1: Requirement for fully disconnected process execution.
  This requirement is common in the case of services with known reliability issues. Having a separate processing path enables more granular retries. In this scenario a Boomi Atom Queue connector Send operation would send failed documents to a Point-to-Point message queue in batches. In another process, a Boomi Atom Queue connector operation — either Listen or a scheduled Get — would receive the batches.
- Scenario 2: Requirement for aggregate AS2 document processing — a batch process operating upon separate incoming documents.
  In this scenario the AS2 Shared Server connector would listen for incoming documents, and a Boomi Atom Queue connector Send operation would send them to a Point-to-Point message queue. In another process a scheduled Boomi Atom Queue connector Get operation would receive the documents in batches.
- Scenario 3: Requirement for dispersed document processing — incoming documents processed in parallel, with failed documents retried independently.
  In this scenario a primary process would use a Boomi Atom Queue connector Send operation to send documents to a Point-to-Point message queue in small batches, in some cases a single document. In another process a Boomi Atom Queue connector Listen operation would receive the batches and, in subsequent steps, route the documents for concurrent processing.
- Scenario 4: Requirement to route messages between runtimes.
  - The server runtime would act much like an enterprise service bus (ESB). It would deploy two Web Services Server processes, one using a Boomi Atom Queue connector Send operation to send documents to a Point-to-Point message queue for consumption by client runtimes, and the other using a Boomi Atom Queue connector Get operation to receive documents sent by client runtimes from a Point-to-Point message queue.
  - Each client runtime would deploy two Web Services SOAP or HTTP client processes, one using a Boomi Atom Queue connector Get operation to receive documents sent by the server runtime from a Point-to-Point message queue, and the other using a Boomi Atom Queue connector Send operation to send documents to a Point-to-Point message queue for consumption by the server runtime.
- Scenario 5: Requirement for a hub-and-spoke system in which documents are produced in the hub and made available for consumption on a variable population of spokes.
  In this scenario the publisher (hub) would have a scheduled process that executes a Boomi Atom Queue connector Send operation to send documents to a Publish/Subscribe message queue. At any given time the message queue would have zero or more subscribing message queues (spokes). Each subscriber would be executing a process in which a Boomi Atom Queue connector Listen operation would receive published documents. Because subscribers are unknown to the publisher, they can activate or deactivate at any time without requiring a modification to the publishing process. A publish/subscribe analogy is sketched after this list.
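The Atom Queue connector is configured in the platform rather than in code, but the Publish/Subscribe behavior behind Scenario 5 can be illustrated with standard JMS, one of the alternatives mentioned above. The sketch below is an analogy only, assuming an embedded ActiveMQ broker (the activemq-broker dependency on the classpath) and a hypothetical topic name; it is not the Atom Queue connector API.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

// JMS analogy for Scenario 5: one publisher (hub), any number of subscribers (spokes).
// Uses an embedded, non-persistent ActiveMQ broker purely for illustration.
public class HubAndSpokeSketch {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false")
                .createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Publish/Subscribe: every active subscriber receives its own copy of each message.
        Topic topic = session.createTopic("hub.documents"); // hypothetical topic name
        MessageConsumer spokeA = session.createConsumer(topic);
        MessageConsumer spokeB = session.createConsumer(topic);

        MessageProducer hub = session.createProducer(topic);
        hub.send(session.createTextMessage("document batch 42"));

        System.out.println("Spoke A received: " + ((TextMessage) spokeA.receive(2000)).getText());
        System.out.println("Spoke B received: " + ((TextMessage) spokeB.receive(2000)).getText());

        connection.close();
    }
}
```

The Point-to-Point scenarios map to the same analogy with session.createQueue in place of the topic, in which case each message is delivered to exactly one consumer.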
Limitations
Atom Queues are subject to the following limitations:
- Messages cannot be sent between accounts.
- Messages cannot be sent directly between runtimes.
- Atom Queues cannot be accessed directly by external clients.
Using the messaging system
Atom Queues are supported in all runtime types, including multi-tenant runtime clouds.
To use the messaging system, you need to:
- Create queue components. See the Queue components topic for more information.
- Configure the runtime’s shared queue server, if you are the runtime’s owner. The default configuration is likely to be suitable for your purposes, at least initially.
- Build and deploy processes that use Boomi Atom Queue connector operations that specify the queue components you created.
- Perform message queue management actions as needed.
Listener management
You can view the status of listener processes that are deployed to a basic runtime, runtime cluster, or runtime cloud to retrieve messages from a message queue in the Listeners panel, available under Manage > Runtime Management. In this panel you can also pause, resume, and restart listeners.
Monitoring the message queue service
The following metrics are available for monitoring a runtime’s message queue service:
- Overall status
- Message store disk usage
- Temporary data store disk usage
- Job scheduler store disk usage
- Memory usage
To monitor these metrics, use a systems management tool, such as Zabbix, together with a JMX hook (JMX is a standardized management interface for Java programs). See the System Monitoring with JMX topic for more information.
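As an illustration of what such a JMX hook looks like from the client side, the sketch below connects to a runtime's JMX endpoint and lists the MBeans it exposes. The service URL, port, and the com.boomi.* domain filter are assumptions for illustration only; consult the System Monitoring with JMX topic for the actual endpoint and attribute names exposed by your runtime.

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Minimal JMX client sketch; endpoint and MBean name pattern are assumed values.
public class QueueServiceJmxProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:5002/jmxrmi"); // assumed JMX endpoint
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();

            // Discover MBeans published by the runtime; the domain filter is a guess.
            Set<ObjectName> names = server.queryNames(new ObjectName("com.boomi.*:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
                // Individual metrics, such as the queue service's disk and memory usage,
                // can then be read with server.getAttribute(name, "<attributeName>").
            }
        }
    }
}
```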