The cluster awareness feature lets you designate certain operators or adapters in your EventFlow module to start or stop based on runtime conditions in the cluster of nodes currently running them.
Note
To prevent redundancy, this article henceforth uses the term "adapter" where "operator or adapter" would be more accurate.
Under standard circumstances, all of the adapters in an EventFlow module start when the JVM engine that runs the module starts; likewise, all adapters stop when the engine stops. If you have structured your EventFlow module using separate StreamBase containers, then all of a single container's adapters normally start and stop when the container starts and stops.
But there are cases where you might need an adapter to start and stop independently of its containing module or StreamBase container. These cases include the following:
- In general, an adapter that connects to an external data source might need to be managed according to its containing node's active or inactive participation in its cluster.
- In an active-standby HA configuration, only adapters on the active node should be started, with adapters and operators on standby nodes disabled. It is true that you can manage this case by stopping the enclosing StreamBase container on standby nodes, but you might have architectural reasons to stop only externally connected cluster-aware adapters in the container.
- One EventFlow module and node might be tasked with periodic polling of data from an external database, using a Metronome operator to avoid unnecessary load on the database. When data is fetched from the database, it is stored in a Query Table to be distributed across all nodes in the cluster. On failure of the polling node, an alternative node becomes responsible for the polling operation.
Starting with TIBCO Streaming 10.5.0, cluster awareness is an attribute of:
- The Distributed Router operator from the Palette view
In addition, you can add cluster awareness to Palette operators such as the Metronome or Heartbeat operators by following such an operator in the EventFlow sequence with a Cluster State Filter operator.
You can determine the cluster awareness status of a currently running operator or adapter with the epadmin display adapter or display operator commands. For example:
epadmin --ad=49865 display adapter --path=ReadFile1
Engine = com_tibco_sb_sample_adapter_csvreader_csvreader_sample0
Path = default.ReadFile1
Status = SHUTDOWN
Adapter = TIBCO StreamBase CSV File Reader
Cluster Aware = Start with module
epadmin --ad=19118 display operator --path=SendEMSMsg
Engine = com_sample_adapter_jms_EMSSimpleSample0
Path = default.SendEMSMsg
Type = java
Status = STARTED
Cluster Aware = Active on single node
This section describes the options you can set on the Cluster Aware tab of the Properties view.
The start policy is the primary designation of cluster awareness for the current operator or adapter. The settings are:
- Start with module

  This operator or adapter is automatically started at the same time as its containing module. This is the default state; this setting was the only option for TIBCO Streaming releases before 10.5.0.
- Don't automatically start

  This operator or adapter is loaded with the JVM engine, but does not start until you send an epadmin container resume command (or its sbadmin equivalent), or until you start the component with StreamBase Manager. Use epadmin display container or sbc list -C containerName to determine the name of the operator or adapter to address.

- Active on a single node in the cluster

  If the StreamBase Runtime determines that the containing local node is the single active node in the cluster, this operator or adapter is started. If the local node is not the single active node, the operator or adapter is stopped.
- Active on an availability zone

  If you have structured your cluster into two or more availability zones, you can specify the name of a target zone in the Availability Zone field on this Properties view tab. In this case, if the StreamBase Runtime determines that the node that contains this operator or adapter is a member of the designated zone, the operator or adapter is started. If the node is not a member, the operator or adapter is shut down.
  Specify availability zones in a Runtime Node configuration file where the HOCON type is com.tibco.ep.dtm.configuration.node. The name of the default availability zone is filled in for this field.

- Active on multiple nodes in the cluster
  Specify an integer number of desired active nodes in the Number of active nodes field on this Properties view tab. If the StreamBase Runtime determines that the node that contains this operator or adapter is active, and the number of active nodes is within the specified limit, this operator or adapter is started. If the local node is not one of the currently active nodes, this operator or adapter is shut down.
- Active with specified partitions

  If you have structured your cluster into partitions, you can specify the names of one or more partitions in the Active Partitions control on this Properties view tab. If the StreamBase Runtime determines that one of the designated partitions has become active on the local node, this operator or adapter is started. If any of the designated partitions becomes active on a remote node, or becomes unavailable, this operator or adapter is shut down.
  Specify partitions in a Runtime Node configuration file where the HOCON type is com.tibco.ep.dtm.configuration.node.
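The availability zone and partition names entered on the Cluster Aware tab must match names defined in a node configuration of this HOCON type. The fragment below is a hedged sketch only: the configuration name, node name A.mycluster, zone name zone1, and partition name P1 are placeholders, and the property names shown should be verified against the NodeDeploy configuration reference for your release.

```hocon
name = "my-node-deploy"    // placeholder configuration name
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.node"
configuration = {
  NodeDeploy = {
    nodes = {
      "A.mycluster" = {    // placeholder node name
        availabilityZoneMemberships = {
          zone1 = { }      // this node is a member of zone "zone1"
        }
      }
    }
    availabilityZones = {
      zone1 = {
        dataDistributionPolicy = "static"
        staticPartitionPolicy = {
          staticPartitions = {
            P1 = { }       // placeholder partition name
          }
        }
      }
    }
  }
}
```

A zone or partition declared this way can then be referenced by name in the Availability Zone or Active Partitions controls described below.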
In addition to the start policy specified above, you can have this operator or adapter shut down if quorum is lost for any availability zone configured for the containing node. If quorum is regained, the operator or adapter is restarted.
The Availability Zone control is inactive unless you select Active on an availability zone. Specify the name of the zone in which you want this operator or adapter to be active. The zone name you enter must be defined in a Runtime Node configuration file.
The Number of active nodes control is inactive unless you select Active on multiple nodes in the cluster. Specify an integer of 2 or more.
The Active Partitions control is inactive unless you select Active with specified partitions. The partition names you specify must be defined in a Runtime Node configuration file.