LiveView supports a cluster-wide alerting service for backend LiveView servers. Multiple alerts can belong to an alert group, and multiple alert groups can be specified in a LiveView cluster. In contrast to release 10.4, alert processing is synchronized cluster-wide as of 10.5.0.
Each server supports exactly one alert group. On startup, the first node becomes the master for the alerts configured in its alert group (this is normal server behavior even with no clustering). When other LiveView nodes in the same alert group start up, they join the cluster and become hot spares. From an alert group perspective, the current master remains in that role until it is removed from the cluster. All servers issue the alert queries for all alerts in their group; hot spares place the alerts into a local queue and expect to see those alerts published by the master. Additionally, because which servers belong to which alert groups is configured at install time, there can be multiple masters, each with its own alert group and behaving as described above.
Alerts are executed by one LiveView server (the master) in an alert group. Other servers in the cluster (that are in the alert group) serve as hot spares and do not execute alert actions; they simply verify the master's presence, waiting to take over if the master goes down.
Configuration of LiveView cluster-wide alerting is bound by the following behavior:
- LiveView servers can belong to exactly one alert group.
- The front end LiveView server must be in its own alert group.
Front end servers do not have direct access to the tables, and do not execute alerts. If a front end server is not in its own alert group, and it becomes the master, it will not execute any alerts.
- Nodes with only EventFlow engines in LiveView clusters must be in their own alert groups, in addition to the other constraints placed upon these node configurations.
- LiveView servers participating in the alert group must be homogeneous, meaning their configuration and required resources must be identical. This includes:
  - Alerts must be executable on every server
  - Identical LiveView tables
  - Identical LiveView metadata store
- If an alert is not assigned to a specific alert group, it is assigned to the default alert group (which is configured with the empty string: null or "").
TIBCO recommends using the default alert group for the vast majority of use cases. The default alert group requires no additional configuration; all backend servers in the cluster will simply process alerts.
For the non-default case, you can specify multiple alert groups. In this scenario, you designate which nodes belong to each alert group via the node name. A node whose name does not contain the separator belongs to the default alert group. For example, if you name your node foo-ag_bar.cluster, the node is in alert group bar, which is the text between the separator and the first dot ("."). If you name your node foo.cluster, the node is in the default alert group. Note that the alert group behavior described above still applies.
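The following is a minimal sketch of how node names chosen at install time map to alert groups, assuming the default -ag_ separator and a standard epadmin node installation; the node, cluster, and application archive names are hypothetical:

# Two backend nodes join alert group "bar" (their names contain the -ag_ separator)
epadmin install node nodename=node1-ag_bar.cluster application=my-liveview-app.zip
epadmin install node nodename=node2-ag_bar.cluster application=my-liveview-app.zip
# This node's name has no separator, so it joins the default alert group
epadmin install node nodename=node3.cluster application=my-liveview-app.zip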
You must specify:
- the alert group
- the privileges to allow viewing the alert
- LiveView Engine Configuration: use to define the metadata store required for alert group setup. If you specify a JDBC metadata store, you must also specify an EventFlow JDBC Data Source Group Configuration file as shown below.
- Role to Privileges Mappings Configuration: use to define the privileges required to view and set alert groups.
Place your configuration files in the src/main/configurations folder of your node's LiveView project.
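The following layout is a minimal sketch of where these files might live in a LiveView project; the file names are hypothetical, and the JDBC data source file is only needed when you use a JDBC metadata store:

src/main/configurations/
  ldmengine.conf        # LiveView Engine Configuration (metadata store, separator)
  jdbcdatasource.conf   # EventFlow JDBC Data Source Group Configuration (JDBC store only)
  security.conf         # Role to Privileges Mappings Configuration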
Use a LiveView engine HOCON configuration file to:
- optionally set a system property to define the alert group separator (non-default scenario)
- specify the metadata store to enable resource (alerts) sharing
You must configure each LiveView server to use the same metadata store type, which must be either JDBC or TRANSACTIONAL_MEMORY, in a LiveView engine configuration file (the default store type is H2).
The example below defines a JDBC metadata store that uses a data source called myJDBCSource. If you define a JDBC data source in the engine configuration file, you must use the same JDBC data source value in a JDBC configuration file of HOCON type (see below):
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        storeType = "JDBC"
        jdbcDataSource = "myJDBCSource"
        jdbcMetadataTableNamePrefix = "LV_CLUSTER1_"
      }
    }
  }
}
This example defines a transactional memory metadata store:
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        storeType = "TRANSACTIONAL_MEMORY"
      }
    }
  }
}
The following example sets the alert group name separator to its default value:
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    systemProperties = {
      "liveview.alerts.voting.dynamicpartition.pattern.suffix" = "-ag_"
    }
  }
}
Alert Group Separator Rules:
- The separator must not be empty.
- Changing the separator value's default of "-ag_" is not recommended (a custom-separator example follows this list for illustration only).
- The separator value can contain upper and lower case letters, the digits 0 through 9, underscore (_), and hyphen (-). Note that the last character option is a hyphen, not an en or em dash.
- The same separator must be used across the entire cluster.
- Dotted alert group names are not supported.
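The following is a minimal sketch of a non-default separator, shown purely for illustration using the same system property as above; the separator value and node name are hypothetical. With this configuration, a node named foo-grp_sales.cluster would be in alert group sales.

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    systemProperties = {
      # Hypothetical custom separator; it must be identical on every node in the cluster
      "liveview.alerts.voting.dynamicpartition.pattern.suffix" = "-grp_"
    }
  }
}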
Configure an EventFlow JDBC Data Source Group configuration file only when specifying a JDBC metadata store type (see above). Match your configuration to your LiveView engine configuration's metadata store settings. This example defines one JDBC data source called myJDBCSource.
name = "mydatasource"
version = "1.0.0"
type = "com.tibco.ep.streambase.configuration.jdbcdatasource"
configuration = {
  JDBCDataSourceGroup = {
    associatedWithEngines = [ "javaengine", "otherengine[0-9]" ]
    jdbcDataSources = {
      myJDBCSource = {
        ...
      }
    }
  }
}
The Role to Privileges Mappings configuration file is required as described above. If authentication is enabled, your configuration file must define a user that includes the following privileges. These privileges apply to non-default alert groups.
- LiveViewAlertGroupSet: enables setting the alert group.
- LiveViewAlertGroupGet: enables viewing alert groups.
- LiveViewAlertGroupAll: enables both viewing and setting the alert group.
- General alert privileges: LiveView alerting privileges must also be enabled, not just the alert group privileges specifically.
In practice, you often define users with multiple LiveView privileges, depending on user role. The following example defines a role with all LiveView privileges, which includes the alert group privileges described above.
name = "my-role-mappings" version = "1.0.0" type = "com.tibco.ep.dtm.configuration.security" configuration = { RoleToPrivilegeMappings = { privileges = { admin = [ { privilege = "LiveViewAll" } ] } } }
This section discusses LiveView client-side alert rule configuration, which is required when setting up alerts across a cluster of LiveView nodes.
As mentioned previously, you can always set an alert rule to run in the default alert group, which is TIBCO's recommendation. For all other cases, you must have permissions to set the specific alert group.
If you provide an invalid node name or alert group name, LiveView assumes the name is correct (for example, for a future node of that name that may join the cluster). See Server-Disabled Alert Rules for more information.
If you configure an alert rule's alert group with a star ("*"), the alert rule runs on every node in the cluster (as it did before release 10.5.0, when alert groups were introduced).
Clients must be 10.5.0 or above in order to configure alert groups. Otherwise, clients default to the default alert group.
The other special alert group name is the node name itself. Configuring the alert rule with a node name runs the alert on that one node only.
The following shows alert rule configuration for an alert group using lv-client:
lv-client
...
addalertrule --alertgroup myalertgroup
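For comparison, the following sketch reuses the --alertgroup parameter with the special values described above; the connection arguments remain elided as shown, and the node name foo.cluster is hypothetical. The first command runs the alert rule on every node in the cluster (pre-10.5.0 behavior); the second runs it on that one node only:

lv-client
...
addalertrule --alertgroup "*"
addalertrule --alertgroup foo.cluster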