Cluster-Wide Alert Configuration

Overview

LiveView supports a cluster-wide alerting service for backend LiveView servers. Multiple alerts can belong to an alert group, and multiple alert groups can be specified in a LiveView cluster.

Each server supports exactly one alert group. On startup, the first node becomes the master for the alerts configured in its alert group (this is normal server behavior even with no clustering). When other LiveView nodes in the same alert group start up, they join the cluster and become hot spares. From an alert group perspective, the current master remains in that role until it is removed from the cluster. All servers issue the alert queries for all alerts in their group; hot spares put the alerts into a local queue and expect to see those alerts published by the master. Additionally, because the assignment of servers to alert groups is done at install time, there can be multiple masters, each with its own alert group, behaving as described above.

Alert actions are executed by one LiveView server (the master) in an alert group. Other servers in the cluster (that are in the alert group) serve as hot spares and do not execute alert actions; they simply verify the master's presence, waiting to take over if the master goes down.

Configuration of LiveView cluster-wide alerting is bound by the following behavior:

  • A LiveView server can belong to exactly one alert group

  • The front end LiveView server must be in its own alert group

    Front end servers do not have direct access to the tables in the data layer and cannot execute alerts on those tables. If a front end server is not in its own alert group and becomes the master, it does not execute any alerts associated with the user tables.

  • LiveView servers participating in the alert group must be homogeneous, meaning their configuration and required resources must be identical. This includes:

    • Alerts must be executable on every server

    • Identical LiveView tables

    • Identical LiveView metadata store

If an alert is not assigned to a specific alert group, it is assigned to the default alert group (which is configured with the empty string: null or "").

Server Configuration

It is a best practice to use the default alert group for the vast majority of use cases. The default alert group requires no additional configuration; all backend servers in the cluster simply process alerts.

For the non-default case, you can specify multiple alert groups. In this scenario, you designate which nodes belong to each alert group through the node name. A node whose name does not contain the separator belongs to the default alert group. For example, suppose you name your node foo-ag_bar.cluster: bar is the alert group the node is in, because it appears between the separator and the first dot ("."). If you name your node foo.cluster, your node is in the default alert group. The alert group behavior described above still applies, as illustrated in the sketch below.
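
The following is a hedged illustration of how node names determine alert groups with the default "-ag_" separator; the node and cluster names are hypothetical examples only:

lv1-ag_finance.mycluster     alert group "finance"
lv2-ag_finance.mycluster     alert group "finance" (master or hot spare)
lv3.mycluster                default alert group (no separator in the node name)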

You must specify:

  • the alert group

  • the privileges to allow viewing the alert, if authentication is enabled.

LiveView Engine Configuration

Use a LiveView engine configuration file to:

  • define the metadata store required for alert group setup. If you specify a JDBC metadata store, you must also specify an EventFlow JDBC Data Source Group configuration file, as shown in the LiveView metadata store configuration section.

Role to Privileges Mappings Configuration

If authentication is enabled, use a role to privileges mappings configuration file to:

  • define LiveView privileges for alerts.

  • define LiveView privileges to set and get alert groups (non-default scenario). A hedged configuration sketch follows this list.
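
The following is a minimal sketch of a role to privileges mappings configuration, assuming the com.tibco.ep.dtm.configuration.security HOCON type and its RoleToPrivilegeMappings root object; the role name and the alert-related privilege names shown are placeholders, so check the security configuration reference for the exact privilege names supported by your release:

name = "my-role-mappings"
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.security"
configuration = {
  RoleToPrivilegeMappings = {
    privileges = {
      // "lvalerts" is a hypothetical role name; the privilege names below
      // stand in for the LiveView alert and alert group privileges.
      lvalerts = [
        { privilege = "LiveViewAlertList" }
        { privilege = "LiveViewAlertSet" }
      ]
    }
  }
}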

Place your configuration files in the src/main/configurations folder of your node's LiveView project.

LiveView Engine Configuration

Use a LiveView engine HOCON configuration file to:

  • optionally set a system property to define the alert group separator (non-default scenario)

  • specify the metadata store to enable resource (alerts) sharing

You must configure each LiveView server to use the same metadata store type, which must be either JDBC or TRANSACTIONAL_MEMORY, in a LiveView engine configuration file (the default store type is H2).

See the metadata store configuration section for details.
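
As a minimal sketch, assuming the metadataStore object and its storeType property under LDMEngine (verify the exact property names in the metadata store configuration reference), a transactional memory metadata store could be configured as follows:

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    // Assumed property names; the value is one of the supported
    // store types (here, TRANSACTIONAL_MEMORY).
    metadataStore = {
      storeType = "TRANSACTIONAL_MEMORY"
    }
  }
}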

The following LiveView engine configuration shows the system property that defines the alert group name separator, set to its default value:

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
        systemProperties = {    
      "liveview.alerts.voting.dynamicpartition.pattern.suffix" = "-ag_" 
      
      }
    }
  }

Alert Group Separator Rules:

  • the separator must not be empty

  • changing the separator from its default value of "-ag_" is not recommended (see the illustration after this list).

  • the separator value can contain only uppercase and lowercase letters, the digits 0 to 9, the underscore (_), and the hyphen (-). Note that the last allowed character is a hyphen, not an en dash or em dash.

  • the same separator must be used across the entire cluster

  • alert group names with dots in them are not supported.
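
For illustration only, a custom separator that satisfies these rules is simply a different value for the same systemProperties entry shown above; the value "-grp_" is hypothetical, and changing the default remains not recommended:

  "liveview.alerts.voting.dynamicpartition.pattern.suffix" = "-grp_"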

Client Configuration

This section discusses LiveView client-side alert rule configuration, which is required when setting up alerts across a cluster of LiveView nodes.

As mentioned previously, you can always set an alert rule to run in the default alert group, which is a best practice. For all other cases, when authentication is enabled, you must have permissions to set the specific alert group.

If you provide an unknown alert group name, LiveView assumes the name is correct (for example, for a future node of that name that may join the cluster).

If you configure an alert rule's alert group with a star ("*"), the alert runs its actions on every node in the cluster.

The other special alert group name is the node name itself. Configuring the alert rule to run on a node name runs the alert on that one node only.

The following shows alert rule configuration for an alert group using lv-client:

lv-client
...
addalertrule --alertgroup myalertgroup
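
The same --alertgroup option also accepts the two special alert group names described above. As a hedged illustration, foo.cluster is the hypothetical node name used earlier; quote the star if your shell would otherwise expand it:

addalertrule --alertgroup "*"
addalertrule --alertgroup foo.cluster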