This page describes how to connect to multiple LiveView servers for table high availability.
LiveView supports configuring author-time, high-availability table groups that are backed by multiple LiveView instances and that automatically reissue queries from failed servers or servers that were administratively taken down. A table group is defined as a group of servers containing the same set of tables.
For example, in a scenario where the front-end layer is configured with table1 on server1 and server2: if a query is running on server1 and server1 goes down, the client automatically obtains the query from server2 and receives a new snapshot.
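For reference, a minimal front-end connection sketch for this scenario might list both servers as URIs, assuming server1 and server2 are reachable host names and 10080 is the LiveView port in use; the uri parameter and the full External Server Connection lvconf format are described later on this page:

<parameters>
  <!-- Placeholder host names and port for this scenario; substitute your own values. -->
  <parameter key="uri">lv://server1:10080/</parameter>
  <parameter key="uri">lv://server2:10080/</parameter>
</parameters>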
Using remote LiveView connections with services-only servers allows server-level balancing with table granularity.
Server setup options for high-availability table groups are:
- Configuring the tableGroup property for servers that will dynamically participate in the table group. The LiveView High Availability sample is an example of LiveView's high-availability table feature, which allows multiple LiveView instances to participate in a high-availability setup.

- Configuring multiple uri parameters for servers that will statically participate in a high-availability table. The LiveView Services Only sample is an example of a remote LiveView server instance. Try loading this sample in StreamBase Studio and experimenting with configuring multiple URIs.

- Configuring both the tableGroup property and the uri parameter, with the following behavior (see the sketch after this list):

  - If you add a URI for a server that is not yet participating in the table group, it is prepended to the list in the table group.

  - If you enter a URI for a server that is already configured in a table group, the server receives an undefined amount of load (between "1" and twice the load). If you restart the server after the front end is up, the restarted server's load reverts to "1".
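For example, a front-end External Server Connection that combines the tableGroup and uri options might look like the following sketch; the id, table group name, and URIs are placeholder values, and the table-group and uri parameter keys are covered in the sections below:

<external-server-connection id="RemoteLiveview" type="LiveView">
  <parameters>
    <!-- Placeholder table group name and URIs; substitute your own values. -->
    <parameter key="table-group">mytablegroup</parameter>
    <parameter key="uri">lv://localhost:10085/</parameter>
    <parameter key="uri">lv://localhost:10086/</parameter>
  </parameters>
</external-server-connection>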
See the next sections for configuration options.
Every server in a table group should have one of the LiveView recovery protocols configured to ensure all table data is consistent.
- Table group configuration for peer-based recovery
Set up the peer-based server by:

- defining the table-space id in a Table Space lvconf file, and

- setting the tableGroup property in an LDM engine HOCON file.
Table Space lvconf file example:
<liveview-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://www.streambase.com/schemas/lvconf/">
  <table-space id="mytablespace">
    <persistence peer-uri-list="*" restore-data-on-start="true"/>
  </table-space>
</liveview-configuration>
LDM Engine configuration file example:
name = "myldmengine" version = "1.0.0" type = "com.tibco.ep.ldm.configuration.ldmengine" configuration = { LDMEngine = { ldm = { tableGroup = "myTableGroup" } metadataStore = { storeType = "TRANSACTIONAL_MEMORY" } } ...
Note
Prior to Release 10.6.0, the table group for peer-based recovery was also set in the Table Space lvconf file. This method is deprecated in 10.6.0 and shown below for reference:
<liveview-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://www.streambase.com/schemas/lvconf/">
  <table-space id="mytablespace" table-group="mytablegroup">
    <persistence peer-uri-list="*" restore-data-on-start="true"/>
  </table-space>
</liveview-configuration>
- Table group configuration for log-based recovery
Set up the log-based recovery server by:

- defining the table-space id and the folder where persisted data will be written, in a Table Space lvconf file, and

- setting the tableGroup property in an LDM engine HOCON file.
Table Space lvconf file example:
<liveview-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://www.streambase.com/schemas/lvconf/">
  <table-space id="mytablespace">
    <persistence persist-data="true" folder="myfolder" restore-data-on-start="true"/>
  </table-space>
</liveview-configuration>
LDM Engine configuration file example:
name = "myldmengine" version = "1.0.0" type = "com.tibco.ep.ldm.configuration.ldmengine" configuration = { LDMEngine = { ldm = { tableGroup = "myTableGroup" } metadataStore = { storeType = "TRANSACTIONAL_MEMORY" } ...
Next, set the front end servers to match the table group defined in the recovery server using an External Server Connection lvconf file. For example:
<liveview-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://www.streambase.com/schemas/lvconf/">
  <external-server-connection id="RemoteLiveview" type="LiveView">
    <parameters>
      <parameter key="table-group">mytablegroup</parameter>
    </parameters>
  </external-server-connection>
</liveview-configuration>
Note
Prior to Release 10.6.0, the table group for log-based recovery was also set in the Table Space lvconf file. This lvconf file method is deprecated in 10.6.0 and shown below for reference:
<liveview-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://www.streambase.com/schemas/lvconf/">
  <table-space id="mytablespace" table-group="mytablegroup">
    <persistence persist-data="true" folder="myfolder" restore-data-on-start="true"/>
  </table-space>
</liveview-configuration>
For the URI configuration option, include an additional uri parameter per LiveView server that you want to participate in the table group, as described below.
<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
</parameters>
You can also set one of the following strategy options for your table group via the External Server Connection lvconf file.
- Strategy = balanced

This configuration manually specifies the default balanced strategy. It takes the optional forced-balanced key (default = false), which attempts to always ensure the systems are balanced when a server is added or when a fallen server is brought back up (see the sketch after this list).

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">balanced</parameter>
</parameters>
- Strategy = preferred or preferred-server (both behave identically)

This configuration uses the preferred strategy but keeps existing queries on the server where they originated when the preferred server comes back up, thereby avoiding unnecessary snapshotting. When the move-to-preferred key is set to true, the first preferred server in the list is used.

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">preferred</parameter>
  <parameter key="preferred">lv://localhost:10085/</parameter>
  <parameter key="preferred">lv://localhost:10087/</parameter>
  <parameter key="move-to-preferred">false</parameter>
</parameters>
- Strategy = session-aware-balanced

This configuration is the same as balanced but keeps queries from the same session on the same server when possible.

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">session-aware-balanced</parameter>
</parameters>
- Strategy = custom

Unlimited combinations are possible. Using a custom strategy may require a custom API, depending on how your LiveView servers are configured. The custom strategy is generally not recommended.
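As noted in the balanced entry above, the optional forced-balanced key can be set alongside the strategy parameter. The following is a minimal sketch using the same placeholder URIs, assuming forced-balanced is supplied as a parameter key in the same way as the strategy key:

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">balanced</parameter>
  <!-- forced-balanced shown as a parameter key: assumed here based on the other keys on this page. -->
  <parameter key="forced-balanced">true</parameter>
</parameters>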
The schema with the most backing servers "wins" if a schema conflict occurs. Updates occur whenever a table joins or leaves the group, so restarting all back ends over time with a new schema will eventually reissue all queries with the new schema. For example, in the case of five LiveView servers with schemas A, B, C, B, and D, respectively, only the tables that have schema B are used (as there are two of them, and only one server apiece using the other schemas). If the server with schema D is brought down and brought back up as schema C, then the front end will enter the TABLE_IN_PURGATORY state, where no new queries are allowed to run, but old queries can continue. This occurs because there are now two schemas backed by the same number of servers: A has one, B has two, and C has two.
If the server with schema A is brought down and back up with schema C, then all queries that were running on the B schema servers will fail over to the C schema servers (since there are now three, "beating" B with only two servers). The servers then exit the TABLE_IN_PURGATORY state.
Additionally, a server pool cannot have some servers that are at or below Release n.m AND some servers that are at or above Release n.m+x.
If the query originated on the failed server, it first sees a new snapshot and then receives data as normal. The TABLE_IN_PURGATORY state only applies to new queries; existing queries continue to run until the server exits the TABLE_IN_PURGATORY state. Afterward, a new snapshot appears with the new schema (or an error if the query does not run with the new schema). Manually re-running the same query is not required, although you may have to execute an updated query if the schema change affects the validity of your query.