Multiple Remote Data Layer Back Ends

This page describes how to connect to multiple Live Datamart servers as back ends.

Multiple Remote Data Layer Back Ends Overview

LiveView supports author-time configuration of high-availability tables that are backed by multiple Live Datamart instances and that automatically reissue queries from failed servers to surviving ones.

For example, suppose the front-end layer is configured with table1 backed by both server1 and server2. If a query is running against server1 and server1 goes down, the query is reissued against server2 and the client receives a new snapshot.

Using remote LiveView connections with services-only servers allows balancing only at the server level, with table granularity.

Configuring the lvconf file requires setting multiple uri parameters, plus some strategy-dependent configuration. In the simplest case, you include one additional uri parameter per Live Datamart server, as shown below.

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
</parameters>

The LiveView Services Only sample is an example of a remote LiveView server instance. Try loading this sample in StreamBase Studio and experimenting with configuring multiple URIs.

Multiple Back End Configuration Strategy

Configure multiple Live Datamart server back ends by setting multiple uri parameters, then select one of the following strategy options with the strategy parameter.

Strategy = balanced

This configuration explicitly specifies the balanced strategy, which is the default. The strategy takes an optional forced-balanced key (default: false) that, when set to true, attempts to keep the systems balanced whenever a server is added or a failed server is brought back up.

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">balanced</parameter>
</parameters>
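
For example, a sketch of the same configuration with the optional forced-balanced key set, so that the strategy attempts to rebalance whenever a server is added or a failed server rejoins (the key name follows the description above):

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">balanced</parameter>
  <parameter key="forced-balanced">true</parameter>
</parameters>
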
Strategy = preferred, or preferred-server (both behave identically)

This configuration uses the preferred strategy. With move-to-preferred set to false, as in the example below, existing queries remain on the server where they are currently running when the preferred server comes back up, thereby avoiding unnecessary snapshotting. When the move-to-preferred key is set to true, queries are moved to the first available preferred server in the list.

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">preferred</parameter>
  <parameter key="preferred">lv://localhost:10085/</parameter>
  <parameter key="preferred">lv://localhost:10087/</parameter>
  <parameter key="move-to-preferred">false</parameter>
</parameters>
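
By contrast, a sketch of the same configuration with move-to-preferred set to true, which moves queries back to the first available preferred server when it returns, at the cost of a new snapshot for each moved query:

<parameters>
  <parameter key="uri">lv://localhost:10085/</parameter>
  <parameter key="uri">lv://localhost:10086/</parameter>
  <parameter key="uri">lv://localhost:10087/</parameter>
  <parameter key="strategy">preferred</parameter>
  <parameter key="preferred">lv://localhost:10085/</parameter>
  <parameter key="preferred">lv://localhost:10087/</parameter>
  <parameter key="move-to-preferred">true</parameter>
</parameters>
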
Strategy = custom

A custom strategy allows unlimited combinations. Using a custom strategy may require a custom API, depending on how your Live Datamart servers are configured. The custom strategy is generally not recommended.

Limitations

If a schema conflict occurs, the schema with the most backing servers "wins." The winning schema is recalculated whenever a back-end table joins or leaves, so restarting all back ends over time with a new schema eventually reissues all queries with the new schema.

For example, consider five Live Datamart servers with schemas A, B, C, B, and D, respectively. Only the tables with schema B are used, because two servers back that schema and only one server apiece backs each of the others. If the server with schema D is brought down and brought back up with schema C, the front end enters the TABLE_IN_PURGATORY state, in which no new queries are allowed to run but existing queries can continue. This occurs because two schemas are now backed by the same number of servers: A has one, B has two, and C has two.

If the server with schema A is then brought down and back up with schema C, all queries that were running on the schema B servers fail over to the schema C servers (there are now three of them, beating B's two). The servers then exit the TABLE_IN_PURGATORY state.

Additionally, a server pool cannot mix servers at Release 2.1 or earlier with servers at Release 2.2 or later.

When Failover Occurs

If the query originated on the failed server, the client first sees a new snapshot and then receives data as normal. The TABLE_IN_PURGATORY state applies only to new queries; existing queries continue to run until the server exits the TABLE_IN_PURGATORY state. Afterward, a new snapshot appears with the new schema (or an error, if the query does not run with the new schema). Manually re-executing the same query is not required, though you may have to execute an updated query if the schema change affects the validity of your query.