LiveView servers can be configured to share metadata resources when they use the same metadata store and are in the same cluster.
Upon server startup, the JSON-formatted resource file in the src/main/resources folder of the first deployed LiveView node is read, initialized, and loaded into the metadata store. Other nodes never load the metadata; it is loaded into the metadata store only once in the cluster's lifetime.
A LiveView node may contain at most one JSON-formatted resource file in its resources directory, named lvmetadata.json. Prior to 10.5.0, the default file name was resources.json.
By default, LiveView uses a local file-based store backed by SQLite. LiveView supports the following metadata store types:
| Store type | Resource sharing supported | Store default | Resource loading during startup via at-startup resource (lvmetadata.json) file supported |
|---|---|---|---|
| LOCAL | No | Yes | No |
| H2 | No | No | No |
| TRANSACTIONAL_MEMORY | Yes | No | Yes |
| JDBC | Yes | No | No |
The LiveView client API import and export methods can be used to import resources from one node to another:

- Importing requires servers of identical version.
- Exporting resources is useful for backup purposes in the unlikely event a node or cluster fails and resources must be restored.
Shared resources between LiveView servers include:

- LiveView alerts.
- LiveView workspaces.
- LiveView Web resources. For supported resources, refer to the LiveView Web documentation.
The following describes options for sharing resources between nodes when all nodes in the cluster contain the same metadata store type, and all servers are the same version.
- Exporting data out of LiveView

  Given a server, LiveView1, use the Java API or the lv-client export command to export resources into a file (for example, a file called myconfiguration.json). Configuration data from LiveView1 is then serialized and stored in the file.

- Importing the data using a command (option 1)

  Given another server, LiveView2, use the import command to import myconfiguration.json to LiveView2. Note that this option works once LiveView2 is up and running, and that LiveView2 must be the same version as LiveView1.

- Importing the data using at-startup memory resources when using transactional memory

  This option runs during startup, before the LiveView server comes up: the server first initializes the data and then comes up. When using the transactional memory store type, you can load metadata into the server at startup. The load only succeeds if the starting node is the very first node in the cluster; when subsequent nodes start, they identify that they are not the first node and do not reload the metadata. Place the specified file in the src/main/resources folder of the LiveView fragment that will become LiveView2.
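As a sketch, the at-startup option could pair the lvmetadata.json file with a LiveView Engine configuration that selects the transactional memory store. This example uses only properties that appear elsewhere on this page (storeType under metadataStore); the fragment name is illustrative:

```
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        // Transactional memory supports loading lvmetadata.json at startup.
        storeType = "TRANSACTIONAL_MEMORY"
      }
    }
  }
}
```

With this in place, the first node to start loads lvmetadata.json into the shared store, and subsequent nodes skip the load.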
Setting up resource sharing requires the following HOCON configuration files, which you must place in the src/main/configurations folder of your node's LiveView project.
- LiveView Engine

  Use to:

  - set the storeType property for the LiveView nodes in the cluster when not using the default local store (H2, JDBC, or transactional memory).
  - optionally specify a system property to rename the default resource file.

- EventFlow JDBC Data Source Group

  Use only to define the JDBC data store type when the LiveView Engine configuration file's storeType property is set to JDBC.

- Role to Privileges Mappings

  If authentication is enabled, and only if you want to import and export data, use this file to map a user role to the necessary privileges to allow importing and exporting resources between LiveView servers.
You must configure each server with the same storeType value in a LiveView Engine configuration file.

The example below defines the storeType property with a value of JDBC and uses the jdbcDataSource property to define a source called myJDBCSource. If you define a JDBC data source in the engine configuration file, you must use the same JDBC data source value in a JDBC configuration file (see below).

Setting the storeType to LOCAL, TRANSACTIONAL_MEMORY, or H2 does not require additional metadata store configuration.
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        storeType = "JDBC"
        jdbcDataSource = "myJDBCSource"
        jdbcMetadataTableNamePrefix = "LV_CLUSTER1_"
      }
    }
  }
}
To optionally rename the default lvmetadata.json file, set the liveview.metadata.resource.file.name system property as shown in the following example:
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      ...
      systemProperties = {
        "liveview.metadata.resource.file.name" = "myconfiguration.json"
      }
      ...
Configure an EventFlow JDBC Data Source Group configuration file when the LiveView engine configuration file specifies a JDBC storeType (see above).

A JDBC configuration file can define multiple JDBC data sources. To enable resource sharing between LiveView servers, your JDBC configuration file must contain at least one data source, and its name must match the value defined in the engine configuration file. This example defines one JDBC source called myJDBCSource.
name = "mydatasource"
version = "1.0.0"
type = "com.tibco.ep.streambase.configuration.jdbcdatasource"
configuration = {
  JDBCDataSourceGroup = {
    associatedWithEngines = [ "javaengine", "otherengine[0-9]" ]
    jdbcDataSources = {
      myJDBCSource = {
        ...
      }
    }
  }
}
If authentication is enabled, and only if you want to import and export data, the Role to Privileges Mappings configuration file is required to allow importing and exporting resources between LiveView nodes. Your configuration file must define a user that includes the following or higher-level privileges:
- LiveViewMetadataImport

  Enables importing data into the LiveView server.

- LiveViewMetadataExport

  Enables exporting data from the LiveView server.
In practice you often define users with multiple LiveView privileges, depending on user role. The following example defines a user with all LiveView privileges, which includes both privileges described above.
name = "my-role-mappings"
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.security"
configuration = {
  RoleToPrivilegeMappings = {
    roles = {
      admin = {
        privileges = [
          { privilege = "LiveViewAll" }
        ]
      }
    }
  }
}
Use the Java API or the LiveView lv-client importmetadata and exportmetadata commands to import and export resources between LiveView servers (for the non at-startup option).
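As a sketch, an export/import round trip between two servers might look like the following from the command line. The command names come from this page; the -u URI flag and exact argument order are assumptions, so check lv-client's help output for the precise syntax on your release:

```
# Export metadata from LiveView1 (URI and argument syntax are illustrative).
lv-client -u lv://liveview1-host:10080 exportmetadata myconfiguration.json

# Import the same file into LiveView2, which must be up and running
# and the same LiveView version as LiveView1.
lv-client -u lv://liveview2-host:10080 importmetadata myconfiguration.json
```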
The following steps describe how to migrate resources from older to newer LiveView servers.
Use the following steps to migrate resources when your server's metadata store type is JDBC or transactional memory (in this example, the older server has an arbitrary version, x.y, and the newer server is x.z):
1. Export your server's metadata to lvmetadata.json.
2. Start server x.y with the default local store configured. It will be empty.
3. Import lvmetadata.json to server x.y.
4. Move server x.y's liveview.sqlite.db to your new server, x.z.
5. Start server x.z and export its metadata to lvmetadata.json.
6. Start server x.z again and import lvmetadata.json to server x.z.
Use the following steps to migrate resources from older to newer LiveView servers already configured with a local store:
1. Export the server x.y metadata to lvmetadata.json.
2. Start the server x.z with the default local metadata store. The server will start empty.
3. Copy liveview.sqlite.db (from your LiveView project's src/main/resources directory) from server x.y to server x.z.
4. Start server x.z again, then export the metadata to a file.
5. Change the metadata store type to JDBC or transactional memory on server x.z, then import the lvmetadata.json file.
The transactional memory metadata store type is only supported on homogeneous LiveView clusters (meaning the cluster cannot contain a mix of LiveView and StreamBase fragments when using a transactional memory metadata store). It is a best practice to use either the JDBC or local metadata store type when your cluster contains both LiveView and StreamBase fragments.