LiveView Metadata Store

Overview

LiveView servers can be configured to share metadata resources when they use the same metadata store and are in the same cluster.

Upon server startup, the JSON-formatted resource file in the src/main/resources folder of the first deployed LiveView node is read, initialized, and loaded into the metadata store. Subsequent nodes do not load the metadata again; it is loaded into the metadata store only once in the cluster's lifetime.

A LiveView node may contain at most one JSON-formatted resource file, named lvmetadata.json, in its resources directory. Prior to 10.5.0, the default file name was resources.json.

By default, LiveView uses a local file-based store backed by SQLite. LiveView supports the following metadata store types:

Store type            Resource sharing supported   Default store   At-startup resource file (lvmetadata.json) loading supported
LOCAL                 No                           Yes             No
H2                    No                           No              No
TRANSACTIONAL_MEMORY  Yes                          No              Yes
JDBC                  Yes                          No              No

Import/Export Considerations

The LiveView client API import and export methods can be used to move resources from one node to another:

  • Importing requires servers of identical version.

  • Exporting resources is useful for backup purposes in the unlikely event a node or cluster fails and resources must be restored.

Resources Shared when Using Transactional Memory or JDBC

Shared resources between LiveView servers include:

Resource Sharing Workflow

The following describes options for sharing resources between nodes when all nodes in the cluster use the same metadata store type and all servers are the same version.

Exporting data out of LiveView

Given a server, LiveView1, use the Java API or the lv-client exportmetadata command to export resources into a file (for example, a file called myconfiguration.json). Configuration data from LiveView1 is then serialized and stored in that file.

Importing the data using a command (option 1)

Given another server, LiveView2, use the lv-client importmetadata command to import myconfiguration.json into LiveView2. Note that this option works only once LiveView2 is up and running, and that LiveView2 must be the same version as LiveView1.

Importing the data using at-startup memory resources when using transactional memory (option 2)

This option runs during startup: the LiveView server first initializes the data and then comes up. When using the transactional memory store type, you can load metadata into the server at startup. This only succeeds if the starting node is the very first node in the cluster; when subsequent nodes start, they detect that they are not the first node and do not reload the metadata.

Place the exported file in the src/main/resources folder of the LiveView fragment that will become LiveView2. The file name must match the configured resource file name (lvmetadata.json by default, or the name set with the liveview.metadata.resource.file.name system property).

Configuration

Setting up resource sharing requires the following HOCON configuration files, which you must place in the src/main/configurations folder of your node's LiveView project.

LiveView Engine

Use to:

  • set the storeType property for the LiveView nodes in the cluster when not using the default local store (H2, JDBC, or transactional memory).

  • optionally specify a system property to rename the default resource file.

EventFlow JDBC Data Source Group

Use only to define the JDBC data source when the LiveView Engine configuration file's storeType property is set to JDBC.

Role to Privileges Mappings

If authentication is enabled, and only if you want to import and export data, use this file to map a user role to the necessary privileges to allow importing and exporting resources between LiveView servers.

Engine Configuration

You must configure each server with the same storeType value in a LiveView Engine configuration file.

The example below sets the storeType property to JDBC and uses the jdbcDataSource property to reference a data source called myJDBCSource. If you reference a JDBC data source in the engine configuration file, you must define a data source with the same name in a JDBC configuration file (see below).

Setting the storeType to LOCAL, TRANSACTIONAL_MEMORY, or H2 does not require additional metadata store configuration.

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        storeType = "JDBC" 
        jdbcDataSource = "myJDBCSource"        
        jdbcMetadataTableNamePrefix = "LV_CLUSTER1_"       
      }
    }
  }
}
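
For the other store types, only the storeType value is needed; as noted above, no additional metadata store configuration is required. The following is a minimal sketch of the same engine configuration using the transactional memory store type (the engine name is reused from the example above for illustration):

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        # Transactional memory supports resource sharing and at-startup
        # loading of lvmetadata.json (see the store type table above).
        storeType = "TRANSACTIONAL_MEMORY"
      }
    }
  }
}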

Optional Resource File Renaming

To optionally rename the default lvmetadata.json file, set the liveview.metadata.resource.file.name system property as shown in the following example:

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
    ...
    systemProperties = { 
      "liveview.metadata.resource.file.name" = "myconfiguration.json" }
    ...

JDBC Configuration

Configure an EventFlow JDBC Data Source Group configuration file when the LiveView engine configuration file specifies a JDBC storeType (see above).

A JDBC configuration file can define multiple JDBC data sources. To enable resource sharing between LiveView servers, your JDBC configuration file must contain at least one data source whose name matches the jdbcDataSource value defined in the engine configuration file. The following example defines one JDBC data source called myJDBCSource.

name = "mydatasource"
version = "1.0.0"
type = "com.tibco.ep.streambase.configuration.jdbcdatasource"
configuration = {
  JDBCDataSourceGroup = {
    associatedWithEngines = [ "javaengine", "otherengine[0-9]" ]
    jdbcDataSources = {    
      myJDBCSource = {    
        ...
      }
    }
  }
}

Privileges Configuration

If authentication is enabled, and only if you want to import and export data, the Role to Privileges Mappings configuration file is required to allow importing and exporting resources between LiveView nodes. Your configuration file must define a role that includes the following privileges, or a higher-level privilege that contains them:

LiveViewMetadataImport

Enables importing data into the LiveView server.

LiveViewMetadataExport

Enables exporting data from the LiveView server.

In practice, you often define roles with multiple LiveView privileges, depending on the user's role. The following example defines a role with all LiveView privileges, which includes both privileges described above.

name = "my-role-mappings"
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.security"
configuration = {
  RoleToPrivilegeMappings = {
    roles = {
      admin = {
        privileges = [
          {
            privilege = "LiveViewAll"
          }
        ]
      }
    }
  }
}
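
If you want to grant only the metadata privileges, a narrower mapping works as well. The following is a minimal sketch that maps a hypothetical lvoperator role (the role name is illustrative) to just the two privileges described above:

name = "my-role-mappings"
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.security"
configuration = {
  RoleToPrivilegeMappings = {
    roles = {
      # Hypothetical role granted only the metadata import/export privileges.
      lvoperator = {
        privileges = [
          {
            privilege = "LiveViewMetadataImport"
          },
          {
            privilege = "LiveViewMetadataExport"
          }
        ]
      }
    }
  }
}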

Commands

Use the Java API or the lv-client importmetadata and exportmetadata commands to import and export resources between LiveView servers (for any option other than at-startup loading).

Migration

The following steps describe how to migrate resources from older to newer LiveView servers.

Use the following steps to migrate resources when your server's metadata store type is JDBC or transactional memory (in this example, the older server has arbitrary version x.y and the newer server has version x.z):

  1. Export your server's metadata to lvmetadata.json.

  2. Start server x.y with the default local store configured. It will be empty.

  3. Import lvmetadata.json to server x.y.

  4. Move server x.y's liveview.sqlite.db to your new server, x.z.

  5. Start server x.z and export its metadata to lvmetadata.json.

  6. Start server x.z again, now configured with your JDBC or transactional memory store type, and import lvmetadata.json into server x.z.

Use the following steps to migrate resources from an older to a newer LiveView server when the older server is configured with a local store:

  1. Export the server x.y metadata to lvmetadata.json.

  2. Start the server x.z with the default local metadata store. The server will start empty.

  3. Copy the liveview.sqlite.db file (from your LiveView project's src/main/resources directory) from server x.y to server x.z.

  4. Start server x.z again, then export the metadata to a file.

  5. Change the metadata store type to JDBC or transactional memory on server x.z, then import the lvmetadata.json file.

Restrictions

The transactional memory metadata store type is only supported on homogeneous LiveView clusters (meaning the cluster cannot contain a mix of LiveView and StreamBase fragments when using a transactional memory metadata store). It is a best practice to use either the JDBC or local metadata store type when your cluster contains both LiveView and StreamBase fragments.