======================================================================
LiveView Peer-Based Recovery Sample for TCM eFTL Message Bus
======================================================================

Overview:

This sample demonstrates the LiveView delayed peer-based recovery capability when using the eFTL message bus service. TIBCO Cloud Messaging (TCM) can optionally use eFTL. See the TIBCO Streaming eFTL adapter documentation to ensure the eFTL dependencies are properly configured.

Message buses such as Kafka support replaying older messages based on an offset, or sequence number. LiveView recovery with such buses is fairly simple and does not need the delayed recovery capability. See the Kafka recovery sample for more information.

Buses such as eFTL and FTL, even when configured with persistence, do not support replaying older messages that have already been delivered and acknowledged. However, eFTL and FTL do provide monotonically increasing storage message ids that can be used to uniquely identify messages. For buses of this kind, LiveView can use delayed peer-based recovery with perfect fidelity, as demonstrated in this sample.

To avoid missing data from eFTL while recovery is in progress, the sample first subscribes to the eFTL channel. Only after the subscription succeeds does the sample start recovering data from peer nodes, retaining any incoming eFTL data that arrives while peer recovery is ongoing.

During startup, a LiveView node looks for peer nodes in the same cluster and starts the delayed recovery process if at least one peer node is available. After peer recovery is complete, retained data from the eFTL subscription with storage message ids higher than those already recovered from the peer is sent to the LiveView table, and publishing transitions to the real-time data coming from the bus. The monotonically increasing eFTL storage message id is what allows LiveView to neither skip any data nor publish the same data twice.

Sample Requirements:

To run this sample, an available eFTL service is required. The sample uses eFTL 6.5 and assumes the eFTL server is running at ws://localhost:8585 by default.

To set up a local eFTL/FTL service, make sure you have FTL 6.5 and eFTL 6.5 installed. The FTL installation directory contains a samples directory (such as /opt/tibco/ftl/6.5/samples) with a readme.txt that documents how to start an eFTL server on the default port. Starting an eFTL server is required to run this sample.

For more information about eFTL configuration, see:
https://docs.tibco.com/products/tibco-eftl-enterprise-edition-6-5-0

Running the Sample:

After starting the eFTL/FTL service, the next step is to publish messages to the service: right-click lv_sample_eft_message_publisher and run it as an EventFlow Fragment. Only one publisher instance is needed.

Once the eFTL service and publisher are running:

1) Go to Run > Run Configurations and create a new LiveView fragment run configuration.
2) Click Browse and select this sample project as the LiveView Project.
3) Set the LiveView engine port to 10080.
4) Make sure only engine.conf is selected as the configuration file (by default every file is selected).
5) Go to the Node tab and enter peer1$(SeqNum:_) as the node name.
6) Click Run.

The above steps start the first node in the cluster. Repeat the steps for the peer node, but set the LiveView engine port to 10081 and use peer2$(SeqNum:_) for the node name. Expect to see data available on both localhost:10080 and localhost:10081; a quick way to check this is sketched below.
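One way to confirm that both nodes are serving the same data is to query them with the lv-client command-line tool from a StreamBase command prompt. The commands below are an illustrative sketch only; the ItemsSales table name comes from this sample, but the exact lv-client options may vary by release, so check the lv-client documentation if needed.

    # Query the ItemsSales table on each node; once recovery is complete,
    # both nodes should return the same data (lv-client usage may vary by release).
    lv-client -u lv://localhost:10080 "select * from ItemsSales"
    lv-client -u lv://localhost:10081 "select * from ItemsSales"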
Sample Script Details:

The script lv-test-consistency.sh compares data between the two Data layer (DL) servers running in the cluster to validate that both DLs have identical data.

The script lv-test-no-duplicates.sh checks whether any row was published more than once on the DLs. The ItemsSales table contains the field updateCount, which increments by 1 every time the row is published. The script validates that no row has an updateCount greater than 1.

Running the servers as installed applications:

Deploy projects are included in the sample to make it easier to create deployable applications. For details on building, installing, and starting applications, see the StreamBase documentation.
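For reference, a minimal sketch of that workflow using epadmin is shown below. The node names, cluster name, and application archive name are assumptions for illustration only; the actual archive name comes from the deploy project's build, and the exact epadmin parameters are described in the StreamBase documentation.

    # Build the deployable application archive from the deploy project
    # (the archive name below is illustrative, not the sample's actual artifact name).
    mvn install

    # Install and start two nodes in the same cluster so peer-based recovery can occur.
    epadmin install node nodename=peer1.recovery application=target/lv-sample-eftl-recovery.zip
    epadmin servicename=peer1.recovery start node
    epadmin install node nodename=peer2.recovery application=target/lv-sample-eftl-recovery.zip
    epadmin servicename=peer2.recovery start node

After both nodes are up, the same kind of check that lv-test-no-duplicates.sh performs can also be run manually; any row returned by the following (again, an illustrative sketch) was published more than once:

    lv-client -u lv://localhost:10080 "select * from ItemsSales where updateCount > 1"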