By default, user accounts on Linux have a setting equivalent to ulimit -n 1024, which caps the number of concurrently open file descriptors at 1,024 across all processes launched by that user. However, LiveView Server opens a new file descriptor for each new services-layer connection, and the upper boundary set by LiveView is 3000, which is higher than the default per-user Linux value.

You may want to increase the ulimit setting for the user accounts running LiveView Server to allow for the 3000 connections permitted by LiveView, plus an additional number to account for network socket connections held open concurrently by StreamBase EventFlow applications in the same server, plus all other concurrent processes. A good starting value when running LiveView Server on Linux is ulimit -n 4096 for the user account that runs the server. You may need to adjust this number higher.
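If the account logs in through a normal PAM session, one way to make the higher limit persistent is an entry in /etc/security/limits.conf (or a file under /etc/security/limits.d/). The user name below, liveuser, is hypothetical; adjust the name and value for your site:

# Hypothetical /etc/security/limits.conf entries for the account that runs LiveView Server
liveuser    soft    nofile    4096
liveuser    hard    nofile    4096

Log out and back in, then verify the new limit with ulimit -n. If LiveView Server is launched as a systemd service instead, the service unit's LimitNOFILE setting governs the limit rather than limits.conf.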
By default, LiveView and StreamBase use the platform's default character encoding when reading files or communicating over certain network protocols. If you require another character encoding, you can change the default settings as follows. This example uses UTF-8 as the new character encoding.
To set the file encoding for LiveView and StreamBase processes, edit the system properties file.encoding and streambase.tuple-charset. These properties are set locally in a project's configuration file of HOCON type com.tibco.ep.ldm.configuration.ldmengine. Edit the ldmengine.conf file for your project as shown:
name = "myldmengine" version = "1.0.0" type = "com.tibco.ep.ldm.configuration.ldmengine" configuration = { LDMEngine = { systemProperties = {"file.encoding" = "UTF-8", "streambase.tuple-charset" = "UTF-8"} } }
This tells all StreamBase processes to emit tuples in UTF-8 encoding. Note that if you specify the file.encoding system property, as in the example above, you must set it to the same encoding (UTF-8 in this example).
To set the default Java encoding for reading character data from files, set the environment variable STREAMBASE_STUDIO_VMARGS. In Windows, you can do this at the StreamBase Command Prompt with the set command, entered as a single line:
set STREAMBASE_STUDIO_VMARGS=-Dfile.encoding=UTF-8 -Dstreambase.tuple-charset=UTF-8 -Xms256m -Xmx1500M
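On Linux or macOS, the equivalent setting (assuming a Bourne-compatible shell) is:

export STREAMBASE_STUDIO_VMARGS="-Dfile.encoding=UTF-8 -Dstreambase.tuple-charset=UTF-8 -Xms256m -Xmx1500M"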
This environment variable changes the default Java encoding used when reading character data from files (file.encoding) and instructs the API to transfer character data as UTF-8 (streambase.tuple-charset). Start StreamBase Studio with the environment variable you just set by running the command-line tool from the same StreamBase Command Prompt (or shell):
sbstudio
Once the STREAMBASE_STUDIO_VMARGS variable is set for StreamBase Studio, the character encoding for LiveView Server is also set to the new encoding.
You can disable alert rules with a system property so that configured alert rules do not run when LiveView starts. This is useful if you have created an alert rule that might damage your system. The system properties liveview.alert.register.startup and liveview.alert.enabled control whether a project's alerts are registered and whether the alert service starts when the server starts. By default, both properties are set to true. If you set these properties to false, you can start LiveView Server without your configured alerts, and still use StreamBase Studio to edit or disable the alerts. To set either of these properties to false, follow these steps:
- Right-click the project's root directory and select the menu option for creating a new StreamBase HOCON configuration file. This opens the new-file dialog.
- Select the root directory for your project.
- Select the LDMEngine type under LiveView Configuration Types and enter a name for the file.
- Complete the dialog. This creates a configuration file of HOCON type com.tibco.ep.ldm.configuration.ldmengine and opens it in the HOCON file editor.
- To prevent configured alerts from being registered at startup, add the following system property under the LDMEngine root object:

  name = "myldmengine"
  version = "1.0.0"
  type = "com.tibco.ep.ldm.configuration.ldmengine"
  configuration = {
    LDMEngine = {
      systemProperties = {"liveview.alert.register.startup" = "false"}
    }
  }

  Setting this property to false means that configured alerts are not registered, so you can start the server and then edit or disable the problem alerts.
- To prevent the alert service from starting, append the following property to the list under systemProperties (adding systemProperties first if you did not add a property in the previous step), as shown in the combined example following this list:

  "liveview.alert.enabled" = "false"
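If you set both properties, the resulting configuration resembles this sketch (the engine name is arbitrary):

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    systemProperties = {
      "liveview.alert.register.startup" = "false"
      "liveview.alert.enabled" = "false"
    }
  }
}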
LiveView Server has the ability to process table queries in parallel, but does not do so by default. You can customize the parallelization and concurrency for each table to improve query performance. However, it is possible to over-specify these features and thereby overload the hardware that runs your LiveView Server instance. Thus, you must use the configuration elements described here with caution and careful testing.
In the lvconf file that defines each LiveView table, two attributes of the table's root element, snapshot-parallelism and snapshot-concurrency, control the parallelism and concurrency of query processing. You can also specify these settings in a Table Space type table, then reference that Table Space in a <table-space-ref> element in the lvconf files for some of the data tables in your project.
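For illustration, a trimmed lvconf sketch follows; the table name and fields are hypothetical, and the real file includes the standard lvconf namespace declarations omitted here. The two attributes appear on the table's root element:

<liveview-configuration>
  <!-- Hypothetical table; snapshot-parallelism and snapshot-concurrency are the attributes of interest -->
  <data-table id="Orders" snapshot-parallelism="2" snapshot-concurrency="1">
    <fields>
      <field name="orderId" type="long"/>
      <field name="quantity" type="int"/>
    </fields>
    <primary-key>
      <field ref="orderId"/>
    </primary-key>
  </data-table>
</liveview-configuration>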
Snapshot parallelism determines the number of data regions used in parallel to publish to and scan from tables. Each data region contains approximately 1/N of the total rows in the table, where N is the snapshot-parallelism value. Use snapshot parallelism to improve load performance, and to improve query performance where a query needs to scan many rows of a table. In general, a higher snapshot-parallelism value means that individual table scan snapshot queries can run faster, provided your server has enough CPU cores available to support the value you specify.
When snapshot-parallelism is set above 1, LiveView uses a hashing algorithm to direct incoming tuples to consistent regions. The default algorithm is based on Java's hashCode(), which is fast but can cause poor balancing for some publishing patterns. An optional, more robust hashing algorithm (xxhash) can be used to address such rare situations by setting system properties in a LiveView engine configuration file:

- For all LiveView tables, use: liveview.hash.algo.default=native|xxhash
- For a specific LiveView table, use: liveview.hash.algo.yourtablename=native|xxhash
This example defines a system property for all tables to use the xxhash option:
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
LDMEngine = {
systemProperties = {"liveview.hash.algo.default" = "xxhash"}
}
}
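For example, to apply xxhash only to a specific table while other tables keep the default algorithm, define the per-table property instead (the table name ItemsSales here is hypothetical):

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    systemProperties = {"liveview.hash.algo.ItemsSales" = "xxhash"}
  }
}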
If the table is not persisted, you can change hashing algorithms between runs without other impact. If the table is persisted, you have two options: remove all restore files and start over, or use a utility available through the StreamBase Component Exchange to change the hash distribution of the existing restore files.
Snapshot concurrency specifies the number of extra threads used to service the snapshot portion of queries. By default, snapshot concurrency is not enabled, which means that LiveView Server's single data-region thread also services all snapshot queries. Setting the snapshot-concurrency attribute to X means there are X extra independent threads for snapshot processing. Additional snapshot query threads are most beneficial when you have ad hoc queries that cannot use indexes and your table size is several hundred thousand rows or larger.
For example, setting snapshot-concurrency=1 for a data table results in two threads: one thread is available to run snapshot queries, while the default data-region thread remains available to handle data being published and all continuous query processing.
With no snapshot-concurrency setting, or by setting it equal to the default of 0, only one task operates at a time. That task could be an update or a snapshot scan. With snapshot-concurrency set to 2, for example, up to three tasks can run simultaneously. These three tasks would be one publish and continuous query process, and two snapshot table scans.
Your LiveView deployment should have more CPU cores than the aggregate number of threads implied by your snapshot-parallelism and snapshot-concurrency settings. Because snapshot-concurrency specifies extra threads in addition to each data region's own thread, each actively queried table uses approximately snapshot-parallelism * (snapshot-concurrency + 1) threads, and the ideal configuration provides one core for each of those threads. For snapshot-parallelism=2 and snapshot-concurrency=1, the ideal LiveView Server would provide 2 * (1 + 1) = 4 cores for each actively queried LiveView data table.
LiveView allows you to control the rate of data outflow from a source table to an aggregation table. This is called data conflation. The publish-interval-millis attribute of the <data-table> tag, if set, adds latency and reduces volume by limiting output from the base table to the aggregate table to the specified publication rate. When set, each update delivers only the current row value (at most one row per pkey) instead of every change as fast as the data arrives. Updates into the base table itself still happen as fast as data arrives.
For tables with an <aggregation> data-source only (not other data-source types), you can add conflate-data=true, which changes only how often the aggregate processor emits results into the table. This period is synchronized with publish-interval-millis. With the default behavior of conflate-data=false, the aggregate processor emits frequently from the source table. With conflate-data=true, only the last aggregate result per pkey is delivered to the table to be processed by the <insert-rule> and <update-rule> expressions.
LiveView uses JDK 11 by default, whose default garbage collector is the G1 collector. You can explicitly specify GC parameters in a HOCON configuration file of type com.tibco.ep.ldm.configuration.ldmengine. For example:
name = "engine.conf" version = "1.0" type = "com.tibco.ep.ldm.configuration.ldmengine" configuration = { LDMEngine = { jvmArgs = [ "-XX:MaxGCPauseMillis=500" "-XX:ConcGCThreads=1" ] } }
If you specify a different JDK to run LiveView, that JDK's default garbage collector is used. For example, if you use the Azul Zing JDK, it has its own garbage collection implementation.
If you have a site policy that specifies using a different Java garbage collector than the defaults specified above, contact product Support for assistance.
The following system properties can affect LiveView deployment behind a proxy.
- liveview.server.xaccelbuffering

  To set the X-Accel-Buffering HTTP header on HTTP endpoints that return tuples, which is required for certain proxies that buffer by default (such as NGINX when configured for TLS), set the liveview.server.xaccelbuffering system property to no in an LDM engine configuration file.

  Setting the property to no means "do not buffer", which is recommended for proxies that buffer. If you do not set it to no for a proxy that buffers, results are extremely delayed or, in some cases, never delivered.

  For example:

  name = "yourLDMengineconfig"
  version = "1.0.0"
  type = "com.tibco.ep.ldm.configuration.ldmengine"
  configuration = {
    LDMEngine = {
      systemProperties = {
        "liveview.server.xaccelbuffering" = "no"
      }
    }
  }

  Additionally:

  - The property has no default value.
  - If you do not set this property, the header is not present.
  - Setting the property to yes results in the header being present with the value yes.
- liveview.server.forwards.use

  To access the originating IP address of a request when it is forwarded through a proxy, set the liveview.server.forwards.use system property to true in an LDM engine configuration file. Setting this option to true displays the forwarded IP address in the Host field of the LVSessions table.

  The property's default value is false, which means the forwarded header value is not used; only the proxy IP address is seen.

  For example:

  name = "yourLDMengineconfig"
  version = "1.0.0"
  type = "com.tibco.ep.ldm.configuration.ldmengine"
  configuration = {
    LDMEngine = {
      systemProperties = {
        "liveview.server.forwards.use" = "true"
      }
    }
  }