Contents
- StreamBase Container Introduction
- Why Use Containers?
- Why Not Use Containers?
- Container Name Standards
- Containers and StreamBase Path Notation
- The System Container
- Containers and StreamBase URI Syntax
- Multiple URI Syntax
- Addressing Multiple Containers Independently
- Adding Containers
- Removing Containers
- Getting a List of Containers
- Getting More Information About Containers
- Managing Running Modules in Containers
- Enabling and Disabling Enqueue or Dequeue
- Monitoring Container Information
- Client Library Support for Containers
- Unsupported Features
- Related Topics
A StreamBase container is a structure that encapsulates and names a single EventFlow module, complete with Input Stream or adapter entry points and Output Stream or adapter exit points. There is one EventFlow module per container, and each container's name serves as a handle for that module's resources. Multiple containers and modules can be run in a single EventFlow or LiveView engine process, and you can share resources between containers.
Even the simplest EventFlow fragment has at least two StreamBase containers:
- A container with the default name default that holds the top-level EventFlow module running in the fragment.
- A container with the fixed name system that emits continuous streams of information and statistics about the currently running server engine. (For more on the system container, see The System Container below.)
A LiveView fragment has the following containers:
- The primary, top-level StreamBase container. In the LiveView case, this container has the fixed name default.
- The system container.
- One container for each configured LiveView data table, where the table name and container name are the same.
- One container each for the LiveView system tables, LiveViewStatistics and LVAlerts.
You can define, add, and modify containers and their properties for an EventFlow fragment in several ways:
- At authoring time, by means of configuration using the EventFlowDeployment root object and container object in the com.tibco.ep.streambase.configuration.sbengine configuration type.
- Dynamically at the command line, using an epadmin or legacy sbadmin command (see the example following this list).
- In an administration client you write, using the StreamBaseAdminClient class.
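For example, the dynamic command-line approach might look like the following, where the container name holder2 and the module name com.example.PeripheralFeed are hypothetical placeholders (the full syntax is described in Adding Containers below):
epadmin --ad=adminport add container --name=holder2 --module=com.example.PeripheralFeed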
Using StreamBase containers enables several useful features. You can:
- Design a system with your primary data analysis in one or more primary containers, but with connections to and from external data feeds isolated into separate, peripheral containers.
- Configure your project to start with separate containers at node installation time.
- Dynamically enable or disable peripheral containers while the primary one continues.
- Configure your project with one or more containers that start in a suspended state, or with their enqueuing or dequeuing disabled. Manage the state of these containers dynamically at runtime.
- Dynamically add or remove subordinate or peripheral EventFlow modules from a running EventFlow engine process.
- Connect the output of one EventFlow module to the input of another EventFlow module, each running in a separate container within the same engine.
- Enable and disable a runtime tuple trace file for any container, including default.
- Dynamically modify a running container to add or change a container connection.
- Dynamically manage containers and container connections from the command line with the epadmin or sbadmin commands, or programmatically with a custom administration client.
Each container has its own thread of execution within the enclosing EventFlow engine. Containers behave much like operators or modules marked with the Run this component in a parallel region setting on the Concurrency tab of the Properties view. On a multi-core machine, one container does not block the execution of another container.
StreamBase containers are restricted to a single JVM engine. This means:
- Container management is not cluster aware. In general, this means you must manage container conditions separately for each node.
  - For example, if you dynamically start or stop a container in one node (say, node A), that action does not affect the same-named container in node B.
  - This is also true of dynamically managing the enqueue and dequeue state of a module in a container: this must be managed separately for each node in your cluster.
- There is no such thing as a node-to-node container connection. All container connections must run within the confines of a single JVM engine.
- Nodes running engines that have multiple containers do participate in the cluster's HA and distributed computing system, to the extent that nodes are copies of each other in a StreamBase Runtime cluster. However, manual container maintenance can allow one node to get out of sync with the others in its cluster.
As an alternative to configuration with StreamBase containers, consider using the cluster awareness feature of operators and adapters to let them start and stop based on cluster conditions.
The name you assign to containers must follow the StreamBase identifier naming rules, as described in Identifier Naming Rules.
StreamBase's formal path notation is designed to avoid naming conflicts when you run multiple containers in the same server and when operators are run in separate threads.
StreamBase path notation uses the following syntax, with a period separator between each element:
[containerName.][moduleRefName.[moduleRef2Name.[...]]]entityName
The container name is shown as optional because in most contexts the default container is used and need not be specified.
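For example, assuming a hypothetical container named holder2 whose top-level module contains a module reference named InnerModule, which in turn contains an Input Stream named InputStream1, the fully qualified path is:
holder2.InnerModule.InputStream1
In the default container, the same stream could be addressed as just InnerModule.InputStream1.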
When an EventFlow engine process starts up, a StreamBase container named system is always included alongside the container, usually named default, that holds the top-level EventFlow module. The system container and the streams it emits are described in a separate page, System Container and Control Streams.
The URI syntax for addressing EventFlow modules with legacy sb* commands allows you to connect to a particular container in a running EventFlow engine. The URI syntax is:
sb://hostname:portnum/containerName
For example:
sb://localhost:portnum/holder2
If you leave off the container-name portion of the URI, the container named default is assumed.
Thus, the following commands are equivalent:
sbc -u sb://localhost:portnum/default
sbc -u sb://localhost:portnum
When you specify a container name in the URI, you can omit the container name from any subsequent stream or operator names in the same command. For example, the following commands are equivalent:
sbc -u sb://localhost/holder2 enqueue InputStream1
sbc -u sb://localhost enqueue holder2.InputStream1
See sburi(5) for the details of StreamBase URI syntax.
Some sbc commands that accept StreamBase URIs can specify a comma-separated list of URIs to address two or more EventFlow engines at once. This feature is used primarily to support a cluster of EventFlow engines that are using high availability features.
See Multiple URI Syntax for HA Scenarios for a discussion of this subject.
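As a sketch only, assuming two hypothetical servers named primary and secondary that host the same module, the comma-separated form might look like this:
sbc -u sb://primary:portnum,sb://secondary:portnum dequeue OutputStream1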
If a server process has multiple containers, you can enqueue to or dequeue from multiple containers at the same time. For example, you can dequeue with a command like the following:
sbc dequeue holder1.OutputStream1 holder2.OutputStream1
If you specify a container name in the URI, you can still dequeue or enqueue from different containers. For example:
sbc -u sb://localhost/holder1 dequeue OutputStream1 holder2.OutputStream1
OutputStream1 will be dequeued from the module in container holder1, while holder2.OutputStream1 will be dequeued from the container holder2.
Because tuples are queued between modules, there is some overhead introduced when tuples cross module boundaries.
There are several ways to add a container to a running EventFlow engine process:
- Use the epadmin add container command with this syntax:
  epadmin add container --name=container-name --module=module-name
- Use the sbadmin command, targeting a running server process. The format is:
  sbadmin addContainer container-name module-name.sbapp
- Use the StreamBaseAdminClient class in a custom client module.
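For example, to add a hypothetical container named holder2 running a hypothetical module, the two command-line forms look like the following, with a quick check afterward that the container is now listed (the module names are placeholders for your own project's module):
epadmin --ad=adminport add container --name=holder2 --module=com.example.PeripheralFeed
sbadmin addContainer holder2 PeripheralFeed.sbapp
epadmin --ad=adminport display container | grep "Path ="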
You can remove containers from a running server process with:
- The epadmin remove container command:
  epadmin remove container --name=container-name
- The sbadmin command, using this syntax:
  sbadmin removeContainer container-name
- The StreamBaseAdminClient class in a custom client module.
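For example, to remove a hypothetical container named holder2:
epadmin --ad=adminport remove container --name=holder2
sbadmin removeContainer holder2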
When a container is removed:
- Enqueuer clients that were accessing the module in that container receive an exception: ContainerManager$NonExistentContainerException
- Dequeuer clients receive an EOF if they are dequeuing only from that container. If they are connected to multiple containers, then they continue to receive tuples from the other containers.
- Containers that depend on a removed container for input or output continue to function. Their input or output disappears.
Use the epadmin --ad=adminport display container command to show a list of containers, including details about each container. For example:
Engine = com_tibco_sb_sample_bestbidsandasks_BestBidsAsks0
Path = system
Type = SYSTEM
Enqueue = ENABLED
Dequeue = ENABLED
State = RUNNING
...
Engine = com_tibco_sb_sample_bestbidsandasks_BestBidsAsks0
Path = BBA
Type = NORMAL
Enqueue = ENABLED
Dequeue = ENABLED
State = RUNNING
Main Module = com.tibco.sb.sample.bestbidsandasks.BestBidsAsks
Data Distribution Policy =
Availability Zone =
Partition =
Backup Nodes =
Input Streams = NYSE_Feed
Output Streams = BestAsks,BestBids
Trace Status = Disabled
...
You can narrow the output to show only the container names with a command like the following:
epadmin --ad=adminport display container | grep "Path ="        // macOS or Linux
epadmin --ad=adminport display container | findstr /C:"Path ="  // Windows
You can also use the sbc list command:
sbc -u sburi list container
which produces output like the following example:
container default
container holder2
container system
As shown in the previous section, the epadmin display container command shows several lines of detail about each container. You can add the --detailed parameter to add the list of referenced modules. For example:
...
Engine = com_tibco_sb_sample_bestbidsandasks_BestBidsAsks0
Path = BBA
Type = NORMAL
Enqueue = ENABLED
Dequeue = ENABLED
State = RUNNING
Main Module = com.tibco.sb.sample.bestbidsandasks.BestBidsAsks
Referenced Modules = {"Module":"com.tibco.sb.sample.bestbidsandasks.BestBidsAsks",
"Referenced Modules":"none"}
Data Distribution Policy =
Availability Zone =
Partition =
Backup Nodes =
Input Streams = NYSE_Feed
...
You can also use the sbc describe command, which shows you the container name and any connections specified when that container was added.
<container name="BBA" enqueue="ENABLED" dequeue="ENABLED" state="RUNNING"> </container>
You can manage multiple container-bound modules running in a single engine process using either epadmin or sbadmin commands.
The epadmin commands are:
epadmin shutdown container --name=container-name
epadmin suspend container --name=container-name
epadmin resume container --name=container-name
epadmin restart container --name=container-name
epadmin modify container --name=container-name
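For example, to temporarily suspend the module running in a hypothetical container named holder2 and later resume it:
epadmin --ad=adminport suspend container --name=holder2
epadmin --ad=adminport resume container --name=holder2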
Use modify container to:
- configure runtime tuple tracing
- start, stop, or pause enqueuing to or dequeueing from a container, described next
The relevant sbadmin commands are:
sbadmin {shutdown | suspend | resume} container-name
sbadmin restartContainer container-name
sbadmin removeContainer container-name
sbadmin modifyContainer container-name
Use sbadmin modifyContainer to:
- add a connection between running containers
- remove a container connection
- configure runtime tuple tracing
- start, stop, or pause enqueuing to or dequeueing from a container, described next
You can dynamically start, stop, and pause enqueuing or dequeuing to and from running containers. Use any of the following commands:
epadmin modify container --name=container-name
epadmin add container --name=container-name
sbadmin addContainer container-name
sbadmin modifyContainer container-name
All four commands take the following options:
--enqueue=STREAMSTATUS
--dequeue=STREAMSTATUS
The values for STREAMSTATUS can be:
| For epadmin | For sbadmin | Description |
|---|---|---|
| enabled | ENABLED | This is the default. |
| disabled | DISABLED | The enqueue or dequeue will actively refuse any enqueue or dequeue requests, and throw an exception. |
| droptuples | DROP_TUPLES | The enqueue or dequeue will silently drop tuples. |
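For example, to silently discard output from a hypothetical container named holder2 during maintenance, and later restore normal dequeuing:
epadmin --ad=adminport modify container --name=holder2 --dequeue=droptuples
epadmin --ad=adminport modify container --name=holder2 --dequeue=enabled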
When you interrupt queuing for a module running in a container, you also interrupt queuing for any external clients, input adapters, and output adapters that were communicating with that module. The following table explains how various combinations are affected:
| Client or Adapter Type | ENABLED | DROP_TUPLES | DISABLED |
|---|---|---|---|
| Input adapter | Can enqueue | Off | Off |
| Enqueuing client | Can enqueue | Tuples silently dropped | Connections actively refused, with exception thrown |
| Output adapter | Can dequeue | Off | Off |
| Dequeuing client | Can dequeue | Tuples silently dropped | Connections actively refused, with exception thrown |
StreamBase provides a number of ways to monitor container information, as described in Connecting to System Streams.
The StreamBase Client libraries for Java, C++, .NET, and Python support containers. You can write your own modules to connect to and interact with EventFlow engines and the containers therein.
See API Guide for more information.
The following StreamBase features are not supported with containers:
- Multiple modules per container
- Ad-Hoc queries