The TIBCO StreamBase® Operator For H2O Model Evaluator enables StreamBase applications to execute numerical models generated with the H2O Fast Scalable Machine Learning package. See http://www.h2o.ai/ and the external H2O documentation.
H2O is distributed model training and evaluation software. With an H2O cluster it is possible to process much larger datasets than with traditional single-process solutions. H2O is frequently used in Big Data solutions to process data and build predictive models. An additional advantage of H2O is its runtime-oriented model representation (POJO/GenModel). With the H2O operator, you can deploy an arbitrary number of models to be executed against incoming events. The EventFlow designer is responsible for converting the incoming data into the features the models expect, which frequently requires event enrichment such as cross-referencing, attribute lookup, or incorporating prior events related to the same context.
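To illustrate the kind of model the operator evaluates, the following is a minimal sketch of scoring a single row against an exported H2O model with the H2O GenModel (easy-prediction) API, outside of StreamBase. The model file name, feature names, and values are assumptions for illustration only.

```java
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class GenModelScoringSketch {
    public static void main(String[] args) throws Exception {
        // Load an exported model; the file name is a placeholder.
        EasyPredictModelWrapper model =
                new EasyPredictModelWrapper(MojoModel.load("gbm_model.zip"));

        // The row keys must match the model's input feature names
        // (these names and values are illustrative only).
        RowData row = new RowData();
        row.put("age", "42");
        row.put("income", "55000");
        row.put("segment", "retail");

        // For a binomial model, the prediction carries the winning class
        // label and the per-class probabilities.
        BinomialModelPrediction p = model.predictBinomial(row);
        System.out.println("class = " + p.label);
        System.out.println("probabilities = " + java.util.Arrays.toString(p.classProbabilities));
    }
}
```

The H2O operator performs the equivalent conversion and scoring for each deployed model, using the fields of the incoming tuple as the row values.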
The operator processes input data given as a tuple or a list of tuples. The tuple schema corresponds to the input parameters of the models. For each model, the operator generates output data that matches the defined output schema. Depending on the input data, the output can be a single tuple or a list of tuples.
Dynamic model definitions allow you to provide additional metadata to the deployed models. The metadata is attached to the model result, allowing the EventFlow to take action based on the model attributes. Examples of attributes include: champion/challenger flag, category for propensity scoring, and so on.
The H2O model evaluator is implemented using the H2O library.
Model operators support an arbitrary number of simultaneously deployed models, and can score both single samples and data frames.
This section describes the properties you can set for this operator, using the various tabs of the Properties view in StreamBase Studio.
Name: Use this required field to specify or change the name of this instance of this component, which must be unique in the current EventFlow module. The name must contain only alphabetic characters, numbers, and underscores, and no hyphens or other special characters. The first character must be alphabetic or an underscore.
Operator: A read-only field that shows the formal name of the operator.
Start with application: If this field is set to Yes (default) or to a module parameter that evaluates to true, this instance of this operator starts as part of the JVM engine that runs this EventFlow module. If this field is set to No or to a module parameter that evaluates to false, the operator instance is loaded with the engine, but does not start until you send an sbadmin resume command, or until you start the component with StreamBase Manager.
Enable Error Output Port: Select this check box to add an Error Port to this component. In the EventFlow canvas, the Error Port shows as a red output port, always the last port for the component. See Using Error Ports to learn about Error Ports.
Description: Optionally enter text to briefly describe the component's purpose and function. In the EventFlow canvas, you can see the description by pressing Ctrl while the component's tooltip is displayed.
Property | Type | Description |
---|---|---|
Control Port | check box | Enables dynamic reconfiguration of the model list. The control port also enables the control output port which reports status of the model loading request. The control port supports all-or-nothing semantics. That is, either the full list successfully loads and replaces the currently deployed models, or it reports failure. |
Status Port | check box | Enables failure notifications. If the scoring fails, the failure is emitted to the status port, including the original input tuple. |
Timing Info | check box | Enables fine-grained timing information. When enabled, the operator collects the effective times of input conversion, model evaluation, and output conversion, in nanoseconds. |
Log Level | enumeration (default: INFO) | Controls the level of verbosity the operator uses to issue informational traces to the console. This setting is independent of the containing application's overall log level. Available values, in increasing order of verbosity, are: OFF, ERROR, WARN, INFO, DEBUG, TRACE. |
Property | Type | Description |
---|---|---|
Model type | enumeration | Model representation type. |
Model URLs | name/value pairs | List of models specified at design time. Each model consists of a name and a URL pointing to the model definition. Models can also be loaded from HDFS. |
Property | Type | Description |
---|---|---|
Output schema type | enumeration | Type of result representation. |
Result Data Schema | schema | Anticipated schema for model output. Only fields defined in the schema are used in the output tuple. |
The Custom schema contains a subset of the Generic schema fields. The fields filled by the operator are:
- prediction - double value representing the regression result
- class - string value representing the result class label
- score - score of the result class
- probability - probability of the result class
- probabilities - list of scores for each class
- probabilities.class - class label
- probabilities.probability - class probability
Note
In H2O, the interpretation of the result depends on the model type (see the sketch after this list):
- regression - the prediction field is filled.
- binomial classification - class and score are filled. The 1/TRUE class result is passed as the score. In H2O, the default discrimination threshold maximizes the F1 score.
- multinomial classification - class and probability are filled. Each response class has an entry in probabilities with its class label and probability. The winning class is the one with the highest probability.
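For reference, the sketch below shows how these three model types surface their results in the H2O GenModel easy-prediction API, which mirrors how the operator fills the prediction, class, score, and probabilities fields. The model path and the surrounding helper are assumptions for illustration; only the GenModel calls are the library's own API.

```java
import hex.ModelCategory;
import hex.genmodel.GenModel;
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;
import hex.genmodel.easy.prediction.MultinomialModelPrediction;
import hex.genmodel.easy.prediction.RegressionModelPrediction;

public class ResultInterpretationSketch {
    static void score(String mojoPath, RowData row) throws Exception {
        GenModel genModel = MojoModel.load(mojoPath);  // path is a placeholder
        EasyPredictModelWrapper model = new EasyPredictModelWrapper(genModel);
        ModelCategory category = genModel.getModelCategory();

        if (category == ModelCategory.Regression) {
            // Corresponds to the operator filling the "prediction" field.
            RegressionModelPrediction p = model.predictRegression(row);
            System.out.println("prediction = " + p.value);
        } else if (category == ModelCategory.Binomial) {
            // Corresponds to "class" and "score"; the probability of the
            // 1/TRUE class is reported as the score.
            BinomialModelPrediction p = model.predictBinomial(row);
            System.out.println("class = " + p.label
                    + ", score = " + p.classProbabilities[1]);
        } else if (category == ModelCategory.Multinomial) {
            // Corresponds to "class", "probability", and one "probabilities"
            // entry per response class; the winner has the highest probability.
            MultinomialModelPrediction p = model.predictMultinomial(row);
            System.out.println("class = " + p.label + ", probabilities = "
                    + java.util.Arrays.toString(p.classProbabilities));
        }
    }
}
```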
Use the Concurrency tab to specify parallel regions for this instance of this component, or multiplicity options, or both. The Concurrency tab settings are described in Concurrency Options, and dispatch styles are described in Dispatch Styles.
Caution
Concurrency settings are not suitable for every application, and using these settings requires a thorough analysis of your application. For details, see Execution Order and Concurrency, which includes important guidelines for using the concurrency options.
The data port is the default input port for the model operator and is always enabled. Use the data port to execute model scoring.
The default schema for the data input port is:
- frame, tuple or list(tuple). Samples to be scored by the deployed models. The tuple structure contains primitive fields (int, long, double, string, or boolean) with names corresponding to the model input fields. The tuple may also contain sparse dictionaries. A sparse dictionary is a list of tuples with a name (string) or idx (int) field and a value field of any supported primitive type. Sparse dictionaries are used when 1) the fields are not known at design time, 2) StreamBase does not support the field names, or 3) the number of fields is large. The input tuple may contain any number of sparse dictionaries, for example to provide categorical and continuous values, or fields from various domains (see the sketch after this section).
- modelName (optional), string. If this field exists and is not null, it specifies which model the input tuple is scored against. If this field is missing or null, all models are scored.
- *. Arbitrary pass-through parameters.
Unrecognized fields are transparently passed through. The frame field is not propagated; the scores field is not allowed.
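As an illustration of the sparse dictionary concept, the following is a minimal sketch of flattening name/value entries into the RowData structure that an H2O GenModel scorer consumes. Representing each entry as a Map with name and value keys is an assumption standing in for the StreamBase tuple fields; it is not the operator's internal implementation.

```java
import hex.genmodel.easy.RowData;
import java.util.List;
import java.util.Map;

public class SparseDictionarySketch {
    /**
     * Copies each name/value entry of a sparse dictionary into a RowData
     * keyed by the model's input feature names.
     */
    static RowData toRowData(List<Map<String, Object>> sparseDictionary) {
        RowData row = new RowData();
        for (Map<String, Object> entry : sparseDictionary) {
            String name = (String) entry.get("name");
            Object value = entry.get("value");
            // The GenModel easy-prediction API expects String or Double values.
            row.put(name, value instanceof Number
                    ? Double.valueOf(((Number) value).doubleValue())
                    : String.valueOf(value));
        }
        return row;
    }
}
```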
The scores port provides a list of model evaluation results.
The schema for the scores output port is:
- scores, list(tuple). A list containing one record for each currently deployed model.
- scores.modelName, string. Name of the model, as defined in the Model URLs property or provided by the control port.
- scores.modelUrl, string. URL defining the model, as configured in the Model URLs property or provided by the control port.
- scores.modelData, blob. The binary model definition, if binary data was used to load the model.
- scores.score, tuple or list(tuple). The type depends on the type of the frame input; for a list input, the scores are in the same order as the input list. The schema is defined by the Result Data Schema property.
- scores.*. Arbitrary parameters provided during model redeployment on the control port (see the sketch after this section).
- *. Input parameters other than frame.
The scores port transparently replicates unrecognized fields. The frame field is not propagated.
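As a purely hypothetical illustration of consuming the scores list downstream, the sketch below selects the champion model's result using an arbitrary champion attribute attached at deployment time through the control port (one possible use of the scores.* pass-through and the model metadata mentioned earlier). The Score record and its fields are assumptions for illustration; in practice this selection would typically be expressed with standard StreamBase operators in the EventFlow module.

```java
import java.util.List;
import java.util.Optional;

public class ChampionSelectionSketch {
    /** Hypothetical stand-in for one entry of the scores list. */
    record Score(String modelName, boolean champion, double prediction) {}

    /** Returns the champion model's result, if one is deployed. */
    static Optional<Score> championScore(List<Score> scores) {
        return scores.stream().filter(Score::champion).findFirst();
    }

    public static void main(String[] args) {
        List<Score> scores = List.of(
                new Score("gbm_v1", true, 0.82),   // champion
                new Score("gbm_v2", false, 0.79)); // challenger
        championScore(scores).ifPresent(s ->
                System.out.println(s.modelName() + " -> " + s.prediction()));
    }
}
```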
The control port enables runtime redeployment of models. Models are deployed with all-or-nothing semantics: either all of the provided models load successfully and fully replace the current set, or the operator reports a failure.
The schema for the control input port is:
- models, list(tuple). A list containing one record for each model to be deployed.
- models.modelName, string. Logical name of the model.
- models.modelUrl, string. URL defining the model.
- models.modelData, blob. The binary data of the model. This field can be used to load a model directly from any source, and is only used if models.modelUrl is null or empty (see the sketch after this section).
- models.*. Arbitrary parameters describing the model. They are later provided in the score.
- *. Arbitrary parameters provided during model redeployment on the control port.
The status port transparently replicates unrecognized fields from the control input; therefore, do not use status or message as field names on the control input port.
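To show what typically goes into models.modelData, here is a minimal sketch (plain Java, no StreamBase API) of reading an exported model file into a byte array that an upstream component could then set as the blob value of the control tuple. The file name is a placeholder; how the control tuple itself is built and enqueued is outside the scope of this sketch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ModelBlobSketch {
    /** Reads an exported H2O model archive so it can be sent as models.modelData. */
    static byte[] readModelBytes(String path) throws IOException {
        // Placeholder path; the bytes become the blob value of models.modelData
        // when models.modelUrl is left null or empty.
        return Files.readAllBytes(Path.of(path));
    }

    public static void main(String[] args) throws IOException {
        byte[] modelData = readModelBytes("gbm_model.zip");
        System.out.println("model size = " + modelData.length + " bytes");
    }
}
```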
The status port provides responses for runtime model deployment. The tuples are emitted only as responses to the control port tuples.
The schema for the status output port is:
- status, string. Status of the deployment: either success or failure.
- message, string. Descriptive status message.
- models, list(tuple). A list containing one record for each model to be deployed.
- models.status, string. Status of the model loading: either success or failure.
- models.message, string. Descriptive model status message.
- models.modelName, string. Logical name of the model.
- models.modelUrl, string. URL defining the model.
- models.modelData, blob. The binary model definition, if binary data was used to load the model.
- models.*. Arbitrary parameters describing the model. They are later provided in the score.
- *. Parameters other than models.
The status port transparently replicates unrecognized fields from the control port.