The Spotfire Streaming Input Adapter for Google Protocol Buffers (Protobuf for short) parses a Protobuf type byte array based on a custom-formatted descriptor that you provide.
You can either provide a .desc file describing the desired message type or, by default, let the adapter generate an internal descriptor based on the input tuple's schema.
This section describes the properties you can set for this adapter, using the various tabs of the Properties view in StreamBase Studio.
Name: Use this required field to specify or change the name of this instance of this component. The name must be unique within the current EventFlow module. The name can contain alphanumeric characters, underscores, and escaped special characters. Special characters can be escaped as described in Identifier Naming Rules. The first character must be alphabetic or an underscore.
Adapter: A read-only field that shows the formal name of the adapter.
Class name: Shows the fully qualified class name that implements the functionality of this adapter. If you need to reference this class name elsewhere in your application, you can right-click this field and select Copy from the context menu to place the full class name in the system clipboard.
Start options: This field provides a link to the Cluster Aware tab, where you configure the conditions under which this adapter starts.
Enable Error Output Port: Select this checkbox to add an Error Port to this component. In the EventFlow canvas, the Error Port shows as a red output port, always the last port for the component. See Using Error Ports to learn about Error Ports.
Description: Optionally, enter text to briefly describe the purpose and function of the component. In the EventFlow Editor canvas, you can see the description by pressing Ctrl while the component's tooltip is displayed.
Property | Type | Description |
---|---|---|
File Name | resource chooser | A compiled Google Protobuf descriptor file (.desc).
Message Index | int | A descriptor file can contain more than one message type; this index specifies which message type to use. Disabled when no descriptor file is provided above.
Protobuf Field Name | string | The name of the input field that contains the Protobuf type byte array (optional).
Timestamp Format | string | The format used when converting values to the StreamBase timestamp data type.
Enable Pass Through Fields | check box | When enabled, all input fields are passed through together with the output.
Pass Through Fields Field Name | string | The name of the field that contains the pass-through fields.
Log Level | drop-down list | Controls the level of verbosity the adapter uses to issue informational traces to the console. This setting is independent of the containing application's overall log level. Available values, in increasing order of verbosity, are: OFF, ERROR, WARN, INFO, DEBUG, TRACE.
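For example, assuming the adapter accepts java.text.SimpleDateFormat-style patterns (a common convention among StreamBase adapters; this pattern is illustrative, not taken from the adapter's documentation), a Timestamp Format value might look like:
yyyy-MM-dd HH:mm:ss.SSS
which would convert a Protobuf string value such as 2024-03-01 09:30:00.000 into a StreamBase timestamp.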
Property | Type | Description |
---|---|---|
Proto Output Schema | schema | The schema used when converting a Protobuf type byte array into output tuples.
Use the settings in this tab to enable this operator or adapter for runtime start and stop conditions in a multi-node cluster. During initial development of the fragment that contains this operator or adapter, and for maximum compatibility with releases before 10.5.0, leave the Cluster start policy control in its default setting, Start with module.
Cluster awareness is an advanced topic that requires an understanding of StreamBase Runtime architecture features, including clusters, quorums, availability zones, and partitions. See Cluster Awareness Tab Settings on the Using Cluster Awareness page for instructions on configuring this tab.
Use the Concurrency tab to specify parallel regions for this instance of this component, or multiplicity options, or both. The Concurrency tab settings are described in Concurrency Options, and dispatch styles are described in Dispatch Styles.
Caution
Concurrency settings are not suitable for every application, and using these settings requires a thorough analysis of your application. For details, see Execution Order and Concurrency, which includes important guidelines for using the concurrency options.
The data port is the default input port for the Protobuf2Tuple adapter, and is always enabled. Use the data port to supply the Protobuf type byte array that the adapter converts into tuple data.
The following table shows how StreamBase data types map to Protobuf data types:
StreamBase Data Type | Protobuf Data Type |
---|---|
STRING | STRING
DOUBLE | DOUBLE/FLOAT
BOOL | BOOLEAN/STRING
LONG | INT64
TUPLE | MESSAGE
INT | INT32
BLOB | BYTE_STRING
TIMESTAMP | LONG/STRING
- With descriptor file: the output tuple's schema is generated based on the provided file.
- Without descriptor file: an internal descriptor is generated based on the input data's schema.
- Pass Through Fields: if enabled, all data is passed through together with the output tuple.
To customize the output schema for the message, you must provide a descriptor file compiled from a Google Protobuf .proto file (both proto2 and proto3 are supported).
You can generate the descriptor file from the command line:
protoc protoFile.proto --descriptor_set_out=descFile.desc
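If your .proto file imports other definitions (as the example below does), you may also want to add protoc's --include_imports flag so the imported dependencies are written into the descriptor set as well; whether the adapter requires this depends on your proto, so treat it as an option to try:
protoc protoFile.proto --include_imports --descriptor_set_out=descFile.desc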
Below is a proto file example:
syntax = "proto3"; import "google/protobuf/descriptor.proto"; message Input { string name = 1; int32 id = 2; // Unique ID number for this person. Address add = 3; repeated PhoneNumber phones = 4; //List message PhoneNumber { string number = 1; } message Address { string zip = 1; string street = 2; } }