The Spotfire Streaming XML File Writer for Apache Hadoop Distributed File System (HDFS) is an embedded adapter suitable for streaming tuples to a text file formatted with XML tags.
The XML file created by this adapter does not contain a root node. Each tuple streamed by the adapter should be treated as an individual document. Input fields of type list or tuple are not supported.
Each field value can optionally be parsed for XML reserved characters such as <, >, and &, and XML-escaped if any are detected. Parsing for XML escaping incurs a performance penalty; leave this feature disabled (the default) if you are sure your input data contains no XML reserved characters.
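The escaping pass can be illustrated with a minimal sketch. The helper name `escape_field` is hypothetical, not the adapter's actual code; Python's standard library `escape` handles the three XML reserved characters named above.

```python
from xml.sax.saxutils import escape

# Hypothetical helper illustrating the optional escaping pass: XML reserved
# characters (&, <, >) in a field value are replaced with entity forms
# before the value is written to the XML file.
def escape_field(value: str) -> str:
    return escape(value)

print(escape_field("a < b & c"))  # a &lt; b &amp; c
```

A value containing no reserved characters passes through unchanged, which is why skipping the scan entirely is safe when the input is known to be clean.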
The adapter rolls over its output file when it reaches the size specified in the optional Max File Size parameter. When an output file is rolled over, the current File Name is appended with the current date, formatted as yyyyMMddhhmmss. If an output file was already created with a particular date suffix, then subsequent log files are further appended with the suffix -0, -1, -2, and so on.
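The naming scheme above can be sketched as follows. This is illustrative only, not the adapter's implementation; the function name and the use of a set to stand in for existing files on disk are assumptions.

```python
from datetime import datetime

# Illustrative sketch of the rollover naming scheme: the rolled-over file
# gets the current date as a suffix (yyyyMMddhhmmss), and if that name
# already exists, -0, -1, -2, ... is appended until the name is unique.
def rollover_name(base: str, when: datetime, existing: set) -> str:
    stamp = when.strftime("%Y%m%d%I%M%S")  # %I = 12-hour "hh", as in the docs
    candidate = base + stamp
    n = 0
    while candidate in existing:
        candidate = f"{base}{stamp}-{n}"
        n += 1
    existing.add(candidate)
    return candidate

seen = set()
t = datetime(2024, 1, 2, 3, 4, 5)
print(rollover_name("out.xml", t, seen))  # out.xml20240102030405
print(rollover_name("out.xml", t, seen))  # out.xml20240102030405-0
```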
The adapter flushes writes to disk after every tuple unless Flush Interval is specified. If Flush Interval is specified, then a flush() is performed asynchronously. Each flush and write to the output file is synchronized to prevent partial tuples from being written to disk.
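The flush policy above can be sketched as a small writer class. This is an assumption-laden illustration, not the adapter's code: the class name is hypothetical, a timer stands in for the asynchronous flush, and a lock serializes writes and flushes so no partial tuple reaches the stream.

```python
import io
import threading

# Illustrative sketch of the flush policy: with no Flush Interval each
# write is flushed immediately; with an interval, a background timer
# flushes asynchronously. The lock serializes writes and flushes.
class XmlFileWriterSketch:
    def __init__(self, stream, flush_interval_ms=0):
        self.stream = stream
        self.lock = threading.Lock()
        self.interval = flush_interval_ms / 1000.0
        if self.interval > 0:
            self._schedule_flush()

    def _schedule_flush(self):
        timer = threading.Timer(self.interval, self._flush_and_reschedule)
        timer.daemon = True
        timer.start()

    def _flush_and_reschedule(self):
        with self.lock:
            self.stream.flush()
        self._schedule_flush()

    def write_tuple(self, xml_text):
        with self.lock:
            self.stream.write(xml_text + "\n")
            if self.interval == 0:
                self.stream.flush()  # no interval: flush after every tuple
```

With an interval of 0 every tuple reaches disk immediately; a positive interval trades durability for throughput, since tuples written between flushes can be lost on a crash.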
Note
When using the HDFS core libraries on Windows, you may see a console message similar to Failed to locate the winutils binary in the hadoop binary path. The Apache Hadoop project acknowledges that this is an error to be fixed in a future Hadoop release. In the meantime, you can either ignore the error message as benign, or you can download, build, and install the winutils executable as discussed on numerous Internet resources.
The HDFS XML File Writer adapter is configured with the properties shown in the following sections. In certain cases, properties can also be set in the StreamBase Server configuration file.
Name: Use this required field to specify or change the name of this instance of this component. The name must be unique within the current EventFlow module. The name can contain alphanumeric characters, underscores, and escaped special characters. Special characters can be escaped as described in Identifier Naming Rules. The first character must be alphabetic or an underscore.
Adapter: A read-only field that shows the formal name of the adapter.
Class name: Shows the fully qualified class name that implements the functionality of this adapter. If you need to reference this class name elsewhere in your application, you can right-click this field and select Copy from the context menu to place the full class name in the system clipboard.
Start options: This field provides a link to the Cluster Aware tab, where you configure the conditions under which this adapter starts.
Enable Error Output Port: Select this checkbox to add an Error Port to this component. In the EventFlow canvas, the Error Port shows as a red output port, always the last port for the component. See Using Error Ports to learn about Error Ports.
Description: Optionally, enter text to briefly describe the purpose and function of the component. In the EventFlow Editor canvas, you can see the description by pressing Ctrl while the component's tooltip is displayed.
Property | Description | Default |
---|---|---|
File Name (string) | The local file path. | None |
User (string) | The user to use for HDFS file access. An empty value means use the current system user. | None |
Use Default Charset (optional bool) | Specifies whether the default Java platform character set should be used. If this checkbox is cleared, a valid character set name must be specified for the Character Set property. | cleared |
Character Set (optional string) | The character set for the XML file you are writing. Note: If you specify a non-ASCII character set such as UTF-8, remember that character mappings are not necessarily one byte to one byte in non-ASCII character sets. | The setting of the system property streambase.tuple-charset, or US-ASCII if the property is not set. |
Truncate File (bool) | If true, truncate the output file on startup. | false, append |
Max File Size (optional int) | The maximum XML file size in bytes before roll; 0 = no roll. | 0, unlimited |
Buffer Size (optional int) | The buffer size in bytes, which must be greater than or equal to 0; 0 = no buffer. | 0, unbuffered |
Flush Interval (optional int) | The frequency with which to force flush, in milliseconds; 0 = always flush. | 0, write immediately |
Escape (check box) | If selected, XML reserved characters in input tuples such as <, >, and & are XML-encoded as &lt;, &gt;, and &amp;, respectively. If cleared, such characters are embedded in the resulting XML verbatim. | cleared |
Log Level | Controls the level of verbosity the adapter uses to send notifications to the console. This setting can be higher than the containing application's log level. If set lower, the system log level will be used. Available values, in increasing order of verbosity, are: OFF, ERROR, WARN, INFO, DEBUG, TRACE. | INFO |
Property | Data Type | Description |
---|---|---|
Buffer Size (Bytes) | Int | The size of the buffer to be used. If empty, the default is used. |
Replication | Short | The required block replication for the file. If empty, the server default is used. This value is applied only during file creation. |
Block Size (Bytes) | Long | The data block size. If empty, the server default is used. This value is applied only during file creation. |
Configuration Files | String | The HDFS configuration files to use when connecting to the HDFS file system. For example, use the standard file, core-defaults.xml. |
Use the settings in this tab to enable this operator or adapter for runtime start and stop conditions in a multi-node cluster. During initial development of the fragment that contains this operator or adapter, and for maximum compatibility with releases before 10.5.0, leave the Cluster start policy control in its default setting, Start with module.
Cluster awareness is an advanced topic that requires an understanding of StreamBase Runtime architecture features, including clusters, quorums, availability zones, and partitions. See Cluster Awareness Tab Settings on the Using Cluster Awareness page for instructions on configuring this tab.
Use the Concurrency tab to specify parallel regions for this instance of this component, or multiplicity options, or both. The Concurrency tab settings are described in Concurrency Options, and dispatch styles are described in Dispatch Styles.
Caution
Concurrency settings are not suitable for every application, and using these settings requires a thorough analysis of your application. For details, see Execution Order and Concurrency, which includes important guidelines for using the concurrency options.
The XML file emitted from this adapter takes the following form for each input tuple:
<tuple>
    <FieldName1>FieldValue1</FieldName1>
    <FieldName2>FieldValue2</FieldName2>
    <FieldName3>FieldValue3</FieldName3>
    ...
</tuple>
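The per-tuple form above can be produced by a short serialization sketch. The function name and the sample field names are assumptions for illustration; each input tuple becomes one <tuple> element with one child element per field.

```python
from xml.sax.saxutils import escape

# Illustrative sketch: serialize a tuple (modeled here as a dict of
# field name -> value) into the per-tuple XML form shown above.
def tuple_to_xml(fields: dict) -> str:
    parts = [f"<{name}>{escape(str(value))}</{name}>"
             for name, value in fields.items()]
    return "<tuple>" + "".join(parts) + "</tuple>"

print(tuple_to_xml({"symbol": "IBM", "price": 42.5}))
# <tuple><symbol>IBM</symbol><price>42.5</price></tuple>
```

Because there is no enclosing root node, each such <tuple> element must be treated as a separate XML document when the output file is read back.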
Typechecking fails if the input tuple includes fields of type list or tuple, or if any of the following optional parameters is specified and does not meet its condition:
- Max File Size must be greater than or equal to 0.
- Flush Interval must be greater than or equal to 0.
- Buffer Size must be greater than or equal to 0.
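The parameter checks above amount to a simple validation pass. The function below is a hypothetical sketch of that logic, not the adapter's typechecker:

```python
# Illustrative sketch of the typecheck conditions: each optional numeric
# parameter, if specified (not None), must be >= 0.
def typecheck_params(max_file_size=None, flush_interval=None, buffer_size=None):
    for name, value in (("Max File Size", max_file_size),
                        ("Flush Interval", flush_interval),
                        ("Buffer Size", buffer_size)):
        if value is not None and value < 0:
            raise ValueError(f"{name} must be greater than or equal to 0")

typecheck_params(max_file_size=0, flush_interval=100)  # passes
```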
On suspend, all tuples are written to disk and the currently open XML file is closed. On resumption, caching is restarted. The first log message occurs at resume time + Flush Interval.