Hazelcast Jet Connectors

Application Log Sample code on Github

Source: No | Sink: Yes | Batch

Logs all the data items it receives at the INFO level. It is intended mainly for development, when running Jet on a local machine.
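
For illustration, a minimal sketch of the logger sink, assuming the Jet 3.x Pipeline API (drawFrom/drainTo); the map name "myMap" is a placeholder for any source:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class LoggerSinkExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.map("myMap"))   // any source will do; "myMap" is a placeholder
             .drainTo(Sinks.logger());         // logs each item's toString() at INFO level
            jet.newJob(p).join();
            Jet.shutdownAll();
        }
    }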

Custom Connector Sample code on Github Docs

Source: Yes | Sink: Yes | Batch & Streaming

Jet provides a programming interface that allows you to write your own connectors for both batch and streaming jobs.
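
As a rough sketch of that interface, here is a custom batch source built with SourceBuilder (available since Jet 0.7), reading lines from a hypothetical local file input.txt; SinkBuilder works analogously for custom sinks:

    import com.hazelcast.jet.pipeline.BatchSource;
    import com.hazelcast.jet.pipeline.SourceBuilder;
    import java.io.BufferedReader;
    import java.io.FileReader;

    public class CustomSourceExample {
        // A custom batch source that emits the lines of a local file, one by one.
        static BatchSource<String> lineSource() {
            return SourceBuilder
                    .batch("line-source", ctx -> new BufferedReader(new FileReader("input.txt")))
                    .<String>fillBufferFn((reader, buf) -> {
                        String line = reader.readLine();
                        if (line != null) {
                            buf.add(line);   // emit one item per call
                        } else {
                            buf.close();     // no more data: complete the batch source
                        }
                    })
                    .destroyFn(BufferedReader::close)
                    .build();
        }
    }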

File watcher (Streaming) Sample code on Github Docs

Source: Yes | Sink: Yes | Streaming

Watches a directory and streams new lines of text appended to the files in that directory. The same directory has to be available to all cluster members (for example, via a shared network file system).
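
A minimal sketch, assuming the Jet 3.x Pipeline API; the watched directory /var/log/app is a placeholder and must exist on every member:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class FileWatcherExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.fileWatcher("/var/log/app"))  // emits new lines appended to files
             .withoutTimestamps()                            // no event-time processing in this sketch
             .drainTo(Sinks.logger());
            jet.newJob(p).join();                            // streaming job: runs until cancelled
        }
    }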

Files (Batch) Sample code on Github Docs

Source: Yes | Sink: Yes | Batch

Reads all the files in a local directory. The same directory has to be available to all cluster members (for example, via a shared network file system). The sink writes output to several files in the configured directory so that multiple parallel instances don't contend for the same file.
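
A minimal sketch of the batch file source and sink, assuming the Jet 3.x Pipeline API; the input and output directory names are placeholders:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class FilesBatchExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.files("/data/input"))   // reads every file in the directory, line by line
             .drainTo(Sinks.files("/data/output"));    // each parallel writer creates its own file here
            jet.newJob(p).join();
            Jet.shutdownAll();
        }
    }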

Hazelcast ICache (Batch) Sample code on Github Docs

Source: Yes | Sink: Yes | Batch

Fetches entries from a Hazelcast ICache (source). Supports predicate and projection pushdown. The connector makes use of data locality when reading from an embedded Hazelcast IMDG. The sink writes entries to an ICache using cache.put(), or uses an EntryProcessor to update entries in place instead of replacing them.
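
A minimal sketch, assuming the Jet 3.x Pipeline API, that copies entries from one ICache to another; the cache names are placeholders and the predicate/projection options are omitted for brevity:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class CacheBatchExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.cache("sourceCache"))   // emits Map.Entry<K, V> for every cache entry
             .drainTo(Sinks.cache("sinkCache"));       // puts each entry into the target ICache
            jet.newJob(p).join();
            Jet.shutdownAll();
        }
    }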

Hazelcast ICache (Streaming) Sample code on Github Docs

Source: Yes | Sink: Yes | Streaming

Reads the change stream (event journal) of a Hazelcast ICache. Supports predicate and projection pushdown. The connector makes use of data locality when reading from an embedded Hazelcast IMDG.
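
A minimal sketch, assuming the Jet 3.x Pipeline API; it also assumes the cache named sourceCache has its event journal enabled in the Hazelcast configuration:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;
    import static com.hazelcast.jet.pipeline.JournalInitialPosition.START_FROM_OLDEST;

    public class CacheJournalExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.cacheJournal("sourceCache", START_FROM_OLDEST))  // stream of cache updates
             .withoutTimestamps()
             .drainTo(Sinks.logger());
            jet.newJob(p).join();   // streaming job: runs until cancelled
        }
    }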

Hazelcast IList Sample code on Github Docs

Source: Yes | Sink: Yes | Batch

Reads items from a Hazelcast IList. All items are read on a single member of the Jet cluster, since IList isn't partitioned. The sink adds items to the IList.
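
A minimal sketch, assuming the Jet 3.x Pipeline API, that copies one IList into another; the list names are placeholders:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class ListExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.list("inputList"))   // read on a single member: IList isn't partitioned
             .drainTo(Sinks.list("outputList"));    // append each item to the target IList
            jet.newJob(p).join();
            Jet.shutdownAll();
        }
    }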

Hazelcast IMap (Batch) Sample code on Github Docs

Source: Yes | Sink: Yes | Batch

Fetches entries from a Hazelcast IMap (source). Supports predicate and projection pushdown. The connector makes use of data locality when reading from an embedded Hazelcast IMDG. The sink writes entries to an IMap using map.put(), or uses an EntryProcessor to update entries in place instead of replacing them.
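
A minimal sketch, assuming the Jet 3.x Pipeline API, that copies entries from one IMap to another; the map names are placeholders and the predicate/projection overloads of Sources.map() are omitted for brevity:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class MapBatchExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.map("sourceMap"))   // emits Map.Entry<K, V> for every map entry
             .drainTo(Sinks.map("sinkMap"));       // puts each entry into the target IMap
            jet.newJob(p).join();
            Jet.shutdownAll();
        }
    }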

Hazelcast IMap (Streaming) Sample code on Github Docs

Source: Yes | Sink: Yes | Streaming

Reads the change stream (event journal) of a Hazelcast IMap. Supports predicate and projection pushdown. The connector makes use of data locality when reading from an embedded Hazelcast IMDG.
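
A minimal sketch, assuming the Jet 3.x Pipeline API; it also assumes the map named sourceMap has its event journal enabled in the Hazelcast configuration:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;
    import static com.hazelcast.jet.pipeline.JournalInitialPosition.START_FROM_OLDEST;

    public class MapJournalExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.mapJournal("sourceMap", START_FROM_OLDEST))  // stream of map updates
             .withoutTimestamps()
             .drainTo(Sinks.logger());
            jet.newJob(p).join();   // streaming job: runs until cancelled
        }
    }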

HDFS Sample code on Github Docs

Source: Yes | Sink: Yes | Batch

Reads from and writes to Apache Hadoop HDFS. Reading makes use of data locality if the Jet and Hadoop clusters are co-located.
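
A rough sketch, assuming the HdfsSources/HdfsSinks helpers of the Jet 3.x hazelcast-jet-hadoop module; the HDFS paths, the NameNode address, and the input/output formats are placeholders:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.hadoop.HdfsSinks;
    import com.hazelcast.jet.hadoop.HdfsSources;
    import com.hazelcast.jet.pipeline.Pipeline;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;

    public class HdfsCopyExample {
        public static void main(String[] args) {
            JobConf jobConf = new JobConf();
            jobConf.setInputFormat(TextInputFormat.class);
            jobConf.setOutputFormat(TextOutputFormat.class);
            FileInputFormat.addInputPath(jobConf, new Path("hdfs://namenode:8020/input"));
            FileOutputFormat.setOutputPath(jobConf, new Path("hdfs://namenode:8020/output"));

            Pipeline p = Pipeline.create();
            p.drawFrom(HdfsSources.hdfs(jobConf))   // emits key-value records from the input files
             .drainTo(HdfsSinks.hdfs(jobConf));     // writes key-value records to the output path

            JetInstance jet = Jet.newJetInstance();
            jet.newJob(p).join();
            Jet.shutdownAll();
        }
    }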

Kafka Sample code on Github Docs

Source: Yes | Sink: Yes | Streaming

Consumes one or more Apache Kafka topics. The reader is based on the Kafka consumer API; Jet assigns Kafka partitions evenly to the reader instances to align the parallelism of Kafka and Jet. The sink publishes messages to an Apache Kafka topic using the Kafka producer API.
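
A minimal sketch of the Kafka source, assuming the Jet 3.x Pipeline API and the hazelcast-jet-kafka module; the broker address, topic name, and consumer group are placeholders, and the matching KafkaSinks.kafka(producerProps, topic) sink is used the same way on the drainTo side:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.kafka.KafkaSources;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import java.util.Properties;

    public class KafkaSourceExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "jet-consumer");
            props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.setProperty("auto.offset.reset", "earliest");

            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(KafkaSources.kafka(props, "myTopic"))  // emits a Map.Entry per Kafka record
             .withoutTimestamps()
             .drainTo(Sinks.logger());
            jet.newJob(p).join();   // streaming job: runs until cancelled
        }
    }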

TCP Socket Sample code on Github Docs

Source: Yes | Sink: Yes | Streaming

Connects to the specified TCP socket and either reads lines of text from it (source) or writes lines of text to it (sink).
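
A minimal sketch, assuming the Jet 3.x Pipeline API, that reads lines from a socket server on port 5252 and writes them out to another socket; the host names and ports are placeholders:

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;
    import java.nio.charset.StandardCharsets;

    public class SocketExample {
        public static void main(String[] args) {
            JetInstance jet = Jet.newJetInstance();
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.socket("localhost", 5252, StandardCharsets.UTF_8))  // one item per line of text
             .withoutTimestamps()
             .drainTo(Sinks.socket("localhost", 5253));   // writes each item back out as a line of text
            jet.newJob(p).join();   // streaming job: runs until cancelled
        }
    }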
