Kafka
If you’re new to Unstructured, read this note first.
Before you can create a destination connector, you must first sign in to your Unstructured account:
- If you do not already have an Unstructured account, go to https://unstructured.io/contact and fill out the online form to indicate your interest.
- If you already have an Unstructured account, go to https://platform.unstructured.io and sign in by using the email address, Google account, or GitHub account that is associated with your Unstructured account.
After you sign in, the Unstructured user interface (UI) appears, which you use to get your Unstructured API key. To learn how, watch this 40-second how-to video.
After you create the destination connector, add it along with a source connector to a workflow. Then run the workflow as a job. To learn how, try out the hands-on Workflow Endpoint quickstart, go directly to the quickstart notebook, or watch the two 4-minute video tutorials for the Unstructured Python SDK.
You can also create destination connectors with the Unstructured user interface (UI). Learn how.
If you need help, reach out to the community on Slack, or contact us directly.
You are now ready to start creating a destination connector! Keep reading to learn how.
Send processed data from Unstructured to Kafka.
The requirements are as follows:

- A Kafka cluster in Confluent Cloud. (Create a cluster.) The following video shows how to set up a Kafka cluster in Confluent Cloud:
- The hostname and port number of the bootstrap Kafka cluster to connect to.
- The name of the topic to read messages from or write messages to on the cluster. Create a topic. Access available topics.
- For authentication, an API key and secret.
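Taken together, these requirements map onto a standard Kafka client configuration. The following is a minimal sketch, assuming a Confluent Cloud-style SASL/PLAIN connection over TLS; the hostname, API key, and secret shown are hypothetical placeholders, not values from this page:

```python
# Sketch: express the connection settings above as the configuration
# dictionary you would hand to a Kafka client such as confluent-kafka.
# All values below are illustrative placeholders.

def kafka_client_config(bootstrap_server: str, port: int,
                        api_key: str, secret: str) -> dict:
    """Build a Confluent Cloud-style client config (SASL/PLAIN over TLS)."""
    return {
        "bootstrap.servers": f"{bootstrap_server}:{port}",
        "security.protocol": "SASL_SSL",  # Confluent Cloud requires TLS
        "sasl.mechanisms": "PLAIN",       # API key/secret map to SASL/PLAIN
        "sasl.username": api_key,         # the API key
        "sasl.password": secret,          # the API secret
    }

config = kafka_client_config(
    "pkc-xxxxx.us-east-1.aws.confluent.cloud",  # hypothetical bootstrap host
    9092, "MY_API_KEY", "MY_API_SECRET")
print(config["bootstrap.servers"])
```

The same four pieces of information (bootstrap host, port, API key, and secret) are what the connector settings below ask for.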
To create a Kafka destination connector, see the following examples.
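As one illustrative sketch of such a request, the connector settings can be assembled into a JSON payload for the Unstructured Workflow Endpoint. The endpoint URL, connector type string, and config field names below are assumptions, not verbatim from this page; consult the Unstructured API reference for the authoritative schema.

```python
import json

# Hypothetical sketch: build a create-destination request body for a Kafka
# destination connector. Field names are assumptions; check the Unstructured
# API reference for the exact schema.
def kafka_destination_payload(name, bootstrap_server, port, topic,
                              kafka_api_key, secret, batch_size=100):
    return {
        "name": name,                # <name>: unique connector name
        "type": "kafka-cloud",       # assumed connector type identifier
        "config": {
            "bootstrap_servers": bootstrap_server,  # <bootstrap-server>
            "port": port,                           # <port>, default 9092
            "topic": topic,                         # <topic>
            "kafka_api_key": kafka_api_key,         # <kafka-api-key>
            "secret": secret,                       # <secret>
            "batch_size": batch_size,               # <batch-size>, default 100
        },
    }

payload = kafka_destination_payload(
    "my-kafka-destination",
    "pkc-xxxxx.us-east-1.aws.confluent.cloud",  # hypothetical host
    9092, "my-topic", "MY_API_KEY", "MY_API_SECRET")
print(json.dumps(payload, indent=2))

# Sending it would look roughly like this (hypothetical endpoint; requires
# your Unstructured API key):
# requests.post("https://platform.unstructuredapp.io/api/v1/destinations/",
#               headers={"unstructured-api-key": "<api-key>"}, json=payload)
```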
Replace the preceding placeholders as follows:
- <name> (required) - A unique name for this connector.
- <bootstrap-server> - The hostname of the bootstrap Kafka cluster to connect to.
- <port> - The port number of the bootstrap Kafka cluster to connect to. The default is 9092 if not otherwise specified.
- <group-id> - The ID of the consumer group. A consumer group is a way to allow a pool of consumers to divide the consumption of data over topics and partitions. The default is default_group_id if not otherwise specified.
- <kafka-api-key> - For authentication, the API key for access to the cluster.
- <secret> - For authentication, the secret for access to the cluster.
- <topic> - The name of the topic to read messages from or write messages to on the cluster.
- <batch-size> (destination connector only) - The maximum number of messages to send in a single batch. The default is 100 if not otherwise specified.
- <num-messages-to-consume> (source connector only) - The maximum number of messages that the consumer will try to consume. The default is 100 if not otherwise specified.
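To make the batch-size setting concrete, here is a small illustrative sketch (not the connector's actual implementation) of what a maximum batch size of 100 means when writing messages to a topic:

```python
# Illustrative only: group messages into batches of at most batch_size
# before each send, as the <batch-size> setting describes.

def batches(messages, batch_size=100):  # 100 is the documented default
    """Yield successive slices of at most batch_size messages."""
    for i in range(0, len(messages), batch_size):
        yield messages[i:i + batch_size]

msgs = [f"element-{n}" for n in range(250)]
sizes = [len(b) for b in batches(msgs, 100)]
print(sizes)  # -> [100, 100, 50]: two full batches, then the remainder
```

A larger batch size means fewer, bigger sends per run; the symmetric source-side setting, num-messages-to-consume, caps how many messages a single run will read.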