
Apache Kafka

Publish messages to Apache Kafka topics for event streaming and data pipelines. Use SignalSmith to stream model results, audience membership changes, or enriched records into Kafka for real-time downstream processing.

Prerequisites

  • An Apache Kafka cluster (self-managed or a hosted service like Confluent Cloud, Amazon MSK, or Aiven)
  • A Kafka topic to publish messages to
  • Network connectivity between SignalSmith and the Kafka bootstrap servers
  • SASL credentials (if authentication is enabled)

Authentication

SignalSmith supports three authentication methods for Kafka:

SASL/PLAIN

  1. Enter the Username and Password for SASL/PLAIN authentication in SignalSmith

SASL/SCRAM

  1. Enter the Username and Password for SASL/SCRAM-SHA-256 or SCRAM-SHA-512 authentication in SignalSmith

No Authentication

  1. Select No Authentication for development clusters or internal clusters that do not require credentials

Configuration

  • Bootstrap Servers (Text, required): Comma-separated list of Kafka broker addresses in host:port format (e.g., broker1:9092,broker2:9092)
  • Use TLS (Toggle, optional): Enable TLS encryption for broker connections. Default: On
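The connection fields above correspond to standard Kafka client settings. As a hedged illustration only (shown with kafka-python's parameter names; the client SignalSmith uses internally is not documented here), a TLS + SASL setup maps roughly like this:

```python
# Illustrative mapping of SignalSmith's connection fields onto common
# Kafka client settings (kafka-python naming; values are placeholders).
producer_config = {
    "bootstrap_servers": ["broker1:9092", "broker2:9092"],  # Bootstrap Servers
    "security_protocol": "SASL_SSL",   # Use TLS on, with SASL authentication
    "sasl_mechanism": "PLAIN",         # or "SCRAM-SHA-256" / "SCRAM-SHA-512"
    "sasl_plain_username": "your-username",
    "sasl_plain_password": "your-password",
}
```

With Use TLS on and no authentication, the equivalent security protocol would be SSL; with neither, PLAINTEXT.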

Target Settings

  • Topic (Text, required): The Kafka topic to publish messages to (e.g., my-events-topic)
  • Partition Key Field (Text, optional): Row field to use as the Kafka message key for partitioning. Defaults to the primary key if not specified

Supported Operations

Sync Modes: Insert

Audience Sync Modes: Add

Features

  • Field Mapping: No. Kafka publishes the full record as a JSON message
  • Schema Introspection: No. Kafka topics are schemaless from the producer's perspective

How It Works

SignalSmith publishes each row as a JSON message to the configured Kafka topic:

  1. Each row is serialized as a JSON object
  2. The message key is set from the configured partition key field (or the primary key)
  3. Messages are published to the Kafka topic with at-least-once delivery semantics
  4. Kafka handles partitioning based on the message key

The partition key field determines which partition receives each message. Records with the same key are always sent to the same partition, preserving ordering per key.
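
Kafka's default partitioner derives the partition deterministically from a hash of the key (the Java client uses murmur2). As a simplified sketch of the principle, with crc32 standing in for the real hash:

```python
from zlib import crc32

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition index.
    Kafka's Java client hashes with murmur2; crc32 here is a stand-in
    to show that the mapping is deterministic per key."""
    return crc32(key) % num_partitions

p1 = choose_partition(b"user-123", 6)
p2 = choose_partition(b"user-123", 6)
assert p1 == p2  # same key always lands on the same partition
```

Note that changing the number of partitions on a topic changes this mapping, so per-key ordering guarantees only hold within a fixed partition count.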

Troubleshooting

Connection failed

Verify the bootstrap servers are correct and reachable from SignalSmith; the client needs at least one listed broker to be up to bootstrap the connection. For cloud-hosted Kafka, check that the connection endpoint is the external/public listener, not the internal one.
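
A quick TCP-level reachability check can help isolate network issues (a diagnostic sketch only; it confirms connectivity, not Kafka protocol health or authentication):

```python
import socket

def broker_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False
```

Run this for each bootstrap server, e.g. `broker_reachable("broker1", 9092)`; if it fails, the problem is network or DNS rather than Kafka configuration.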

Authentication failed

For SASL/PLAIN and SASL/SCRAM, verify the username and password. Ensure the authentication mechanism matches what the Kafka cluster expects. Confluent Cloud typically uses SASL/PLAIN; Amazon MSK may use SASL/SCRAM.

Topic not found

Kafka can auto-create topics if auto.create.topics.enable is set to true on the broker. If auto-creation is disabled, create the topic manually before syncing.

TLS handshake failed

If the cluster requires TLS, enable the Use TLS toggle. For self-signed certificates, you may need to contact support for custom CA configuration.

Messages not appearing in topic

Verify the topic name is correct (topic names are case-sensitive). Check consumer group offsets to ensure consumers are reading from the correct offset.