This is an automated email from the ASF dual-hosted git repository.
mimaison pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new 444ceeb3250 MINOR: Tidy up the Connect docs (#20531)
444ceeb3250 is described below
commit 444ceeb325056037111fa5f3e983c3301ece172c
Author: Mickael Maison <[email protected]>
AuthorDate: Thu Sep 25 09:39:37 2025 +0200
MINOR: Tidy up the Connect docs (#20531)
Remove invalid mentions of default values for group.id,
config.storage.topic, offset.storage.topic, status.storage.topic
Reviewers: Luke Chen <[email protected]>, Ken Huang <[email protected]>
---
docs/connect.html | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/docs/connect.html b/docs/connect.html
index aa3c2af3ea9..85571bf115c 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -47,7 +47,7 @@
<li><code>bootstrap.servers</code> - List of Kafka servers used to
bootstrap connections to Kafka</li>
<li><code>key.converter</code> - Converter class used to convert
between Kafka Connect format and the serialized form that is written to Kafka.
This controls the format of the keys in messages written to or read from Kafka,
and since this is independent of connectors it allows any connector to work
with any serialization format. Examples of common formats include JSON and
Avro.</li>
<li><code>value.converter</code> - Converter class used to convert
between Kafka Connect format and the serialized form that is written to Kafka.
This controls the format of the values in messages written to or read from
Kafka, and since this is independent of connectors it allows any connector to
work with any serialization format. Examples of common formats include JSON and
Avro.</li>
- <li><code>plugin.path</code> (default <code>empty</code>) - a list of
paths that contain Connect plugins (connectors, converters, transformations).
Before running quick starts, users must add the absolute path that contains the
example FileStreamSourceConnector and FileStreamSinkConnector packaged in
<code>connect-file-"version".jar</code>, because these connectors are not
included by default to the <code>CLASSPATH</code> or the
<code>plugin.path</code> of the Connect worker (see [...]
+ <li><code>plugin.path</code> (default <code>null</code>) - a list of
paths that contain Connect plugins (connectors, converters, transformations).
Before running quick starts, users must add the absolute path that contains the
example FileStreamSourceConnector and FileStreamSinkConnector packaged in
<code>connect-file-{{fullDotVersion}}.jar</code>, because these connectors are
not included by default to the <code>CLASSPATH</code> or the
<code>plugin.path</code> of the Connect wor [...]
</ul>
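The common worker settings listed above might look like the following sketch of a worker properties file. All values here are illustrative placeholders, not values taken from this commit: the `JsonConverter` class is the stock converter shipped with Kafka, and the `plugin.path` directory is made up.

```properties
# Illustrative worker configuration (values are placeholders)
bootstrap.servers=localhost:9092

# Converters control the serialized form of keys and values on the wire
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Absolute path(s) containing Connect plugins, e.g. the directory holding
# the example FileStream connectors' jar
plugin.path=/opt/connect-plugins
```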
<p>The important configuration options specific to standalone mode are:</p>
@@ -57,7 +57,7 @@
<p>The parameters that are configured here are intended for producers and
consumers used by Kafka Connect to access the configuration, offset and status
topics. For configuration of the producers used by Kafka source tasks and the
consumers used by Kafka sink tasks, the same parameters can be used but need to
be prefixed with <code>producer.</code> and <code>consumer.</code>
respectively. The only Kafka client parameter that is inherited without a
prefix from the worker configuration [...]
- <p>Starting with 2.3.0, client configuration overrides can be configured
individually per connector by using the prefixes
<code>producer.override.</code> and <code>consumer.override.</code> for Kafka
sources or Kafka sinks respectively. These overrides are included with the rest
of the connector's configuration properties.</p>
+ <p>Client configuration overrides can be configured individually per
connector by using the prefixes <code>producer.override.</code> and
<code>consumer.override.</code> for Kafka sources or Kafka sinks respectively.
These overrides are included with the rest of the connector's configuration
properties.</p>
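As a hedged sketch of the override prefixes described above, a connector's configuration might include lines like the following. The specific client settings (`compression.type`, `max.poll.records`) are standard Kafka producer/consumer options chosen here purely for illustration:

```properties
# Per-connector client overrides, included alongside the connector's
# own configuration properties (illustrative):
# - for a source connector, override its producer:
producer.override.compression.type=lz4
# - for a sink connector, override its consumer:
consumer.override.max.poll.records=250
```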
<p>The remaining parameters are connector configuration files. Each file
may either be a Java Properties file or a JSON file containing an object with
the same structure as the request body of either the <code>POST
/connectors</code> endpoint or the <code>PUT /connectors/{name}/config</code>
endpoint (see the <a href="/{{version}}/generated/connect_rest.yaml">OpenAPI
documentation</a>). You may include as many as you want, but all will execute
within the same process (on different th [...]
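A connector configuration file in the JSON form mentioned above might look like this sketch, mirroring the request body of `POST /connectors`. The connector name, file, and topic are placeholder values; `FileStreamSource` is the example connector referenced earlier in the docs:

```json
{
  "name": "local-file-source",
  "config": {
    "connector.class": "FileStreamSource",
    "tasks.max": "1",
    "file": "/tmp/test.txt",
    "topic": "connect-test"
  }
}
```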
@@ -69,10 +69,10 @@
<p>In particular, the following configuration parameters, in addition to
the common settings mentioned above, are critical to set before starting your
cluster:</p>
<ul>
- <li><code>group.id</code> (default <code>connect-cluster</code>) -
unique name for the cluster, used in forming the Connect cluster group; note
that this <b>must not conflict</b> with consumer group IDs</li>
- <li><code>config.storage.topic</code> (default
<code>connect-configs</code>) - topic to use for storing connector and task
configurations; note that this should be a single partition, highly replicated,
compacted topic. You may need to manually create the topic to ensure the
correct configuration as auto created topics may have multiple partitions or be
automatically configured for deletion rather than compaction</li>
- <li><code>offset.storage.topic</code> (default
<code>connect-offsets</code>) - topic to use for storing offsets; this topic
should have many partitions, be replicated, and be configured for
compaction</li>
- <li><code>status.storage.topic</code> (default
<code>connect-status</code>) - topic to use for storing statuses; this topic
can have multiple partitions, and should be replicated and configured for
compaction</li>
+ <li><code>group.id</code> - Unique name for the cluster, used in
forming the Connect cluster group; note that this <b>must not conflict</b> with
consumer group IDs</li>
+ <li><code>config.storage.topic</code> - Name for the topic to use for
storing connector and task configurations; this topic should have a single
partition, be replicated, and be configured for compaction</li>
+ <li><code>offset.storage.topic</code> - Name for the topic to use for
storing offsets; this topic should have many partitions, be replicated, and be
configured for compaction</li>
+ <li><code>status.storage.topic</code> - Name for the topic to use for
storing statuses; this topic can have multiple partitions, be replicated, and
be configured for compaction</li>
</ul>
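Pulling the distributed-mode parameters above together, a worker file might contain lines like the following sketch. The group and topic names are placeholders (since, per this commit, they have no defaults and must be chosen by the operator), and the replication/partition settings shown are standard Connect worker properties included only to illustrate the topic guidance above:

```properties
# Illustrative distributed-mode settings (all values are placeholders)
group.id=my-connect-cluster
config.storage.topic=connect-configs      # single partition, compacted
offset.storage.topic=connect-offsets     # many partitions, compacted
status.storage.topic=connect-status      # may have multiple partitions, compacted
```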
<p>Note that in distributed mode the connector configurations are not
passed on the command line. Instead, use the REST API described below to
create, modify, and destroy connectors.</p>