This is an automated email from the ASF dual-hosted git repository.

schofielaj pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new 188a910d159 KAFKA-20239: Document Kafka ACLS required for Connect 
(#21615)
188a910d159 is described below

commit 188a910d1592b0010bea03bbc0cf7e3f4490b05f
Author: Dale Lane <[email protected]>
AuthorDate: Wed Mar 4 14:35:56 2026 +0000

    KAFKA-20239: Document Kafka ACLS required for Connect (#21615)
    
    Adding a list of permissions needed by Connect
    
    I've tried to copy the style and wording in the ACL requirements table
    already there for Exactly-once support.
    
    https://issues.apache.org/jira/browse/KAFKA-20239
    
    Reviewers: Andrew Schofield <[email protected]>
    
    ---------
    
    Signed-off-by: Dale Lane <[email protected]>
---
 docs/kafka-connect/user-guide.md | 362 ++++++++++++++++++---------------------
 1 file changed, 167 insertions(+), 195 deletions(-)

diff --git a/docs/kafka-connect/user-guide.md b/docs/kafka-connect/user-guide.md
index 64b6af9f78c..24c5fe71579 100644
--- a/docs/kafka-connect/user-guide.md
+++ b/docs/kafka-connect/user-guide.md
@@ -425,7 +425,7 @@ Kafka Connect is capable of providing exactly-once 
semantics for sink connectors
 
 ### Sink connectors
 
-If a sink connector supports exactly-once semantics, to enable exactly-once at 
the Connect worker level, you must ensure its consumer group is configured to 
ignore records in aborted transactions. You can do this by setting the worker 
property `consumer.isolation.level` to `read_committed` or, if running a 
version of Kafka Connect that supports it, using a connector client config 
override policy that allows the `consumer.override.isolation.level` property to 
be set to `read_committed` in [...]
+If a sink connector supports exactly-once semantics, to enable exactly-once at 
the Connect worker level, you must ensure its consumer group is configured to 
ignore records in aborted transactions. You can do this by setting the worker 
property `consumer.isolation.level` to `read_committed` or, if running a 
version of Kafka Connect that supports it, using a connector client config 
override policy that allows the `consumer.override.isolation.level` property to 
be set to `read_committed` in [...]
 
 ### Source connectors
 
@@ -435,281 +435,253 @@ If a source connector supports exactly-once semantics, 
you must configure your C
 
 For new Connect clusters, set the `exactly.once.source.support` property to 
`enabled` in the worker config for each node in the cluster. For existing 
clusters, two rolling upgrades are necessary. During the first upgrade, the 
`exactly.once.source.support` property should be set to `preparing`, and during 
the second, it should be set to `enabled`.
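The two-step upgrade above amounts to changing a single worker property; a sketch only, assuming the distributed worker configuration file:

```properties
# First rolling upgrade of an existing cluster:
exactly.once.source.support=preparing

# Second rolling upgrade (new clusters can be set to enabled directly):
exactly.once.source.support=enabled
```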
 
-#### ACL requirements
+## Plugin Discovery
 
-With exactly-once source support enabled, or with 
`exactly.once.source.support` set to `preparing`, the principal for each 
Connect worker will require the following ACLs:  
-  
-<table>  
-<tr>  
-<th>
+Plugin discovery is the strategy that the Connect worker uses to find plugin 
classes and make them available to configure and run in connectors. It is 
controlled by the `plugin.discovery` worker configuration, and has a 
significant impact on worker startup time. `service_load` is the fastest 
strategy, but care should be taken to verify that plugins are compatible 
before setting this configuration to `service_load`.
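Once compatibility has been verified, opting in is a one-line change in the worker configuration; a sketch:

```properties
# Only set this after verifying that every installed plugin is compatible.
plugin.discovery=service_load
```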
 
-Operation
-</th>  
-<th>
+Prior to version 3.6, this strategy was not configurable, and behaved like the 
`only_scan` mode which is compatible with all plugins. For version 3.6 and 
later, this mode defaults to `hybrid_warn` which is also compatible with all 
plugins, but logs a warning for plugins which are incompatible with 
`service_load`. The `hybrid_fail` strategy stops the worker with an error if a 
plugin incompatible with `service_load` is detected, asserting that all plugins 
are compatible. Finally, the `serv [...]
 
-Resource Type
-</th>  
-<th>
+### Verifying Plugin Compatibility
 
-Resource Name
-</th>  
-<th>
+To verify that all of your plugins are compatible with `service_load`, first 
ensure that you are using version 3.6 or later of Kafka Connect. You can then 
perform one of the following checks:
 
-Note
-</th> </tr>  
-<tr>  
-<td>
+  * Start your worker with the default `hybrid_warn` strategy, and WARN logs 
enabled for the `org.apache.kafka.connect` package. At least one WARN log 
message mentioning the `plugin.discovery` configuration should be printed. This 
log message will explicitly say that all plugins are compatible, or list the 
incompatible plugins.
+  * Start your worker in a test environment with `hybrid_fail`. If all plugins 
are compatible, startup will succeed. If at least one plugin is not compatible 
the worker will fail to start up, and all incompatible plugins will be listed 
in the exception.
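The second check can be sketched as a one-line change in a disposable test environment (nothing else about the worker configuration is implied):

```properties
# Test-environment worker config only: fail startup if any plugin is
# incompatible with service_load, listing the offenders in the exception.
plugin.discovery=hybrid_fail
```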
 
-Write
-</td>  
-<td>
 
-TransactionalId
-</td>  
-<td>
 
-`connect-cluster-${groupId}`, where `${groupId}` is the `group.id` of the 
cluster
-</td>  
-<td>
+If the verification step succeeds, then your current set of installed plugins 
is compatible, and it should be safe to change the `plugin.discovery` 
configuration to `service_load`. If the verification fails, you cannot use the 
`service_load` strategy and should take note of the list of incompatible 
plugins. All incompatible plugins must be addressed before using the 
`service_load` strategy. It is recommended to perform this verification after 
installing or changing plugin versions, and the verification c [...]
 
+### Operators: Artifact Migration
 
-</td> </tr>  
-<tr>  
-<td>
+As an operator of Connect, if you discover incompatible plugins, there are 
multiple ways to resolve the incompatibility. They are listed below from most 
to least preferable.
 
-Describe
-</td>  
-<td>
+  1. Check the latest release from your plugin provider, and if it is 
compatible, upgrade.
+  2. Contact your plugin provider and request that they migrate the plugin to 
be compatible, following the source migration instructions, and then upgrade to 
the compatible version.
+  3. Migrate the plugin artifacts yourself using the included migration script.
 
-TransactionalId
-</td>  
-<td>
 
-`connect-cluster-${groupId}`, where `${groupId}` is the `group.id` of the 
cluster
-</td>  
-<td>
 
+The migration script is located in `bin/connect-plugin-path.sh` and 
`bin\windows\connect-plugin-path.bat` of your Kafka installation. The script 
can migrate incompatible plugin artifacts already installed on your Connect 
worker's `plugin.path` by adding or modifying JAR or resource files. This is 
not suitable for environments using code-signing, as this can change artifacts 
such that they will fail signature verification. View the built-in help with 
`--help`.
 
-</td> </tr>  
-<tr>  
-<td>
+To perform a migration, first use the `list` subcommand to get an overview of 
the plugins available to the script. You must tell the script where to find 
plugins, which can be done with the repeatable `--worker-config`, 
`--plugin-path`, and `--plugin-location` arguments. The script will ignore 
plugins on the classpath, so any custom plugins on your classpath should be 
moved to the plugin path in order to be used with this migration script, or 
migrated manually. Be sure to compare the out [...]
 
-IdempotentWrite
-</td>  
-<td>
+Once you see that all incompatible plugins are included in the listing, you 
can proceed to dry-run the migration with `sync-manifests --dry-run`. This will 
perform all parts of the migration, except for writing the results of the 
migration to disk. Note that the `sync-manifests` command requires all 
specified paths to be writable, and may alter the contents of the directories. 
Make a backup of your plugins in the specified paths, or copy them to a 
writable directory.
 
-Cluster
-</td>  
-<td>
+Ensure that you have a backup of your plugins and the dry-run succeeds before 
removing the `--dry-run` flag and actually running the migration. If the 
migration fails without the `--dry-run` flag, then the partially migrated 
artifacts should be discarded. The migration is idempotent, so running it 
multiple times and on already-migrated plugins is safe. After the script 
finishes, you should verify the migration is complete. The migration script is 
suitable for use in a Continuous Integrat [...]
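The workflow described above can be outlined as a command sequence; paths and the backup location are placeholders, and the subcommands are those of the `connect-plugin-path` script noted earlier:

```shell
# 1. List plugins visible to the script, and note any marked incompatible.
bin/connect-plugin-path.sh list --plugin-path /opt/connect/plugins

# 2. Back up the plugin path, since sync-manifests may alter its contents.
cp -r /opt/connect/plugins /opt/connect/plugins-backup

# 3. Dry-run the migration; nothing is written to disk yet.
bin/connect-plugin-path.sh sync-manifests --dry-run --plugin-path /opt/connect/plugins

# 4. If the dry-run succeeds, run the real migration (idempotent, safe to re-run).
bin/connect-plugin-path.sh sync-manifests --plugin-path /opt/connect/plugins
```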
 
-ID of the Kafka cluster that hosts the worker's config topic
-</td>  
-<td>
+### Developers: Source Migration
 
-The IdempotentWrite ACL has been deprecated as of 2.8 and will only be 
necessary for Connect clusters running on pre-2.8 Kafka clusters
-</td> </tr> </table>
+To make plugins compatible with `service_load`, it is necessary to add 
[ServiceLoader](https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html)
 manifests to your source code, which should then be packaged in the release 
artifact. Manifests are resource files in `META-INF/services/` named after 
their superclass type, and contain a list of fully-qualified subclass names, 
one on each line.
 
-And with exactly-once source enabled (but not if `exactly.once.source.support` 
is set to `preparing`), the principal for each individual connector will 
require the following ACLs:  
-  
-<table>  
-<tr>  
-<th>
+In order for a plugin to be compatible, it must appear as a line in a manifest 
corresponding to the plugin superclass it extends. If a single plugin 
implements multiple plugin interfaces, then it should appear in a manifest for 
each interface it implements. If you have no classes for a certain type of 
plugin, you do not need to include a manifest file for that type. If you have 
classes which should not be visible as plugins, they should be marked abstract. 
The following types are expecte [...]
 
-Operation
-</th>  
-<th>
+  * `org.apache.kafka.connect.sink.SinkConnector`
+  * `org.apache.kafka.connect.source.SourceConnector`
+  * `org.apache.kafka.connect.storage.Converter`
+  * `org.apache.kafka.connect.storage.HeaderConverter`
+  * `org.apache.kafka.connect.transforms.Transformation`
+  * `org.apache.kafka.connect.transforms.predicates.Predicate`
+  * `org.apache.kafka.common.config.provider.ConfigProvider`
+  * `org.apache.kafka.connect.rest.ConnectRestExtension`
+  * 
`org.apache.kafka.connect.connector.policy.ConnectorClientConfigOverridePolicy`
 
-Resource Type
-</th>  
-<th>
 
-Resource Name
-</th>  
-<th>
 
-Note
-</th> </tr>  
-<tr>  
-<td>
+For example, if you only have one connector with the fully-qualified name 
`com.example.MySinkConnector`, then only one manifest file must be added to 
resources in `META-INF/services/org.apache.kafka.connect.sink.SinkConnector`, 
and the contents should be similar to the following:
+    
+    
+    # license header or comment
+    com.example.MySinkConnector
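As an illustration only (not part of Kafka), a small build-time script could emit manifests in this layout; the class name and output directory below are hypothetical placeholders:

```python
from pathlib import Path

# Map each Connect plugin interface to the implementing classes in this
# project. The class name and output directory are hypothetical.
manifests = {
    "org.apache.kafka.connect.sink.SinkConnector": ["com.example.MySinkConnector"],
}

services_dir = Path("build/resources/META-INF/services")
services_dir.mkdir(parents=True, exist_ok=True)

for interface, implementations in manifests.items():
    # One fully-qualified subclass name per line, as ServiceLoader expects.
    (services_dir / interface).write_text("\n".join(implementations) + "\n")

print(sorted(p.name for p in services_dir.iterdir()))
```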
 
-Write
-</td>  
-<td>
+You should then verify that your manifests are correct by using the 
verification steps with a pre-release artifact. If the verification succeeds, 
you can then release the plugin normally, and operators can upgrade to the 
compatible version.
 
-TransactionalId
-</td>  
-<td>
+## Security
 
-`${groupId}-${connector}-${taskId}`, for each task that the connector will 
create, where `${groupId}` is the `group.id` of the Connect cluster, 
`${connector}` is the name of the connector, and `${taskId}` is the ID of the 
task (starting from zero)
-</td>  
-<td>
+It's important to understand the security concerns inherent to Connect. First, 
Connect allows running custom plugins. These plugins can run arbitrary code, so 
you must trust them before installing them in your Connect clusters. By 
default, the REST API is unsecured and allows anyone who can access it to 
start and stop connectors. You should only expose the REST API directly to 
trusted users; otherwise it's easy to gain arbitrary code execution on Connect 
workers. By default, connectors  [...]
 
-A wildcard prefix of `${groupId}-${connector}*` can be used for convenience if 
there is no risk of conflict with other transactional IDs or if conflicts are 
acceptable to the user.
-</td> </tr>  
-<tr>  
-<td>
+### ACL requirements
 
-Describe
-</td>  
-<td>
+The principal for each Connect worker requires the following ACLs:
 
-TransactionalId
-</td>  
-<td>
+<table>
+<tr>
+<th>Operation</th>
+<th>Resource Type</th>
+<th>Resource Name</th>
+<th>Note</th>
+</tr>
+<tr><td>Read</td><td>Group</td><td>
 
-`${groupId}-${connector}-${taskId}`, for each task that the connector will 
create, where `${groupId}` is the `group.id` of the Connect cluster, 
`${connector}` is the name of the connector, and `${taskId}` is the ID of the 
task (starting from zero)
-</td>  
-<td>
+`group.id` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Read</td><td>Topic</td><td>
 
-A wildcard prefix of `${groupId}-${connector}*` can be used for convenience if 
there is no risk of conflict with other transactional IDs or if conflicts are 
acceptable to the user.
-</td> </tr>  
-<tr>  
-<td>
+`config.storage.topic` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Write</td><td>Topic</td><td>
 
-Write
-</td>  
-<td>
+`config.storage.topic` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Create</td><td>Topic</td><td>
 
-Topic
-</td>  
-<td>
+`config.storage.topic` of the Connect cluster
+</td><td>Only necessary if the config topic for Connect does not exist 
yet</td></tr>
+<tr><td>Read</td><td>Topic</td><td>
 
-Offsets topic used by the connector, which is either the value of the 
`offsets.storage.topic` property in the connector’s configuration if provided, 
or the value of the `offsets.storage.topic` property in the worker’s 
configuration if not.
-</td>  
-<td>
+`offset.storage.topic` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Write</td><td>Topic</td><td>
 
+`offset.storage.topic` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Create</td><td>Topic</td><td>
 
-</td> </tr>  
-<tr>  
-<td>
+`offset.storage.topic` of the Connect cluster
+</td><td>Only necessary if the offsets topic for Connect does not exist 
yet</td></tr>
+<tr><td>Read</td><td>Topic</td><td>
 
-Read
-</td>  
-<td>
+`status.storage.topic` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Write</td><td>Topic</td><td>
 
-Topic
-</td>  
-<td>
+`status.storage.topic` of the Connect cluster
+</td><td> </td></tr>
+<tr><td>Create</td><td>Topic</td><td>
 
-Offsets topic used by the connector, which is either the value of the 
`offsets.storage.topic` property in the connector’s configuration if provided, 
or the value of the `offsets.storage.topic` property in the worker’s 
configuration if not.
-</td>  
-<td>
+`status.storage.topic` of the Connect cluster
+</td><td>Only necessary if the status topic for Connect does not exist 
yet</td></tr>
+<tr><td>Write</td><td>TransactionalId</td><td>
 
+`connect-cluster-${groupId}`
 
-</td> </tr>  
-<tr>  
-<td>
+where `${groupId}` is the `group.id` of the cluster
+</td><td>
 
-Describe
-</td>  
-<td>
+Only necessary if [exactly-once support](#exactly-once-support) is enabled
+or if `exactly.once.source.support` is set to `preparing`.
+</td> </tr>
+<tr><td>Describe</td><td>TransactionalId</td><td>
 
-Topic
-</td>  
-<td>
+`connect-cluster-${groupId}`
 
-Offsets topic used by the connector, which is either the value of the 
`offsets.storage.topic` property in the connector’s configuration if provided, 
or the value of the `offsets.storage.topic` property in the worker’s 
configuration if not.
-</td>  
-<td>
+where `${groupId}` is the `group.id` of the cluster
+</td><td>
 
+Only necessary if [exactly-once support](#exactly-once-support) is enabled
+or if `exactly.once.source.support` is set to `preparing`.
+</td> </tr>
+<tr><td>IdempotentWrite</td><td>Cluster</td><td>ID of the Kafka cluster that 
hosts the worker's config topic</td>  <td>
 
-</td> </tr>  
-<tr>  
-<td>
+Only necessary if [exactly-once support](#exactly-once-support) is enabled
+or if `exactly.once.source.support` is set to `preparing`.
 
-Create
-</td>  
-<td>
+The IdempotentWrite ACL has been deprecated as of 2.8 and will only be 
necessary for Connect clusters running on pre-2.8 Kafka clusters.
+</td> </tr>
+</table>
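The rows above can be granted with the `kafka-acls.sh` tool; a sketch only, assuming a worker principal of `User:connect-worker`, a config storage topic named `connect-configs`, and a `group.id` of `connect-cluster`:

```shell
# Read/Write/Create on the config storage topic; repeat for the offsets and
# status storage topics used by your cluster.
bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
  --allow-principal User:connect-worker \
  --operation Read --operation Write --operation Create \
  --topic connect-configs

# Read on the worker group (the group.id of the Connect cluster).
bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
  --allow-principal User:connect-worker \
  --operation Read --group connect-cluster
```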
 
-Topic
-</td>  
-<td>
+To support source connectors, the principal for each individual connector 
requires the following ACLs:
 
-Offsets topic used by the connector, which is either the value of the 
`offsets.storage.topic` property in the connector’s configuration if provided, 
or the value of the `offsets.storage.topic` property in the worker’s 
configuration if not.
-</td>  
-<td>
+<table>
+<tr>
+<th>Operation</th>
+<th>Resource Type</th>
+<th>Resource Name</th>
+<th>Note</th>
+</tr>
+<tr><td>Write</td><td>Topic</td><td>topic(s) used as the destination</td><td> 
</td></tr>
+<tr><td>Create</td><td>Topic</td><td>topic(s) used as the destination</td><td>
 
-Only necessary if the offsets topic for the connector does not exist yet
-</td> </tr>  
-<tr>  
+Only necessary if `topic.creation.enable` is `true` and the topic(s) do not 
exist yet
+</td></tr>
+<tr><td>Describe</td><td>Topic</td>
 <td>
 
-IdempotentWrite
-</td>  
-<td>
+Offsets topic used by the connector
 
-Cluster
-</td>  
-<td>
+This is the value of the `offsets.storage.topic` property in the connector’s 
configuration if provided,
 
-ID of the Kafka cluster that the source connector writes to
-</td>  
+or the value of the `offsets.storage.topic` property in the worker’s 
configuration if not.
+</td>
 <td>
 
-The IdempotentWrite ACL has been deprecated as of 2.8 and will only be 
necessary for Connect clusters running on pre-2.8 Kafka clusters
-</td> </tr> </table>
+Only necessary if [exactly-once support](#exactly-once-support) is enabled.
+</td> </tr>
+<tr><td>Write</td><td>TransactionalId</td><td>
 
-## Plugin Discovery
+`${groupId}-${connector}-${taskId}`
 
-Plugin discovery is the name for the strategy which the Connect worker uses to 
find plugin classes and make them accessible to configure and run in 
connectors. This is controlled by the plugin.discovery worker configuration, 
and has a significant impact on worker startup time. `service_load` is the 
fastest strategy, but care should be taken to verify that plugins are 
compatible before setting this configuration to `service_load`.
+for each task that the connector will create, where
 
-Prior to version 3.6, this strategy was not configurable, and behaved like the 
`only_scan` mode which is compatible with all plugins. For version 3.6 and 
later, this mode defaults to `hybrid_warn` which is also compatible with all 
plugins, but logs a warning for plugins which are incompatible with 
`service_load`. The `hybrid_fail` strategy stops the worker with an error if a 
plugin incompatible with `service_load` is detected, asserting that all plugins 
are compatible. Finally, the `serv [...]
+`${groupId}` is the `group.id` of the Connect cluster
 
-### Verifying Plugin Compatibility
+`${connector}` is the name of the connector
 
-To verify if all of your plugins are compatible with `service_load`, first 
ensure that you are using version 3.6 or later of Kafka Connect. You can then 
perform one of the following checks:
+`${taskId}` is the ID of the task (starting from zero)
+</td><td>
 
-  * Start your worker with the default `hybrid_warn`strategy, and WARN logs 
enabled for the `org.apache.kafka.connect` package. At least one WARN log 
message mentioning the `plugin.discovery` configuration should be printed. This 
log message will explicitly say that all plugins are compatible, or list the 
incompatible plugins.
-  * Start your worker in a test environment with `hybrid_fail`. If all plugins 
are compatible, startup will succeed. If at least one plugin is not compatible 
the worker will fail to start up, and all incompatible plugins will be listed 
in the exception.
+Only necessary if [exactly-once support](#exactly-once-support) is enabled.
 
+A wildcard prefix of `${groupId}-${connector}*` can be used for convenience if 
there is no risk of conflict with other transactional IDs or if conflicts are 
acceptable to the user.
+</td></tr>
+<tr><td>Describe</td><td>TransactionalId</td><td>
 
+`${groupId}-${connector}-${taskId}`
 
-If the verification step succeeds, then your current set of installed plugins 
is compatible, and it should be safe to change the `plugin.discovery` 
configuration to `service_load`. If the verification fails, you cannot use 
`service_load` strategy and should take note of the list of incompatible 
plugins. All plugins must be addressed before using the `service_load` 
strategy. It is recommended to perform this verification after installing or 
changing plugin versions, and the verification c [...]
+for each task that the connector will create, where
 
-### Operators: Artifact Migration
+`${groupId}` is the `group.id` of the Connect cluster
 
-As an operator of Connect, if you discover incompatible plugins, there are 
multiple ways to resolve the incompatibility. They are listed below from most 
to least preferable.
-
-  1. Check the latest release from your plugin provider, and if it is 
compatible, upgrade.
-  2. Contact your plugin provider and request that they migrate the plugin to 
be compatible, following the source migration instructions, and then upgrade to 
the compatible version.
-  3. Migrate the plugin artifacts yourself using the included migration script.
+`${connector}` is the name of the connector
 
+`${taskId}` is the ID of the task (starting from zero)
+</td>
+<td>
 
+Only necessary if [exactly-once support](#exactly-once-support) is enabled.
 
-The migration script is located in `bin/connect-plugin-path.sh` and 
`bin\windows\connect-plugin-path.bat` of your Kafka installation. The script 
can migrate incompatible plugin artifacts already installed on your Connect 
worker's `plugin.path` by adding or modifying JAR or resource files. This is 
not suitable for environments using code-signing, as this can change artifacts 
such that they will fail signature verification. View the built-in help with 
`--help`.
+A wildcard prefix of `${groupId}-${connector}*` can be used for convenience if 
there is no risk of conflict with other transactional IDs or if conflicts are 
acceptable to the user.
+</td></tr>
+<tr><td>IdempotentWrite</td><td>Cluster</td><td>ID of the Kafka cluster that 
hosts the worker's config topic</td>  <td>
 
-To perform a migration, first use the `list` subcommand to get an overview of 
the plugins available to the script. You must tell the script where to find 
plugins, which can be done with the repeatable `--worker-config`, 
`--plugin-path`, and `--plugin-location` arguments. The script will ignore 
plugins on the classpath, so any custom plugins on your classpath should be 
moved to the plugin path in order to be used with this migration script, or 
migrated manually. Be sure to compare the out [...]
+Only necessary if [exactly-once support](#exactly-once-support) is enabled.
 
-Once you see that all incompatible plugins are included in the listing, you 
can proceed to dry-run the migration with `sync-manifests --dry-run`. This will 
perform all parts of the migration, except for writing the results of the 
migration to disk. Note that the `sync-manifests` command requires all 
specified paths to be writable, and may alter the contents of the directories. 
Make a backup of your plugins in the specified paths, or copy them to a 
writable directory.
+The IdempotentWrite ACL has been deprecated as of 2.8 and will only be 
necessary for Connect clusters running on pre-2.8 Kafka clusters.
+</td> </tr>
+</table>
 
-Ensure that you have a backup of your plugins and the dry-run succeeds before 
removing the `--dry-run` flag and actually running the migration. If the 
migration fails without the `--dry-run` flag, then the partially migrated 
artifacts should be discarded. The migration is idempotent, so running it 
multiple times and on already-migrated plugins is safe. After the script 
finishes, you should verify the migration is complete. The migration script is 
suitable for use in a Continuous Integrat [...]
+To support sink connectors, the principal for each individual connector 
requires the following ACLs:
 
-### Developers: Source Migration
+<table>
+<tr>
+<th>Operation</th>
+<th>Resource Type</th>
+<th>Resource Name</th>
+<th>Note</th>
+</tr>
+<tr><td>Read</td><td>Group</td><td>
 
-To make plugins compatible with `service_load`, it is necessary to add 
[ServiceLoader](https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html)
 manifests to your source code, which should then be packaged in the release 
artifact. Manifests are resource files in `META-INF/services/` named after 
their superclass type, and contain a list of fully-qualified subclass names, 
one on each line.
+`connect-${connector}`
 
-In order for a plugin to be compatible, it must appear as a line in a manifest 
corresponding to the plugin superclass it extends. If a single plugin 
implements multiple plugin interfaces, then it should appear in a manifest for 
each interface it implements. If you have no classes for a certain type of 
plugin, you do not need to include a manifest file for that type. If you have 
classes which should not be visible as plugins, they should be marked abstract. 
The following types are expecte [...]
+where `${connector}` is
 
-  * `org.apache.kafka.connect.sink.SinkConnector`
-  * `org.apache.kafka.connect.source.SourceConnector`
-  * `org.apache.kafka.connect.storage.Converter`
-  * `org.apache.kafka.connect.storage.HeaderConverter`
-  * `org.apache.kafka.connect.transforms.Transformation`
-  * `org.apache.kafka.connect.transforms.predicates.Predicate`
-  * `org.apache.kafka.common.config.provider.ConfigProvider`
-  * `org.apache.kafka.connect.rest.ConnectRestExtension`
-  * 
`org.apache.kafka.connect.connector.policy.ConnectorClientConfigOverridePolicy`
+the name of the connector
 
+or the value of `consumer.group.id` if present in the worker configuration,

+or the value of `consumer.override.group.id` if present in the connector 
configuration
+</td><td> </td></tr>
+<tr><td>Read</td><td>Topic</td><td>sink topic(s) that the connector will 
consume from</td><td>
 
-For example, if you only have one connector with the fully-qualified name 
`com.example.MySinkConnector`, then only one manifest file must be added to 
resources in `META-INF/services/org.apache.kafka.connect.sink.SinkConnector`, 
and the contents should be similar to the following:
-    
-    
-    # license header or comment
-    com.example.MySinkConnector
+These will be identified by the `topics` or `topics.regex` option of the 
connector
+</td></tr>
+<tr><td>Write</td><td>Topic</td><td>
 
-You should then verify that your manifests are correct by using the 
verification steps with a pre-release artifact. If the verification succeeds, 
you can then release the plugin normally, and operators can upgrade to the 
compatible version.
+`errors.deadletterqueue.topic.name` of the connector
+</td><td>
 
-## Security
+Only necessary if `errors.deadletterqueue.topic.name` is set to a non-empty 
value
+</td></tr>
+</table>
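As with the worker ACLs, these can be granted with `kafka-acls.sh`; a sketch for a hypothetical sink connector named `my-sink` (principal name assumed) consuming from a topic `orders`:

```shell
# Consumer group for the sink connector (connect-${connector} by default).
bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
  --allow-principal User:my-sink-principal \
  --operation Read --group connect-my-sink

# Topic(s) the connector consumes from, per its topics/topics.regex setting.
bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
  --allow-principal User:my-sink-principal \
  --operation Read --topic orders
```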
 
-It's important to understand the security concerns inherent to Connect. First, 
Connect allows running custom plugins. These plugins can run arbitrary code, so 
you must trust them before installing them in your Connect clusters. By 
default, the REST API is unsecured and allows anyone that can access it to 
start and stop connectors. You should only directly expose the REST API to 
trusted users, otherwise it's easy to gain arbitrary code execution on Connect 
workers. By default, connectors  [...]
+Some connectors make use of additional Kafka topics that are not managed by 
the Kafka Connect framework (for example, change data capture connectors may 
use additional topics to store schema history). Refer to the connector 
documentation for details of any additional ACLs that are required. These can 
be added to the principal for the individual connector.
