[ https://issues.apache.org/jira/browse/KAFKA-14871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dave Sloan resolved KAFKA-14871.
--------------------------------
    Resolution: Abandoned

After discussing with my colleagues, we have concluded that although the behaviour is incorrect, there is not actually a good reason for defining the secret providers inside the connector configuration. For security reasons it is better to define them in the environment (worker properties).

> Kafka Connect - TTL not respected for Config Provider-provided connector configurations
> ---------------------------------------------------------------------------------------
>
>                 Key: KAFKA-14871
>                 URL: https://issues.apache.org/jira/browse/KAFKA-14871
>             Project: Kafka
>          Issue Type: Bug
>          Components: KafkaConnect
>    Affects Versions: 3.3.2
>            Reporter: Dave Sloan
>            Priority: Major
>
> When defining a configuration provider using environment variables (e.g. via Docker), a reload is scheduled according to the TTL of the returned configuration.
>
> Here is an example:
>
> |Environment Variable|Value|
> |CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_VAULT_ENGINE_VERSION|2|
> |CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_VAULT_TOKEN|9c08104f-98b7-4bce-ab86-4a6c63897ec4|
> |CONNECT_CONFIG_PROVIDERS|vault|
> |CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_DEFAULT_TTL|30000|
> |CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_VAULT_ADDR|http://vault:8200|
> |CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_VAULT_AUTH_METHOD|token|
> |CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_FILE_WRITE|false|
> |CONNECT_CONFIG_PROVIDERS_VAULT_CLASS|io.lenses.connect.secrets.providers.VaultSecretProvider|
>
> {code:java}
> {
>   "name": "testSink",
>   "config": {
>     "topics": "vaultTest",
>     "name": "testSink",
>     "key.converter": "org.apache.kafka.connect.storage.StringConverter",
>     "test.sink.secret.value": "${vault:rotate-test/myVaultSecretPath:myVaultSecretKey}",
>     "value.converter": "org.apache.kafka.connect.storage.StringConverter",
>     "tasks.max": 1
>   }
> }
> {code}
> And this is what we see in the logs:
> {code:java}
> STDOUT: [2023-03-30 07:46:43,204] INFO Scheduling a restart of connector testSink in 4908 ms (org.apache.kafka.connect.runtime.WorkerConfigTransformer)
> {code}
> However, when we try to achieve the same via connector properties, no restart is scheduled:
>
> {code:java}
> {
>   "name": "testSink",
>   "config": {
>     "topics": "vaultTest",
>     "name": "testSink",
>     "key.converter": "org.apache.kafka.connect.storage.StringConverter",
>     "value.converter": "org.apache.kafka.connect.storage.StringConverter",
>     "tasks.max": 1,
>     "connector.class": "io.lenses.connect.secrets.test.TestSinkConnector",
>     "test.sink.secret.value": "${vault:rotate-test/myVaultSecretPath:myVaultSecretKey}",
>     "config.providers": "vault",
>     "config.providers.vault.class": "io.lenses.connect.secrets.providers.VaultSecretProvider",
>     "config.providers.vault.param.vault.engineversion": 2,
>     "config.providers.vault.param.vault.token": "9c08104f-98b7-4bce-ab86-4a6c63897ec4",
>     "config.providers.vault.param.default.ttl": 30000,
>     "config.providers.vault.param.vault.addr": "http://vault:8200",
>     "config.providers.vault.param.vault.auth.method": "token",
>     "config.providers.vault.param.file.write": false
>   }
> }
> {code}
>
> Looking deeper into the code, we can see at the linked line of AbstractConfig
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java#L539]:
>
> {code:java}
> ConfigTransformerResult result = configTransformer.transform(indirectVariables);
> {code}
> The result contains the TTLs; however, these are not used.
>
> Expectation:
> The TTLs should be used to schedule a restart of the connector, so that the behaviour is the same as when the provider is defined via environment (worker) properties.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
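For context, the expected behaviour the report describes can be sketched in plain Java without Kafka dependencies: take the shortest TTL reported for the resolved secrets and schedule a connector restart after that delay, as the worker-property path already does via WorkerConfigTransformer. Everything here (the class name, `minTtlMs`, the TTL values, and the executor-based scheduling) is illustrative, not actual Kafka Connect API.

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TtlRestartSketch {

    // Stand-in for the TTL information carried by ConfigTransformerResult:
    // secret path -> remaining TTL in milliseconds. Returns the smallest
    // TTL, since the connector must restart before the first secret expires.
    static Long minTtlMs(Map<String, Long> ttlsByPath) {
        return ttlsByPath.values().stream().min(Long::compare).orElse(null);
    }

    public static void main(String[] args) throws Exception {
        // TTLs as a config provider might report them (values are illustrative).
        Map<String, Long> ttls = Map.of(
                "rotate-test/myVaultSecretPath", 30000L,
                "rotate-test/otherPath", 45000L);

        Long restartDelayMs = minTtlMs(ttls);
        System.out.println("Scheduling a restart of connector testSink in "
                + restartDelayMs + " ms");

        // The worker path hands this off to the herder; a plain executor
        // stands in here (short delay so the demo finishes quickly).
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.schedule(() -> System.out.println("restarting testSink"),
                10, TimeUnit.MILLISECONDS);
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

The point of the sketch is simply that the TTLs returned by `configTransformer.transform(...)` already contain everything needed to schedule the restart; the connector-property path discards them.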