Bahdan Siamionau created KAFKA-7242:
---------------------------------------

             Summary: Externalized secrets are revealed in task configuration
                 Key: KAFKA-7242
                 URL: https://issues.apache.org/jira/browse/KAFKA-7242
             Project: Kafka
          Issue Type: Bug
            Reporter: Bahdan Siamionau


While trying out the new [externalized 
secrets|https://issues.apache.org/jira/browse/KAFKA-6886] feature, I noticed 
that the task configuration is saved to the config topic with the secrets disclosed. 
It seems the main goal of the feature is not achieved - the secrets are still 
persisted in plain text. It's possible I'm misusing this new config; please correct 
me if I'm wrong.

I'm running Connect in distributed mode and creating a connector with the following 
config:
{code:java}
{
  "name" : "jdbc-sink-test",
  "config" : {
    "connector.class" : "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max" : "1",
    "config.providers" : "file",
    "config.providers.file.class" : 
"org.apache.kafka.common.config.provider.FileConfigProvider",
    "config.providers.file.param.secrets" : "/opt/mysecrets",
    "topics" : "test_topic",
    "connection.url" : "${file:/opt/mysecrets:url}",
    "connection.user" : "${file:/opt/mysecrets:user}",
    "connection.password" : "${file:/opt/mysecrets:password}",
    "insert.mode" : "upsert",
    "pk.mode" : "record_value",
    "pk.field" : "id"
  }
}
{code}
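For completeness, FileConfigProvider treats the path in the placeholder as a Java properties file, so I assume /opt/mysecrets contains entries along these lines (the values are the ones that later show up resolved in the task record below):
{code}
# /opt/mysecrets - assumed contents of the properties file read by FileConfigProvider
url=jdbc:postgresql://actualurl:5432/datawarehouse?stringtype=unspecified
user=datawarehouse
password=actualpassword
{code}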
The connector works fine and the placeholders are substituted with the correct values 
from the file, but the resolved config is then written back into the config topic 
(see the following 3 records in the config topic):
{code:java}
key: connector-jdbc-sink-test
value:
{
  "properties": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "config.providers": "file",
    "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider",
    "config.providers.file.param.secrets": "/opt/mysecrets",
    "topics": "test_topic",
    "connection.url": "${file:/opt/mysecrets:url}",
    "connection.user": "${file:/opt/mysecrets:user}",
    "connection.password": "${file:/opt/mysecrets:password}",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.field": "id",
    "name": "jdbc-sink-test"
  }
}


key: task-jdbc-sink-test-0
value:
{
  "properties": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "config.providers.file.param.secrets": "/opt/mysecrets",
    "connection.password": "actualpassword",
    "tasks.max": "1",
    "topics": "test_topic",
    "config.providers": "file",
    "pk.field": "id",
    "task.class": "io.confluent.connect.jdbc.sink.JdbcSinkTask",
    "connection.user": "datawarehouse",
    "name": "jdbc-sink-test",
    "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider",
    "connection.url": "jdbc:postgresql://actualurl:5432/datawarehouse?stringtype=unspecified",
    "insert.mode": "upsert",
    "pk.mode": "record_value"
  }
}

key: commit-jdbc-sink-test
value:
{
  "tasks": 1
}
{code}
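The resolution step itself behaves as expected - here is a minimal sketch (using the public ConfigProvider API; the file path and keys are taken from my config above) of the lookup that produces the resolved values which then end up, in plain text, in the task record:
{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.FileConfigProvider;

public class SecretsLookupSketch {
    public static void main(String[] args) {
        // Same provider class as configured via config.providers.file.class
        FileConfigProvider provider = new FileConfigProvider();
        provider.configure(Collections.emptyMap());

        // Resolve the keys referenced by the ${file:/opt/mysecrets:<key>} placeholders
        ConfigData data = provider.get("/opt/mysecrets",
                new HashSet<>(Arrays.asList("url", "user", "password")));

        // These resolved values are what gets written back into the
        // task-jdbc-sink-test-0 record shown above
        for (Map.Entry<String, String> entry : data.data().entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}
{code}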
Please advise: have I misunderstood the goal of this feature, have I missed 
something in the configuration, or is it actually a bug? Thank you


