@Ted: The issue is kinda hard to reproduce. It's just something we observe over time.
@Ewen: I agree. Opt-in seems like a good solution to me. To your question: if there is no ConfigDef that defines which fields are Passwords, we can just return the config as is.

There is a PR for this KIP already; comments/discussion are welcome:
https://github.com/apache/kafka/pull/4269

On Tue, Jan 2, 2018 at 8:52 PM, Ewen Cheslack-Postava <e...@confluent.io> wrote:

> Vincent,
>
> Thanks for the KIP. This is definitely an issue we know is a problem for
> some users.
>
> I think the major problem with the KIP as-is is that it makes it
> impossible to get the original value back out of the API. This KIP
> probably ties in significantly with ideas for securing the REST API (SSL)
> and adding ACLs to it. Both are things we know people want, but haven't
> happened yet. However, it also interacts with other approaches to adding
> those features, e.g. layering proxies on top of the existing API (nginx,
> apache, etc.). Just doing a blanket replacement of password values with a
> constant would likely break things for people who secure things via a
> proxy (and may just not allow reads of configs unless the user is
> authorized for the particular connector). These are the types of concerns
> we like to think through in the compatibility section. One option to get
> the masking functionality in without depending on a bunch of other
> security improvements might be to make this configurable, so users that
> need it (and can forgo seeing a valid config via the API) can opt in.
>
> Regarding your individual points:
>
> * I don't think the particular value for the masked content matters much.
> Any constant indicating a password field is good. Your value seems fine
> to me.
> * I don't think ConnectorInfo has enough info on its own to do proper
> masking. In fact, I think you need to parse the config enough to get the
> connector-specific ConfigDef out in order to determine which fields are
> Password fields.
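[Editor's note: the masking step discussed here, in isolation, might look roughly like the sketch below. This is a minimal, self-contained illustration in plain Java, not the actual Connect code; a `Set` of key names stands in for the keys a connector's real ConfigDef would report as type PASSWORD, and the 9-star constant is the value proposed in the KIP.]

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class PasswordMasker {
    // Stand-in for the masked value proposed in the KIP.
    static final String MASK = "*********";

    // Returns a copy of the config with every password-typed key masked.
    // `passwordKeys` stands in for the keys the connector's ConfigDef
    // declares as type PASSWORD.
    static Map<String, String> mask(Map<String, String> config,
                                    Set<String> passwordKeys) {
        Map<String, String> masked = new HashMap<>(config);
        for (String key : passwordKeys) {
            if (masked.containsKey(key)) {
                masked.put(key, MASK);
            }
        }
        return masked;
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("connection.url", "jdbc:postgresql://db:5432/test");
        config.put("connection.password", "hunter2");
        Map<String, String> out = mask(config, Set.of("connection.password"));
        System.out.println(out.get("connection.password")); // *********
        System.out.println(out.get("connection.url"));      // unchanged
    }
}
```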
> I would probably try to push this to be as central as possible, maybe
> adding a method to AbstractHerder that can get configs with a boolean
> indicating whether they need to have sensitive fields removed. That
> method could deal with parsing the config to get the right connector,
> getting the connector config, and then sanitizing any configs that are
> sensitive. We could have this in one location, then have the relevant
> REST APIs just use the right flag to determine whether they get sanitized
> or unsanitized data.
>
> That second point raises another interesting question -- what happens if
> the connector configuration references a connector which the worker
> serving the REST request *does not know about*? In that case, there will
> be no corresponding ConfigDef that defines which fields are Passwords and
> need to be sanitized. Does it return an error? Or just return the config
> as is?
>
> -Ewen
>
> On Thu, Dec 28, 2017 at 3:34 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > For the last point you raised, can you come up with a unit test that
> > shows what you observed?
> >
> > Cheers
> >
> > On Mon, Dec 18, 2017 at 11:14 AM, Vincent Meng <vm...@zefr.com> wrote:
> >
> > > Hi all,
> > >
> > > I've created KIP-242, a proposal to secure credentials in the Kafka
> > > Connect REST endpoint:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-242%3A+Mask+password+in+Kafka+Connect+Rest+API+response
> > >
> > > Here are some things I'd like to discuss:
> > >
> > > - The "masked" value is currently set to "*********" (9 stars). It's
> > > an arbitrary value I picked. Are there any better options?
> > > - The proposed change is in the
> > > *org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource*
> > > class, where, before the response is returned, we go through the
> > > config and mask the passwords. This has been proven to work.
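[Editor's note: the central AbstractHerder-style method discussed earlier in the thread, combined with the opt-in flag and the return-the-config-as-is fallback for an unknown connector, might be pictured roughly as below. All names here are hypothetical and the lookup table stands in for resolving the connector plugin's actual ConfigDef; this is a sketch, not the real worker code.]

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ConfigSanitizer {
    static final String MASK = "*********";

    // Hypothetical lookup from connector class name to the keys its
    // ConfigDef declares with Type.PASSWORD. In a real worker this would
    // be derived from the connector plugin's ConfigDef, not hard-coded.
    static final Map<String, Set<String>> PASSWORD_KEYS_BY_CONNECTOR =
            Map.of("DemoSinkConnector", Set.of("db.password"));

    // Central entry point: callers pass `sanitize` to choose masked or
    // raw output. If the worker has no ConfigDef for the connector class,
    // the config is returned as is (one of the fallbacks discussed in
    // the thread).
    static Map<String, String> connectorConfig(Map<String, String> config,
                                               boolean sanitize) {
        if (!sanitize) return config;
        String connectorClass = config.get("connector.class");
        Set<String> passwordKeys = connectorClass == null
                ? null : PASSWORD_KEYS_BY_CONNECTOR.get(connectorClass);
        if (passwordKeys == null) return config; // unknown connector: as is
        Map<String, String> out = new HashMap<>(config);
        for (String key : passwordKeys) {
            if (out.containsKey(key)) out.put(key, MASK);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "connector.class", "DemoSinkConnector",
                "db.password", "hunter2");
        System.out.println(connectorConfig(config, true).get("db.password"));
        // prints the mask; with sanitize=false it would return "hunter2"
    }
}
```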
> > > However, I think it's cleaner to do the masking in
> > > *org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo*,
> > > where the config() method can return the masked config, so that we
> > > don't have to mask the value in each endpoint (and in new endpoints
> > > if added in the future). I ran into some issues with this: after a
> > > while, I started seeing an incorrect password being used for the
> > > connector. My conjecture is that the value stored in Kafka has been
> > > changed to the mask value. Can someone confirm this might happen
> > > with Kafka Connect? It feels like *ConnectorInfo.config()* is used
> > > somewhere to update the connect config storage topic.
> > >
> > > If there are any comments on the KIP, let me know. Thank you very much.
> > >
> > > -Vincent
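[Editor's note: Vincent's conjecture is consistent with a classic aliasing bug: if the masking mutates the same Map object that is later serialized back to the config storage topic, the mask overwrites the real password. A minimal illustration in plain Java (not the actual Connect code path; key name and mask value are just the examples from the thread):]

```java
import java.util.HashMap;
import java.util.Map;

public class AliasingDemo {
    static final String MASK = "*********";

    // Buggy variant: masks the caller's map in place. Anything that later
    // serializes that same map (e.g. back to the config storage topic)
    // now sees the mask instead of the real password.
    static Map<String, String> maskInPlace(Map<String, String> config) {
        config.put("connection.password", MASK);
        return config;
    }

    // Safe variant: masks a defensive copy, leaving the original intact.
    static Map<String, String> maskCopy(Map<String, String> config) {
        Map<String, String> copy = new HashMap<>(config);
        copy.put("connection.password", MASK);
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> stored = new HashMap<>();
        stored.put("connection.password", "hunter2");

        maskInPlace(stored);
        System.out.println(stored.get("connection.password")); // ********* -- real value lost

        stored.put("connection.password", "hunter2");
        maskCopy(stored);
        System.out.println(stored.get("connection.password")); // hunter2 -- original untouched
    }
}
```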