1. Essentially, I think removing the option to configure properties via file 
may be a big change for everyone. Having said that, your points are very valid. 
I guess we can discuss this a bit more during the KIP hangout. 

4. Yes, we will need to make some changes to update the MetricConfig for any 
metric. I left it out because I felt it wasn't strictly related to the KIP. 

Thanks for the nice summary of the implementation breakdown. Basically, the KIP 
should provide a uniform mechanism to change any type of config dynamically, but 
the work to actually convert individual configs can be out of scope.

Thanks,
Aditya

________________________________________
From: Jay Kreps [jay.kr...@gmail.com]
Sent: Monday, May 04, 2015 2:00 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-21 Configuration Management

Hey Aditya,

1. I would argue for either staying with what we have or else moving to a
better solution, but not doing both. A solution that uses both is going to
be quite complex to figure out what is configured and where it comes from.
is needed. If you think this is needed, let's try to construct the argument for
why it is needed. Like in the workflow you described, think how confusing that will
be--the first time the broker starts it uses the file, but then after that
if you change the file nothing happens because it has copied the file into
ZK. Instead let's do the research to figure out why people would object to
a pure-zk solution and then see if we can't address all those concerns.

4. I think it is fine to implement this in a second phase, but I think you
will need it even to be able to update the MetricConfigs to execute the
quota change, right?
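
For concreteness, a rough sketch of the kind of MetricConfig update I mean when
a client's quota override changes -- the hook for actually swapping the config
onto the client's throttling sensor doesn't exist today, so treat the names as
made up:

  import org.apache.kafka.common.metrics.{MetricConfig, Quota}

  object QuotaUpdateSketch {
    // Build a MetricConfig carrying the new quota bound; some config-manager
    // callback would then apply this to the client's throttling sensor.
    def updatedQuotaConfig(bytesPerSec: Double): MetricConfig =
      new MetricConfig().quota(Quota.upperBound(bytesPerSec))
  }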

Not sure if I follow your question on implementation. I think what you are
saying might be to do something like this as a first pass:
a. Generalize the TopicConfigManager to handle any type of config override
as described in the doc (broker, topic, client, etc)
b. Implement the concept of mutable configs in ConfigDef
c. Add the client-level overrides for quotas and make sure the topic
overrides all still work
d. Not actually do the work to make most of the broker configs mutable
since that will involve passing around the KafkaConfiguration object much
more broadly in places where we have static wiring now. This would be done
as-needed as we made more configs dynamic.
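
For (b), the rough shape I have in mind -- just a sketch, since ConfigDef has
no notion of a dynamic/mutable flag today:

  import org.apache.kafka.common.config.ConfigDef
  import org.apache.kafka.common.config.ConfigDef.{Importance, Type}

  // Sketch: track which keys may change at runtime alongside the existing
  // ConfigDef; a real version might add an overload of define() instead.
  class DynamicConfigDef {
    val configDef = new ConfigDef()
    private var dynamicKeys = Set.empty[String]

    def define(name: String, tpe: Type, default: AnyRef, importance: Importance,
               doc: String, dynamic: Boolean): this.type = {
      configDef.define(name, tpe, default, importance, doc)
      if (dynamic) dynamicKeys += name
      this
    }

    def isDynamic(name: String): Boolean = dynamicKeys.contains(name)
  }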

Is that right?

Personally I think that makes sense. The only concern is just that we get
to a good stopping point so that people aren't left in some half-way state
that we end up redoing in the next release. So I think getting to a final
state with the configuration "infrastructure" is important, but actually
making all the configs mutable can be done gradually.

-Jay

On Mon, May 4, 2015 at 1:31 PM, Aditya Auradkar <
aaurad...@linkedin.com.invalid> wrote:

> Hey Jay,
>
> Thanks for the feedback.
>
> 1. We can certainly discuss what it means to remove the file configuration
> as a thought exercise. However, is this something we want to do for real?
> IMO, we can remove file configuration by having all configs stored in
> zookeeper. The flow can be:
> - Broker starts and reads all the configs from ZK (the overrides).
> - Apply them on top of the defaults that are hardcoded within the broker.
> This should simulate the current file-based config behavior.
> - Potentially, we can write back all the merged configs to zookeeper
> (defaults + overrides). This means that the entire config of that broker is
> in ZK.
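>
> A rough sketch of that startup flow -- readBrokerOverridesFromZk is a made-up
> helper for whatever reads the broker's override znode:
>
>   import java.util.Properties
>
>   object BootstrapSketch {
>     // Merge ZK overrides on top of the hardcoded defaults at startup; the
>     // merged result could optionally be written back to ZK afterwards.
>     def bootstrapConfig(defaults: Properties,
>                         readBrokerOverridesFromZk: () => Properties): Properties = {
>       val merged = new Properties()
>       merged.putAll(defaults)
>       merged.putAll(readBrokerOverridesFromZk())
>       merged
>     }
>   }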
>
> Thoughts?
>
> 2. Good point. All overridden configs (topic, client level) will have a
> corresponding broker config that serves as a default. It should be
> sufficient to change that broker config dynamically and that effectively
> means that the default has been changed. The overrides on a per
> topic/client basis still take precedence. So yeah, I don't think we need to
> model defaults explicitly. Using an example to be sure we are on the same
> page, let's say we wanted to increase the log retention time for all topics
> without having to create a separate override for each topic, we could
> simply change the "log.retention.time" under "/brokers/<broker_id>" to the
> desired value and that should change the default log retention for everyone
> (apart from the explicitly overridden ones on a per-topic basis).
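>
> As a sketch (names made up), the lookup for any key would then be:
>
>   // Per-topic override wins over the per-broker override, which wins over
>   // the hardcoded default; the maps would be populated from ZK watches.
>   object PrecedenceSketch {
>     def effectiveValue(key: String,
>                        topicOverrides: Map[String, String],
>                        brokerOverrides: Map[String, String],
>                        defaults: Map[String, String]): Option[String] =
>       topicOverrides.get(key)
>         .orElse(brokerOverrides.get(key))
>         .orElse(defaults.get(key))
>   }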
>
> 3. I thought it was cleaner to simply separate the producer and consumer
> configs but I guess if they present the same clientId, they are essentially
> the "same" client. I'll follow your suggestion.
>
> 4. Interesting that you mention this. I actually thought about having
> callbacks but I left it out of the initial proposal since I wanted to keep
> it relatively simple. The only configs we can change by checking references
> are the ones we check frequently while processing requests (or something
> periodic). I shall incorporate this in the KIP.
>
> 5. What you are proposing sounds good. Initially, I was planning to push
> down everything to KafkaConfig by not having immutable vals within. Having
> a wrapper (KafkaConfiguration) like you suggest is probably cleaner.
>
> One implementation detail: there don't appear to be any concerns wrt the
> client-based config section (and the topic config already exists). Are
> there any concerns if we keep the implementation of the per-client config
> piece (and the generalization of the code in TopicConfigManager) separate
> from the broker config section? Client configs are an immediate requirement
> to operationalize quotas (and perhaps can also be used to manage
> authorization for security). The broker-side changes to mark configs
> dynamic, implement callbacks etc. can be implemented as a follow-up task,
> since it will take longer to identify which configs can be made dynamic and
> to actually do the work to make them so. I think that once we have
> reasonable agreement on the overall picture, we can implement these things
> piece by piece.
>
> Thanks,
> Aditya
>
> ________________________________________
> From: Jay Kreps [jay.kr...@gmail.com]
> Sent: Friday, May 01, 2015 12:53 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-21 Configuration Management
>
> Hey Aditya,
>
> This is great! A couple of comments:
>
> 1. Leaving the file config in place is definitely the least disturbance.
> But let's really think about getting rid of the files and just have one
> config mechanism. There is always a tendency to make everything pluggable
> which so often just leads to two mediocre solutions. Can we do the exercise
> of trying to consider fully getting rid of file config and seeing what goes
> wrong?
>
> 2. Do we need to model defaults? The current approach is that if you have a
> global config x it is overridden for a topic xyz by /topics/xyz/x, and I
> think this could be extended to /brokers/0/x. I think this is simpler. We
> need to specify the precedence for these overrides, e.g. if you override at
> the broker and topic level I think the topic level takes precedence.
>
> 3. I recommend we have the producer and consumer config just be an override
> under client.id. The override is by client id and we can have separate
> properties for controlling quotas for producers and consumers.
>
> 4. Some configs can be changed just by updating the reference, others may
> require some action. An example of this is if you want to disable log
> compaction (assuming we wanted to make that dynamic) we need to call
> shutdown() on the cleaner. I think it may be required to register a
> listener callback that gets called when the config changes.
>
> 5. For handling the reference can you explain your plan a bit? Currently we
> have an immutable KafkaConfig object with a bunch of vals. That, or
> individual values in there, get injected all over the code base. I was
> thinking something like this:
> a. We retain the KafkaConfig object as an immutable object just as today.
> b. It is no longer legit to grab values out of that config if they are
> changeable.
> c. Instead of making KafkaConfig itself mutable we make KafkaConfiguration
> which has a single volatile reference to the current KafkaConfig.
> KafkaConfiguration is what gets passed into various components. So to
> access a config you do something like config.instance.myValue. When the
> config changes the config manager updates this reference.
> d. The KafkaConfiguration is the thing that allows doing the
> configuration.onChange("my.config", callback)
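>
> Roughly, as a sketch (names and details made up, and ignoring thread safety
> of listener registration):
>
>   // KafkaConfig stays immutable; KafkaConfiguration holds a volatile
>   // reference to the current instance plus per-key change callbacks that
>   // the config manager fires when ZK tells us something changed.
>   class KafkaConfiguration(initial: KafkaConfig) {
>     @volatile var instance: KafkaConfig = initial
>     private var listeners = Map.empty[String, List[KafkaConfig => Unit]]
>
>     def onChange(key: String, callback: KafkaConfig => Unit): Unit =
>       listeners += key -> (callback :: listeners.getOrElse(key, Nil))
>
>     def update(newConfig: KafkaConfig, changedKeys: Set[String]): Unit = {
>       instance = newConfig
>       changedKeys.foreach(k => listeners.getOrElse(k, Nil).foreach(cb => cb(newConfig)))
>     }
>   }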
>
> -Jay
>
> On Tue, Apr 28, 2015 at 3:57 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
> > Hey everyone,
> >
> > Wrote up a KIP to update topic, client and broker configs dynamically via
> > Zookeeper.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration
> >
> > Please read and provide feedback.
> >
> > Thanks,
> > Aditya
> >
> > PS: I've intentionally kept this discussion separate from KIP-5 since I'm
> > not sure if that is actively being worked on and I wanted to start with a
> > clean slate.
> >
>
