Hi Lianet,
Yes, you are correct. The wording is a bit clumsy. I'll fix it.
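To make the scenario concrete, here is a small illustrative sketch (Python, not broker code; all names and the broker bounds are hypothetical) of how the recorded delivery count can overshoot a freshly lowered limit by more than one when a record is in flight:

```python
# Illustrative sketch only: hypothetical names, not the broker implementation.

BROKER_MIN, BROKER_MAX = 1, 10  # hypothetical broker-level bounds

def validate_group_limit(new_limit: int) -> int:
    """A group's delivery-attempt limit may be set to any value within the
    broker bounds, regardless of the group's previous value."""
    if not BROKER_MIN <= new_limit <= BROKER_MAX:
        raise ValueError(f"limit {new_limit} outside [{BROKER_MIN}, {BROKER_MAX}]")
    return new_limit

def final_delivery_count(in_flight_attempts: int) -> int:
    """A record in flight when the limit is lowered still gets its current
    attempt counted, so the recorded count can overshoot the new limit."""
    return in_flight_attempts + 1

new_limit = validate_group_limit(1)            # reduced from the default 5
overshoot = final_delivery_count(4) - new_limit
print(overshoot)  # 4: the new limit is exceeded by more than one
```

So yes, the overshoot depends on how far the limit was lowered, not just on a single extra attempt.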

Thanks,
Andrew

On 2026/02/02 14:07:29 Lianet Magrans wrote:
> Hi Andrew, one more question after a final pass over the KIP:
> 
> About the delivery attempts configured per group, the KIP states that "it's
> possible that the limit has been exceeded by one" when describing the case
> of delivery.attempts reduced for a group when records are in-flight. But I
> guess that even if we may allow just one single final delivery attempt in
> this case, the limit could end up being exceeded by more than one, right?
> (depending on how much the limit is reduced from its current value, e.g.
> reduced from the default 5 to 1). The main point I want to confirm is that
> we'll allow changing the delivery count per group to **any** value within
> the broker limits (regardless of the delivery count value before the
> change). Is my understanding correct?
> 
> Thanks!
> 
> On Fri, Jan 30, 2026 at 7:12 AM Lianet Magrans <[email protected]> wrote:
> 
> > Thanks for the answer Andrew! Agreed, logging the config inconsistency of
> > individual groups should be enough since in the end the intention is to
> > apply the new config boundaries generally (broker level).
> >
> > Thanks!
> > Lianet
> >
> > On Tue, Jan 27, 2026 at 4:45 AM Andrew Schofield <[email protected]>
> > wrote:
> >
> >> Hi Lianet,
> >> Thanks for reviewing the KIP.
> >>
> >> I want to avoid having to cascade changes to group configs when the
> >> broker config is changed. Instead, when the group config is applied, the
> >> value is capped using the broker configs. We can log situations in which the
> >> group config is outside the bounds specified by the broker configs to
> >> help the administrator, but I feel that this is unlikely to be a common
> >> situation in practice.
> >>
> >> Thanks,
> >> Andrew
> >>
> >> On 2026/01/27 03:33:39 Lianet Magrans wrote:
> >> > Thanks for the KIP Andrew! Nice alignment on configs
> >> validation/enforcement.
> >> >
> >> > Just one comment regarding the behaviour on broker configs updates,
> >> > this: "*When
> >> > a broker configuration is updated, the existing group configurations are
> >> > not validated. This ensures that the administrator is able to tighten or
> >> > relax limits easily*."
> >> > I agree with the approach, seems sensible to allow broker-level configs
> >> to
> >> > go in without getting blocked on the lower-priority group-level configs.
> >> > But this allows an inconsistent state (group configs out of bounds) that
> >> the
> >> > administrator will have to eventually fix (or just live with it,
> >> confusing
> >> > and with no added value). Would it make sense to align the group-level
> >> > configs with the new boundaries if they fall outside them when a
> >> broker-level
> >> > config is updated? Not sure if this could have other implications, but
> >> > sounds consistent.
> >> >
> >> > Thanks!
> >> > Lianet
> >> >
> >> > On Mon, Jan 19, 2026 at 6:28 AM Andrew Schofield <[email protected]
> >> >
> >> > wrote:
> >> >
> >> > > I have updated the KIP and intend to open voting tomorrow if there
> >> are no
> >> > > additional comments.
> >> > >
> >> > > Thanks,
> >> > > Andrew
> >> > >
> >> > > On 2026/01/13 10:09:44 Andrew Schofield wrote:
> >> > > > Hi Chia-Ping,
> >> > > > That seems like a very sensible approach. I will update the KIP
> >> > > accordingly.
> >> > > >
> >> > > > Thanks,
> >> > > > Andrew
> >> > > >
> >> > > > On 2026/01/09 19:04:47 Chia-Ping Tsai wrote:
> >> > > > > hi Andrew,
> >> > > > >
> >> > > > > I'd like to propose a hybrid approach for handling these bounds,
> >> > > treating the broker-level configs as a "safety cap" rather than just a
> >> > > static validator.
> >> > > > >
> >> > > > > here is the logic:
> >> > > > >
> >> > > > > 1. on group config update (strict validation): we validate it
> >> against
> >> > > the current broker-level cap. If it exceeds the cap, we reject the
> >> request.
> >> > > This prevents new invalid configs from entering the system.
> >> > > > >
> >> > > > > 2. on broker config update (non-blocking): we don't validate
> >> against
> >> > > existing groups. This ensures that admins can tighten limits during an
> >> > > emergency.
> >> > > > >
> >> > > > > 3. at runtime (effective value enforcement): the broker uses the
> >> logic
> >> > > `min(groupConfig, brokerCap)`. Even if a legacy group config is higher
> >> than
> >> > > the new broker cap (due to step 2), the runtime behaviour will be
> >> clamped
> >> > > to the broker cap.
> >> > > > >
> >> > > > > WDYT?
> >> > > > >
> >> > > > > Best,
> >> > > > > Chia-Ping
> >> > > > >
> >> > > > > On 2026/01/07 17:24:21 Andrew Schofield wrote:
> >> > > > > > Hi Chia-Ping,
> >> > > > > > Thanks for your comments.
> >> > > > > >
> >> > > > > > chia_00: The group-level configs are all dynamic. This means
> >> that
> >> > > when the limits
> >> > > > > > are reduced, they may already be exceeded by active usage. Over
> >> > > time, as records
> >> > > > > > are delivered and locks are released, the system will settle
> >> within
> >> > > the new limits.
> >> > > > > >
> >> > > > > > chia_01: This is an interesting question and there is some work
> >> off
> >> > > the back of it.
> >> > > > > >
> >> > > > > > For the interval and timeout configs, the broker will fail to
> >> start
> >> > > when the group-level
> >> > > > > > config lies outside the min/max specified by the static broker
> >> > > configs. However, the
> >> > > > > > logging when the broker fails to start is unhelpful because it
> >> omits
> >> > > the group ID of
> >> > > > > > the offending group. This behaviour is common for consumer
> >> groups
> >> > > and share groups.
> >> > > > > > I haven't tried streams groups, but I expect they're the same.
> >> This
> >> > > should be improved
> >> > > > > > in terms of logging at the very least so it's clear what needs
> >> to be
> >> > > done to get the broker
> >> > > > > > started.
> >> > > > > >
> >> > > > > > For share.record.lock.duration.ms, no such validation occurs
> >> as the
> >> > > broker starts. This
> >> > > > > > is an omission. We should have the same behaviour for all of the
> >> > > min/max bounds
> >> > > > > > I think. My view is that failing to start the broker is safest for
> >> now.
> >> > > > > >
> >> > > > > > For the new configs in the KIP, the broker should fail to start
> >> if
> >> > > the group-level config
> >> > > > > > is outside the bounds of the min/max static broker configs.
> >> > > > > >
> >> > > > > > wdyt? I'll make a KIP update when I think we have consensus.
> >> > > > > >
> >> > > > > > Thanks,
> >> > > > > > Andrew
> >> > > > > >
> >> > > > > > On 2026/01/05 13:56:16 Chia-Ping Tsai wrote:
> >> > > > > > > hi Andrew
> >> > > > > > >
> >> > > > > > > Thanks for the KIP. I have a few questions regarding the
> >> > > configuration behaviour:
> >> > > > > > >
> >> > > > > > > chia_00: Dynamic Update Behavior
> >> > > > > > > Are these new group-level configurations dynamic?
> >> Specifically, if
> >> > > we alter share.delivery.count.limit or
> >> share.partition.max.record.locks at
> >> > > runtime, will the changes take effect immediately for active share
> >> groups?
> >> > > > > > >
> >> > > > > > > chia_01: Configuration Validation on Broker Restart
> >> > > > > > > How does the broker handle existing group configurations that
> >> fall
> >> > > out of bounds after a broker restart? For example, suppose a group has
> >> share.partition.max.record.locks set to 100 (which was valid at the
> >> time).
> >> > > If the broker is later restarted with a stricter limit of
> >> > > group.share.max.partition.max.record.locks = 50, how will the loaded
> >> group
> >> > > handle this conflict?
> >> > > > > > >
> >> > > > > > > Best,
> >> > > > > > > Chia-Ping
> >> > > > > > >
> >> > > > > > > On 2025/11/24 21:15:48 Andrew Schofield wrote:
> >> > > > > > > > Hi,
> >> > > > > > > > I’d like to start the discussion on a small KIP which adds
> >> some
> >> > > configurations for share groups which were previously only available
> >> as
> >> > > broker configurations.
> >> > > > > > > >
> >> > > > > > > >
> >> > >
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1240%3A+Additional+group+configurations+for+share+groups
> >> > > > > > > >
> >> > > > > > > > Thanks,
> >> > > > > > > > Andrew
> >> > > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> 
