Hi,

I've not seen replies about log4j2.
The plan was to deprecate the appender (KIP-719) and switch to log4j2
(KIP-653).

While reload4j works well, I'd still be in favor of switching to
log4j2 in Kafka 4.0.

Thanks,
Mickael

On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe <co...@cmccabe.xyz> wrote:
>
> Hi all,
>
> Let's continue this discussion on the "[DISCUSS] KIP-1012: The need for a 
> Kafka 3.8.x release" email thread.
>
> Colin
>
>
> On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> > Hi Divij,
> >
> > Thanks for the feedback. I agree that having a 3.8 release is
> > beneficial but some of the comments in this message are inaccurate and
> > could mislead the community and users.
> >
> > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya <divijvaidy...@gmail.com> 
> > wrote:
> >> 1\ Durability/availability bugs in kraft - Even though kraft has been
> >> around for a while, we keep finding bugs that impact availability and data
> >> durability in it almost with every release [1] [2]. It's a complex feature
> >> and such bugs are expected during the stabilization phase. But we can't
> >> remove the alternative until we see stabilization in kraft i.e. no new
> >> stability/durability bugs for at least 2 releases.
> >
> > I took a look at both of these issues, and neither of them is a bug
> > that affects KRaft's durability and availability.
> >
> >> [1] https://issues.apache.org/jira/browse/KAFKA-15495
> >
> > This issue is not specific to KRaft and has existed in Apache
> > Kafka since the ISR leader election and replication algorithm was
> > added. I acknowledge that this misunderstanding is partially due to
> > the Jira description, which insinuates that the issue only applies
> > to KRaft; that is not true.
> >
> >> [2] https://issues.apache.org/jira/browse/KAFKA-15489
> >
> > First, this issue was not actually first discovered in a recent
> > release. I identified it back in January of 2022:
> > https://issues.apache.org/jira/browse/KAFKA-13621. I decided to lower
> > the priority because it requires a very specific network partition in
> > which the controllers are partitioned from the current leader but the
> > brokers are not.
> >
> > This is not a durability bug, as the KRaft cluster metadata partition
> > leader will not be able to advance the HWM and hence cannot commit
> > records.
> >
> > Regarding availability, KRaft's cluster metadata partition favors
> > consistency and partition tolerance over availability (in CAP terms).
> > This is by design and not a bug in the protocol or implementation.
> >
> >> 2\ Parity with Zk - There are also pending bugs [3] which are in the
> >> category of Zk parity. Removing Zk from Kafka without having full feature
> >> parity with Zk will leave some Kafka users with no upgrade path.
> >> 3\ Test coverage - We also don't have sufficient test coverage for kraft
> >> since quite a few tests are Zk only at this stage.
> >>
> >> Given these concerns, I believe we need to reach 100% Zk parity and allow
> >> new feature stabilisation (such as scram, JBOD) for at least 1 version
> >> (maybe more if we find bugs in that feature) before we remove Zk. I also
> >> agree with the point of view that we can't delay 4.0 indefinitely and we
> >> need a clear cut line.
> >
> > There seems to be some misunderstanding regarding Apache Kafka's
> > versioning scheme. Minor versions (e.g. 3.x) are needed for feature
> > releases, like new RPCs and configurations. They are not needed for
> > bug fixes. Bug fixes can and should be done in patch releases (e.g.
> > 3.7.x).
> >
> > This means that you don't need a 3.8 or 3.9 release to fix a bug in Kafka.
> >
> > Thanks!
> > --
> > -José
