It may be worth starting a new thread with regards to the logging situation.

Ismael

On Wed, Jan 10, 2024 at 12:00 PM Mickael Maison <mickael.mai...@gmail.com>
wrote:

> Hi Colin,
>
> Regarding KIP-719, I think we need it to land in 3.8 if we want to remove
> the appender in 4.0. I also just noticed that log4j's KafkaAppender is
> deprecated in log4j2 and will not be part of log4j3.
>
> For KIP-653, as I said, my point was to gauge interest in getting it
> done. While it may not be a "must-do" to keep Kafka working, we can
> only do this type of change in major releases. So if we don't do it
> now, it won't happen for a few more years.
>
> Regarding log4j3, even though the website states it requires Java 11
> [1], it seems the latest beta release requires Java 17 [0], so it's not
> something we'll be able to adopt right now.
>
> 0: https://github.com/apache/logging-log4j2/issues/1951
> 1: https://logging.apache.org/log4j/3.x/#requirements
>
> Thanks,
> Mickael
>
> On Fri, Jan 5, 2024 at 12:18 AM Colin McCabe <cmcc...@apache.org> wrote:
> >
> > Hi Mickael,
> >
> > Thanks for bringing this up.
> >
> > The main motivation given in KIP-653 for moving to log4j 2.x is that
> > log4j 1.x is no longer supported. But since we moved to reload4j,
> > which is still supported, that isn't a concern any longer.
> >
> > To be clear, I'm not saying we shouldn't upgrade, but I'm just trying
> > to explain why I think there hasn't been as much interest in this
> > lately. I see this as a "cool feature" rather than as a must-do.
> >
> > If we still want to do this for 4.0, it would be good to understand
> > whether there's any work that has to land in 3.8. Do we have to get
> > KIP-719 into 3.8 so that we have a reasonable deprecation period?
> >
> > Also, if we do upgrade, I agree with Ismael that we should consider
> > going to log4j3, assuming they have a non-beta release by the time
> > 4.0 is ready.
> >
> > best,
> > Colin
> >
> > On Thu, Jan 4, 2024, at 03:08, Mickael Maison wrote:
> > > Hi Ismael,
> > >
> > > Yes, both KIPs have been voted on.
> > > My point, which admittedly wasn't clear, was to gauge the interest in
> > > getting them done and, if so, to identify people to drive these tasks.
> > >
> > > KIP-719 shouldn't require too much more work to complete. There's a PR
> > > [0] which is relatively straightforward. I pinged Lee Dongjin.
> > > KIP-653 is more involved and depends on KIP-719. There's also a PR [1]
> > > which is pretty large.
> > >
> > > Yes, log4j3 was on my mind as it's expected to be compatible with
> > > log4j2 and to bring significant improvements.
> > >
> > > 0: https://github.com/apache/kafka/pull/10244
> > > 1: https://github.com/apache/kafka/pull/7898
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Thu, Jan 4, 2024 at 11:34 AM Ismael Juma <m...@ismaeljuma.com> wrote:
> > >>
> > >> Hi Mickael,
> > >>
> > >> Given that KIP-653 was accepted, the current position is that we
> > >> would move to log4j2 - provided that someone is available to drive
> > >> that. It's also worth noting that log4j3 is now a thing (but not
> > >> yet final):
> > >>
> > >> https://logging.apache.org/log4j/3.x/
> > >>
> > >> Ismael
> > >>
> > >> On Thu, Jan 4, 2024 at 2:15 AM Mickael Maison <mickael.mai...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > I've not seen replies about log4j2. The plan was to deprecate the
> > >> > appender (KIP-719) and switch to log4j2 (KIP-653).
> > >> >
> > >> > While reload4j works well, I'd still be in favor of switching to
> > >> > log4j2 in Kafka 4.0.
> > >> >
> > >> > Thanks,
> > >> > Mickael
> > >> >
> > >> > On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe <co...@cmccabe.xyz> wrote:
> > >> > >
> > >> > > Hi all,
> > >> > >
> > >> > > Let's continue this discussion on the "[DISCUSS] KIP-1012: The
> > >> > > need for a Kafka 3.8.x release" email thread.
> > >> > >
> > >> > > Colin
> > >> > >
> > >> > >
> > >> > > On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> > >> > > > Hi Divij,
> > >> > > >
> > >> > > > Thanks for the feedback. I agree that having a 3.8 release is
> > >> > > > beneficial, but some of the comments in this message are
> > >> > > > inaccurate and could mislead the community and users.
> > >> > > >
> > >> > > > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya <divijvaidy...@gmail.com>
> > >> > > > wrote:
> > >> > > >> 1\ Durability/availability bugs in KRaft - Even though KRaft
> > >> > > >> has been around for a while, we keep finding bugs that impact
> > >> > > >> availability and data durability in it with almost every
> > >> > > >> release [1] [2]. It's a complex feature and such bugs are
> > >> > > >> expected during the stabilization phase. But we can't remove
> > >> > > >> the alternative until we see stabilization in KRaft, i.e. no
> > >> > > >> new stability/durability bugs for at least 2 releases.
> > >> > > >
> > >> > > > I took a look at both of these issues and neither of them is a
> > >> > > > bug that affects KRaft's durability and availability.
> > >> > > >
> > >> > > >> [1] https://issues.apache.org/jira/browse/KAFKA-15495
> > >> > > >
> > >> > > > This issue is not specific to KRaft; it has been present in
> > >> > > > Apache Kafka since the ISR leader election and replication
> > >> > > > algorithm was added. I acknowledge that this misunderstanding
> > >> > > > is partially due to the Jira description, which insinuates that
> > >> > > > this only applies to KRaft, which is not true.
> > >> > > >
> > >> > > >> [2] https://issues.apache.org/jira/browse/KAFKA-15489
> > >> > > >
> > >> > > > First, technically this issue was not first discovered in some
> > >> > > > recent release. I identified it back in January of 2022:
> > >> > > > https://issues.apache.org/jira/browse/KAFKA-13621. I decided to
> > >> > > > lower the priority as it requires a very specific network
> > >> > > > partition where the controllers are partitioned from the
> > >> > > > current leader but the brokers are not.
> > >> > > >
> > >> > > > This is not a durability bug, as the KRaft cluster metadata
> > >> > > > partition leader will not be able to advance the HWM and hence
> > >> > > > commit records.
> > >> > > >
> > >> > > > Regarding availability, the KRaft cluster metadata partition
> > >> > > > favors consistency and partition tolerance over availability,
> > >> > > > in CAP terms. This is by design and not a bug in the protocol
> > >> > > > or implementation.
> > >> > > >
> > >> > > >> 2\ Parity with Zk - There are also pending bugs [3] which are
> > >> > > >> in the category of Zk parity. Removing Zk from Kafka without
> > >> > > >> having full feature parity with Zk will leave some Kafka users
> > >> > > >> with no upgrade path.
> > >> > > >> 3\ Test coverage - We also don't have sufficient test coverage
> > >> > > >> for KRaft since quite a few tests are Zk-only at this stage.
> > >> > > >>
> > >> > > >> Given these concerns, I believe we need to reach 100% Zk
> > >> > > >> parity and allow new feature stabilisation (such as SCRAM,
> > >> > > >> JBOD) for at least 1 version (maybe more if we find bugs in
> > >> > > >> that feature) before we remove Zk. I also agree with the point
> > >> > > >> of view that we can't delay 4.0 indefinitely and we need a
> > >> > > >> clear cut-off line.
> > >> > > >
> > >> > > > There seems to be some misunderstanding regarding the Apache
> > >> > > > Kafka versioning scheme. Minor versions (e.g. 3.x) are needed
> > >> > > > for feature releases, like new RPCs and configurations. They
> > >> > > > are not needed for bug fixes, which can and should be done in
> > >> > > > patch releases (e.g. 3.7.x).
> > >> > > >
> > >> > > > This means that you don't need a 3.8 or 3.9 release to fix a
> > >> > > > bug in Kafka.
> > >> > > >
> > >> > > > Thanks!
> > >> > > > --
> > >> > > > -José
> > >> >
>
