Re: [ANNOUNCE] New Kafka PMC Member: Tom Bentley

2021-11-19 Thread Dongjin Lee
Congratulations, Tom!

Thanks,
Dongjin

On Sat, Nov 20, 2021 at 9:50 AM Matthias J. Sax  wrote:

> Congrats!
>
> On 11/19/21 2:29 AM, Tom Bentley wrote:
> > Thanks folks!
> >
> > On Fri, Nov 19, 2021 at 5:16 AM Satish Duggana  >
> > wrote:
> >
> >> Congratulations Tom!
> >>
> >> On Fri, 19 Nov 2021 at 05:53, John Roesler  wrote:
> >>
> >>> Congratulations, Tom!
> >>>
> >>> On Thu, Nov 18, 2021, at 17:53, Konstantine Karantasis wrote:
>  Congratulations Tom!
> 
>  Konstantine
> 
> 
>  On Thu, Nov 18, 2021 at 2:44 PM Luke Chen  wrote:
> 
> > Congrats, Tom!
> >
> > Guozhang Wang  於 2021年11月19日 週五 上午1:13 寫道:
> >
> >> Congrats Tom!
> >>
> >> Guozhang
> >>
> >> On Thu, Nov 18, 2021 at 7:49 AM Jun Rao 
> > wrote:
> >>
> >>> Hi, Everyone,
> >>>
> >>> Tom Bentley has been a Kafka committer since Mar. 15,  2021. He
> >> has
> > been
> >>> very instrumental to the community since becoming a committer.
> >> It's
> >>> my
> >>> pleasure to announce that Tom is now a member of Kafka PMC.
> >>>
> >>> Congratulations Tom!
> >>>
> >>> Jun
> >>> on behalf of Apache Kafka PMC
> >>>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
> >>>
> >>
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin


[jira] [Created] (KAFKA-13468) Consumers may hang because IOException in Log# does not trigger KafkaStorageException

2021-11-19 Thread Haoze Wu (Jira)
Haoze Wu created KAFKA-13468:


 Summary: Consumers may hang because IOException in Log# does 
not trigger KafkaStorageException
 Key: KAFKA-13468
 URL: https://issues.apache.org/jira/browse/KAFKA-13468
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 2.8.0
Reporter: Haoze Wu


When the Kafka Log class (`core/src/main/scala/kafka/log/Log.scala`) is 
initialized, it may encounter an IOException in the `locally` block, e.g., when 
the log directory cannot be created due to a permission issue, or when an 
IOException is thrown in `initializeLeaderEpochCache`, `initializePartitionMetadata`, etc.

 
{code:java}
class Log(...) {
  // ...
  locally {
    // create the log directory if it doesn't exist
    Files.createDirectories(dir.toPath)

    initializeLeaderEpochCache()
    initializePartitionMetadata()

    val nextOffset = loadSegments()
    // ...
  }
  // ...
}{code}
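For illustration of what the reporter expects, here is a hedged sketch in Java (the actual broker code is Scala, and this assumes KafkaStorageException's (String, Throwable) constructor): an IOException thrown during this initialization would be wrapped in a KafkaStorageException so the failure is surfaced to clients rather than only logged by the request handler.
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.kafka.common.errors.KafkaStorageException;

public class LogInitSketch {
    // Hypothetical helper: wrap IO failures from log initialization so they
    // propagate as KafkaStorageException instead of being swallowed.
    static void initializeLogDir(Path dir) {
        try {
            Files.createDirectories(dir);
            // ... initializeLeaderEpochCache(), initializePartitionMetadata(), loadSegments() ...
        } catch (IOException e) {
            throw new KafkaStorageException("Failed to initialize log dir " + dir, e);
        }
    }
}{code}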
We found that the broker encountering the IOException prints a KafkaApi error 
log like the following and then proceeds. 

 
{code:java}
[2021-11-17 22:41:30,057] ERROR [KafkaApi-1] Error when handling request: 
clientId=1, correlationId=1, api=LEADER_AND_ISR, version=5, 
body=LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, 
brokerEpoch=4294967362, type=0, ungroupedPartitionStates=[], 
topicStates=[LeaderAndIsrTopicState(topicName='gray-2-0', 
topicId=573bAVHfRQeXApzAKevNIg, 
partitionStates=[LeaderAndIsrPartitionState(topicName='gray-2-0', 
partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 3], 
zkVersion=0, replicas=[1, 3], addingReplicas=[], removingReplicas=[], 
isNew=true)]), LeaderAndIsrTopicState(topicName='gray-1-0', 
topicId=12dW2FxLTiyKmGi41HhdZQ, 
partitionStates=[LeaderAndIsrPartitionState(topicName='gray-1-0', 
partitionIndex=1, controllerEpoch=1, leader=3, leaderEpoch=0, isr=[3, 1], 
zkVersion=0, replicas=[3, 1], addingReplicas=[], removingReplicas=[], 
isNew=true)]), LeaderAndIsrTopicState(topicName='gray-3-0', 
topicId=_yvmANyZSoK_PTV0e-nqCA, 
partitionStates=[LeaderAndIsrPartitionState(topicName='gray-3-0', 
partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 3], 
zkVersion=0, replicas=[1, 3], addingReplicas=[], removingReplicas=[], 
isNew=true)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, 
hostName='localhost', port=9791), LeaderAndIsrLiveLeader(brokerId=3, 
hostName='localhost', port=9793)]) (kafka.server.RequestHandlerHelper) {code}
However, the consumers reading from the affected topics (“gray-2-0”, 
“gray-1-0”, “gray-3-0”) are unable to make progress. These consumers log no 
errors related to this issue; they hang for more than 3 minutes.

 

The IOException sometimes affects multiple partitions of the __consumer_offsets topic:

 
{code:java}
[2021-11-18 10:57:41,289] ERROR [KafkaApi-1] Error when handling request: 
clientId=1, correlationId=11, api=LEADER_AND_ISR, version=5, 
body=LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, 
brokerEpoch=4294967355, type=0, ungroupedPartitionStates=[], 
topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', 
topicId=_MiMTCViS76osIyDdxekIg, 
partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true), ...
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true)])], 
liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', 
port=9791)]) (kafka.server.RequestHandlerHelper) {code}
*Analysis*

 

The key stacktrace is as follows:
{code:java}
"java.lang.Thread,run,748",
"kafka.server.KafkaRequestHandler,run,74",
"kafka.server.KafkaApis,handle,236",
"kafka.server.KafkaApis,handleLeaderAndIsrRequest,258",
"kafka.server.ReplicaManager,becomeLeaderOrFollower,1411",
"kafka.server.ReplicaManager,makeLeaders,1566",
"scala.collection.mutable.HashMap,foreachEntry,499",
"scala.collection.mutable.HashMap$Node,foreachEntry,633",
"kafka.utils.Implicits$MapExtensionMethods$,$anonfun$forKeyValue$1,62",
"kafka.server.ReplicaManager,$anonfun$makeLeaders$5,1568",
"kafka.cluster.Partition,makeLeader,548",
"kafka.cluster.Partition,$anonfun$makeLeader$1,564",
"kafka.cluster.Partition,createLogIfNotExists,324",

Re: [ANNOUNCE] New Kafka PMC Member: Tom Bentley

2021-11-19 Thread Matthias J. Sax

Congrats!

On 11/19/21 2:29 AM, Tom Bentley wrote:

Thanks folks!

On Fri, Nov 19, 2021 at 5:16 AM Satish Duggana 
wrote:


Congratulations Tom!

On Fri, 19 Nov 2021 at 05:53, John Roesler  wrote:


Congratulations, Tom!

On Thu, Nov 18, 2021, at 17:53, Konstantine Karantasis wrote:

Congratulations Tom!

Konstantine


On Thu, Nov 18, 2021 at 2:44 PM Luke Chen  wrote:


Congrats, Tom!

Guozhang Wang  於 2021年11月19日 週五 上午1:13 寫道:


Congrats Tom!

Guozhang

On Thu, Nov 18, 2021 at 7:49 AM Jun Rao 

wrote:



Hi, Everyone,

Tom Bentley has been a Kafka committer since Mar. 15,  2021. He

has

been

very instrumental to the community since becoming a committer.

It's

my

pleasure to announce that Tom is now a member of Kafka PMC.

Congratulations Tom!

Jun
on behalf of Apache Kafka PMC




--
-- Guozhang











Re: Errors thrown from a KStream transformer are swallowed, eg. StackOverflowError

2021-11-19 Thread Matthias J. Sax

Not sure what version you are using, but it says `Throwable` in `trunk`:

https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L577


-Matthias

On 11/18/21 6:09 AM, John Roesler wrote:

Thanks for pointing that out, Scott!

You’re totally right; that should be a Throwable.

Just to put it out there, do you want to just send a quick PR? If not, no 
worries. I’m just asking because it seems like you’ve already done the hard 
part and it might be nice to get the contribution credit.

Thanks,
John

On Thu, Nov 18, 2021, at 08:00, Sinclair Scott wrote:

Hi there,


I'm a big fan of KStreams - thanks for all the great work!!


I unfortunately (my fault) had a StackOverflowError bug in my KStream
transformer which meant that the KStream died without reporting any
Exception at all.


The first log message showed some polling activity and then you see
later the State transition to PENDING_SHUTDOWN


Main Consumer poll completed in 2 ms and fetched 1 records
Flushing all global globalStores registered in the state manager
Idempotently invoking restoration logic in state RUNNING
Finished restoring all changelogs []
Idempotent restore call done. Thread state has not changed.
Processing tasks with 1 iterations.
Flushing all global globalStores registered in the state manager
State transition from RUNNING to PENDING_SHUTDOWN



This is because the StreamThread.run() method catches Exception only.
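As a standalone Java illustration (not Kafka code): a StackOverflowError is an
Error, not an Exception, so a catch of Exception never sees it, while a catch
of Throwable does.

public class CatchDemo {
    public static void main(String[] args) {
        try {
            recurse(0);
        } catch (Exception e) {
            // Never reached: StackOverflowError does not extend Exception.
            System.out.println("caught Exception: " + e);
        } catch (Throwable t) {
            // This branch fires and at least leaves a trace of what went wrong.
            System.out.println("caught Throwable: " + t);
        }
    }

    private static void recurse(int depth) {
        recurse(depth + 1);
    }
}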


I ended up recompiling the kstreams and changing the catch to Throwable
so I can see what was going on. Then I discovered my bad recursive call
  :(


Can we please change the Catch to catch Throwable , so that we are
always guaranteed some output?


StreamThread.java

@Override
public void run() {
    log.info("Starting");
    if (setState(State.STARTING) == null) {
        log.info("StreamThread already shutdown. Not running");
        return;
    }
    boolean cleanRun = false;
    try {
        runLoop();
        cleanRun = true;
    } catch (final Exception e) {
        // we have caught all Kafka related exceptions, and other runtime exceptions
        // should be due to user application errors

        if (e instanceof UnsupportedVersionException) {
            final String errorMessage = e.getMessage();
            if (errorMessage != null &&
                errorMessage.startsWith("Broker unexpectedly doesn't support requireStable flag on version ")) {

                log.error("Shutting down because the Kafka cluster seems to be on a too old version. " +
                    "Setting {}=\"{}\" requires broker version 2.5 or higher.",
                    StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                    EXACTLY_ONCE_BETA);

                throw e;
            }
        }

        log.error("Encountered the following exception during processing " +
            "and the thread is going to shut down: ", e);
        throw e;
    } finally {
        completeShutdown(cleanRun);
    }
}


Thanks and kind regards


Scott Sinclair


Re: Do we want to add more SMTs to Apache Kafka?

2021-11-19 Thread Brandon Brown
I agree: if the desire is to keep the internal collection of SMTs small, then 
providing ease of discovery as Gunnar suggests would be extremely 
helpful. 

Brandon Brown

> On Nov 19, 2021, at 6:13 PM, Gunnar Morling 
>  wrote:
> 
> Hi all,
> 
> Just came across this thread, I hope the late reply is ok.
> 
> FWIW, we're in a similar situation in Debezium, where users often request
> new (Debezium-specific) SMTs, and we generally tend to recommend them to be
> maintained by users themselves, unless they are truly generic. This
> excludes a share of users though who aren't Java developers.
> 
> What might help is having means of simple discoverability of externally
> hosted SMTs, e.g. via some kind of catalog hosted on kafka.apache.org. That
> way, people would find it easier to discover and obtain SMTs from other places,
> reducing the pressure to get them added to Apache Kafka proper.
> 
> Best,
> 
> --Gunnar
> 
> 
> 
> 
>> Am So., 7. Nov. 2021 um 21:49 Uhr schrieb Brandon Brown <
>> bran...@bbrownsound.com>:
>> 
>> I like the idea of a select number of SMTs being offered and supported out
>> of the box. The addition of SMTs via this process is nice because it allows
>> for a rich set to be supported out of the box and without the need for
>> extra work to deploy.
>> 
>> Perhaps this is a spot where the community could express the interest of
>> additional SMTs which maybe are available via an open source library and if
>> enough usage occurs there could be a path to fold into the Kafka project at
>> large?
>> 
>> Brandon Brown
>> 
>> 
 On Nov 7, 2021, at 1:19 PM, Randall Hauch  wrote:
>>> 
>>> We have had several requests to add more Connect Single Message
>>> Transforms (SMTs) to the project. When SMTs were first introduced with
>>> KIP-66 (ref 1) in Jun 2017, the KIP mentioned the following:
>>> 
 Criteria: SMTs that are shipped with Kafka Connect should be general
>> enough to apply to many data sources & serialization formats. They should
>> also be simple enough to not cause any additional library dependency to be
>> introduced.
 Beyond those being initially included with this KIP, transformations
>> can be adopted for inclusion in future with JIRA/ML discussion to weigh the
>> tradeoffs.
>>> 
>>> In the 4+ years that we've had SMTs in the project, we've only
>>> enhanced the framework with KIP-585 (ref 2), and fixed the initial
>>> SMTs (including KIP-437, ref 3). We recently have had quite a few
>>> requests to add new SMTs; a few samples of these include:
>>> * https://issues.apache.org/jira/browse/KAFKA-10299
>>> * https://issues.apache.org/jira/browse/KAFKA-9436
>>> * https://issues.apache.org/jira/browse/KAFKA-9318
>>> * https://issues.apache.org/jira/browse/KAFKA-12443
>>> 
>>> Adding new or changing existing SMTs to the Apache Kafka project comes
>>> with requirements. First, AK releases are infrequent and necessarily
>>> involve the entire project. Second, adding an SMT is an API change and
>>> therefore requires a KIP. Third, all changes in behavior to SMTs
>>> included in a prior AK release must be backward compatible, and
>>> adding or changing an SMT's configuration requires a KIP. This last
>>> one is also challenging if we're limiting ourselves to truly general
>>> SMTs, since these are notoriously difficult to get right the first
>>> time. All of these aspects mean that it's difficult to add, maintain,
>>> and evolve/improve SMTs in AK. And unless a bug fix is critical, we're
>>> likely not to create a patch release for AK just to fix a bug in an
>>> SMT, simply because of the effort involved.
>>> 
>>> On the other hand, anyone can easily implement their own SMT and
>>> deploy them as a Connect plugin, whether that's part of a connector
>>> plugin or a separate plugin dedicated to one or more SMTs. Interestingly,
>>> it's far simpler to implement and maintain custom SMTs outside of AK,
>>> especially since those plugins can be released and deployed in any
>>> Connect runtime version since at least 0.11.0. And if custom SMTs are
>>> maintained in a relatively small project, they can be released often.
>>> 
>>> Finally, KIP-26 (ref 4) specifically rejected maintaining connector
>>> implementations in the AK project. So we have precedent for choosing
>>> not to accept implementations.
>>> 
>>> Given the above, I wonder if the time has come for us to prefer only
>>> maintaining the SMT framework and existing SMTs, and to decline adding
>>> new SMTs.
>>> 
>>> Thoughts?
>>> 
>>> Best regards,
>>> 
>>> Randall Hauch
>>> 
>>> (1)
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect
>>> (2)
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
>>> (3)
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-437%3A+Custom+replacement+for+MaskField+SMT
>>> (4)
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767
>> 
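For reference, implementing an SMT outside of AK really is small. Below is a
hedged sketch against the standard Connect Transformation interface (the class
and its behavior are invented purely for illustration):

import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

/** Hypothetical SMT that upper-cases the topic name of every record. */
public class UppercaseTopic<R extends ConnectRecord<R>> implements Transformation<R> {

    @Override
    public R apply(R record) {
        // Copy the record, changing only the topic name.
        return record.newRecord(
            record.topic().toUpperCase(),
            record.kafkaPartition(),
            record.keySchema(), record.key(),
            record.valueSchema(), record.value(),
            record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef(); // no configuration for this toy example
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

Packaged as a jar on the Connect plugin path, something like this can be used
with any reasonably recent Connect runtime without touching AK itself.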


Re: Do we want to add more SMTs to Apache Kafka?

2021-11-19 Thread Gunnar Morling
Hi all,

Just came across this thread, I hope the late reply is ok.

FWIW, we're in a similar situation in Debezium, where users often request
new (Debezium-specific) SMTs, and we generally tend to recommend them to be
maintained by users themselves, unless they are truly generic. This
excludes a share of users though who aren't Java developers.

What might help is having means of simple discoverability of externally
hosted SMTs, e.g. via some kind of catalog hosted on kafka.apache.org. That
way, people would find it easier to discover and obtain SMTs from other places,
reducing the pressure to get them added to Apache Kafka proper.

Best,

--Gunnar




Am So., 7. Nov. 2021 um 21:49 Uhr schrieb Brandon Brown <
bran...@bbrownsound.com>:

> I like the idea of a select number of SMTs being offered and supported out
> of the box. The addition of SMTs via this process is nice because it allows
> for a rich set to be supported out of the box and without the need for
> extra work to deploy.
>
> Perhaps this is a spot where the community could express the interest of
> additional SMTs which maybe are available via an open source library and if
> enough usage occurs there could be a path to fold into the Kafka project at
> large?
>
> Brandon Brown
>
>
> > On Nov 7, 2021, at 1:19 PM, Randall Hauch  wrote:
> >
> > We have had several requests to add more Connect Single Message
> > Transforms (SMTs) to the project. When SMTs were first introduced with
> > KIP-66 (ref 1) in Jun 2017, the KIP mentioned the following:
> >
> >> Criteria: SMTs that are shipped with Kafka Connect should be general
> enough to apply to many data sources & serialization formats. They should
> also be simple enough to not cause any additional library dependency to be
> introduced.
> >> Beyond those being initially included with this KIP, transformations
> can be adopted for inclusion in future with JIRA/ML discussion to weigh the
> tradeoffs.
> >
> > In the 4+ years that we've had SMTs in the project, we've only
> > enhanced the framework with KIP-585 (ref 2), and fixed the initial
> > SMTs (including KIP-437, ref 3). We recently have had quite a few
> > requests to add new SMTs; a few samples of these include:
> > * https://issues.apache.org/jira/browse/KAFKA-10299
> > * https://issues.apache.org/jira/browse/KAFKA-9436
> > * https://issues.apache.org/jira/browse/KAFKA-9318
> > * https://issues.apache.org/jira/browse/KAFKA-12443
> >
> > Adding new or changing existing SMTs to the Apache Kafka project comes
> > with requirements. First, AK releases are infrequent and necessarily
> > involve the entire project. Second, adding an SMT is an API change and
> > therefore requires a KIP. Third, all changes in behavior to SMTs
> > included in a prior AK release must be backward compatible, and
> > adding or changing an SMT's configuration requires a KIP. This last
> > one is also challenging if we're limiting ourselves to truly general
> > SMTs, since these are notoriously difficult to get right the first
> > time. All of these aspects mean that it's difficult to add, maintain,
> > and evolve/improve SMTs in AK. And unless a bug fix is critical, we're
> > likely not to create a patch release for AK just to fix a bug in an
> > SMT, simply because of the effort involved.
> >
> > On the other hand, anyone can easily implement their own SMT and
> > deploy them as a Connect plugin, whether that's part of a connector
> > plugin or a separate plugin dedicated to one or more SMTs. Interestingly,
> > it's far simpler to implement and maintain custom SMTs outside of AK,
> > especially since those plugins can be released and deployed in any
> > Connect runtime version since at least 0.11.0. And if custom SMTs are
> > maintained in a relatively small project, they can be released often.
> >
> > Finally, KIP-26 (ref 4) specifically rejected maintaining connector
> > implementations in the AK project. So we have precedent for choosing
> > not to accept implementations.
> >
> > Given the above, I wonder if the time has come for us to prefer only
> > maintaining the SMT framework and existing SMTs, and to decline adding
> > new SMTs.
> >
> > Thoughts?
> >
> > Best regards,
> >
> > Randall Hauch
> >
> > (1)
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect
> > (2)
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
> > (3)
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-437%3A+Custom+replacement+for+MaskField+SMT
> > (4)
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767
>


Re: [DISCUSS] KIP-794: Strictly Uniform Sticky Partitioner

2021-11-19 Thread Artem Livshits
Hello,

During implementation it turned out that the existing Partitioner.partition
method doesn't have enough arguments to accurately estimate record size in
bytes (e.g. it doesn't have headers, cannot take compression into account,
etc.).  So I'm proposing to add a new Partitioner.partition method that
takes a callback that can be used to estimate record size.
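Purely as an illustration of the shape being discussed (the names below are
hypothetical, not the KIP's final API; see the KIP page for the actual
proposal):

import org.apache.kafka.common.Cluster;

// Hypothetical sketch only.
interface SizeAwarePartitioner extends org.apache.kafka.clients.producer.Partitioner {

    /** Callback the producer could pass so the partitioner can estimate the
     *  serialized record size (headers, compression, ...) of the current record. */
    interface RecordSizeEstimator {
        int estimateSizeInBytes(int partition);
    }

    /** Hypothetical overload; by default it delegates to the existing method. */
    default int partition(String topic, Object key, byte[] keyBytes,
                          Object value, byte[] valueBytes, Cluster cluster,
                          RecordSizeEstimator sizeEstimator) {
        return partition(topic, key, keyBytes, value, valueBytes, cluster);
    }
}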

I've updated the KIP correspondingly
https://cwiki.apache.org/confluence/display/KAFKA/KIP-794%3A+Strictly+Uniform+Sticky+Partitioner

-Artem

On Mon, Nov 8, 2021 at 5:42 PM Artem Livshits 
wrote:

> Hi Luke, Justine,
>
> Thank you for feedback and questions. I've added clarification to the KIP.
>
> > there will be some period of time the distribution is not even.
>
> That's correct.  There would be a small temporary imbalance, but over time
> the distribution should be uniform.
>
> > 1. This paragraph is a little confusing, because there's no "batch mode"
> or "non-batch mode", right?
>
> Updated the wording to not use "batch mode"
>
> > 2. In motivation, you mentioned 1 drawback of current
> UniformStickyPartitioner is
>
> The problem with the current implementation is that it switches once a new
> batch is created which may happen after the first record when linger.ms=0.
> The new implementation won't switch after the batch, so even if the first
> record got sent out in a batch, the second record would be produced to the
> same partition.  Once we have 5 batches in-flight, the new records will
> pile up in the accumulator.
>
> > I was curious about how the logic automatically switches here.
>
> Added some clarifications to the KIP.  Basically, because we can only have
> 5 in-flight batches, as soon as the first 5 are in-flight, the records
> start piling up in the accumulator.  If the rate is low, records get sent
> quickly (e.g. if we have latency 50ms, and produce less than 20 rec / sec,
> then each record will often get sent in its own batch, because a batch
> would often complete before a new record arrives).  If the rate is high,
> then the first few records get sent quickly, but then records will batch
> together until one of the in-flight batches completes, the higher the rate
> is (or the higher latency is), the larger the batches are.
>
> This is not a new logic, btw, this is how it works now, the new logic just
> helps to utilize this better by giving the partition an opportunity to hit
> 5 in-flight and start accumulating sooner.  KIP-782 will make this even
> better, so batches could also grow beyond 16KB if production rate is high.
>
> -Artem
>
>
> On Mon, Nov 8, 2021 at 11:56 AM Justine Olshan
>  wrote:
>
>> Hi Artem,
>> Thanks for working on improving the Sticky Partitioner!
>>
>> I had a few questions about this portion:
>>
>> *The batching will continue until either an in-flight batch completes or
>> we
>> hit the N bytes and move to the next partition.  This way it takes just 5
>> records to get to batching mode, not 5 x number of partition records, and
>> the batching mode will stay longer as we'll be batching while waiting for
>> a
>> request to be completed.  As the production rate accelerates, the logic
>> will automatically switch to use larger batches to sustain higher
>> throughput.*
>>
>> *If one of the brokers has higher latency the records for the partitions
>> hosted on that broker are going to form larger batches, but it's still
>> going to be the same *amount of records* sent less frequently in larger
>> batches, the logic automatically adapts to that.*
>>
>> I was curious about how the logic automatically switches here. It seems
>> like we are just adding *partitioner.sticky.batch.size *which seems like a
>> static value. Can you go into more detail about this logic? Or clarify
>> something I may have missed.
>>
>> On Mon, Nov 8, 2021 at 1:34 AM Luke Chen  wrote:
>>
>> > Thanks Artem,
>> > It's much better now.
>> > I've got your idea. In KIP-480: Sticky Partitioner, we change the
>> > partition (i.e., call the partitioner) when either one of the conditions
>> > below matches:
>> > 1. the batch is full
>> > 2. when linger.ms is up
>> > But you are changing the definition into: switch once the
>> > "partitioner.sticky.batch.size" size is reached.
>> >
>> > It'll fix the uneven distribution issue, because we do the sent-bytes
>> > calculation on the producer side.
>> > But it might have another issue that when the producer rate is low,
>> there
>> > will be some period of time the distribution is not even. Ex:
>> > tp-1: 12KB
>> > tp-2: 0KB
>> > tp-3: 0KB
>> > tp-4: 0KB
>> > while the producer is still keeping sending records into tp-1 (because
>> we
>> > haven't reached the 16KB threshold)
>> > Maybe the user should set a good value to
>> "partitioner.sticky.batch.size"
>> > to fix this issue?
>> >
>> > Some comment to the KIP:
>> > 1. This paragraph is a little confusing, because there's no "batch
>> mode" or
>> > "non-batch mode", right?
>> >
>> > > The batching will continue until either an in-flight batch completes
>> or
>> > we hit the N 

[VOTE] KIP-798 Add possibility to write kafka headers in Kafka Console Producer

2021-11-19 Thread flo





Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-19 Thread Jun Rao
Hi, David,

Thanks for the reply. The new CLI sounds reasonable to me.

16.
For case C, choosing the latest version sounds good to me.
For case B, for metadata.version, we pick version 1 since it just happens
that for metadata.version version 1 is backward compatible. How do we make
this more general for other features?

21. Do you still plan to change "delete" to "disable" in the CLI?

Thanks,

Jun



On Thu, Nov 18, 2021 at 11:50 AM David Arthur
 wrote:

> Jun,
>
> The KIP has some changes to the CLI for KIP-584. With Jason's suggestion
> incorporated, these three commands would look like:
>
> 1) kafka-features.sh upgrade --release latest
> upgrades all known features to their defaults in the latest release
>
> 2) kafka-features.sh downgrade --release 3.x
> downgrade all known features to the default versions from 3.x
>
> 3) kafka-features.sh describe --release latest
> Describe the known features of the latest release
>
> The --upgrade-all and --downgrade-all are replaced by specifying each
> feature+version or by giving the --release argument. One problem with
> --downgrade-all is it's not clear what the target versions should be: the
> previous version before the last upgrade -- or the lowest supported
> version. Since downgrades will be less common, I think it's better to make
> the operator be more explicit about the desired downgrade version (either
> through [--feature X --version Y] or [--release 3.1]). Does that seem
> reasonable?
>
> 16. For case C, I think we will want to always use the latest
> metadata.version for new clusters (like we do for IBP). We can always
> change what we mean by "default" down the road. Also, this will likely
> become a bit of information we include in release and upgrade notes with
> each release.
>
> -David
>
> On Thu, Nov 18, 2021 at 1:14 PM Jun Rao  wrote:
>
> > Hi, Jason, David,
> >
> > Just to clarify on the interaction with the end user, the design in
> KIP-584
> > allows one to do the following.
> > (1) kafka-features.sh  --upgrade-all --bootstrap-server
> > kafka-broker0.prn1:9071 to upgrade all features to the latest version
> known
> > by the tool. The user doesn't need to know a specific feature version.
> > (2) kafka-features.sh  --downgrade-all --bootstrap-server
> > kafka-broker0.prn1:9071 to downgrade all features to the latest version
> > known by the tool. The user doesn't need to know a specific feature
> > version.
> > (3) kafka-features.sh  --describe --bootstrap-server
> > kafka-broker0.prn1:9071 to find out the supported version for each
> feature.
> > This allows the user to upgrade each feature individually.
> >
> > Most users will be doing (1) and occasionally (2), and won't need to know
> > the exact version of each feature.
> >
> > 16. For case C, what's the default version? Is it always the latest?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Thu, Nov 18, 2021 at 8:15 AM David Arthur
> >  wrote:
> >
> > > Colin, thanks for the detailed response. I understand what you're
> saying
> > > and I agree with your rationale.
> > >
> > > It seems like we could just initialize their cluster.metadata to 1 when
> > the
> > > > software is upgraded to 3.2.
> > > >
> > >
> > > Concretely, this means the controller would see that there is no
> > > FeatureLevelRecord in the log, and so it would bootstrap one. Normally
> > for
> > > cluster initialization, this would be read from meta.properties, but in
> > the
> > > case of preview clusters we would need to special case it to initialize
> > the
> > > version to 1.
> > >
> > > Once the new FeatureLevelRecord has been committed, any nodes (brokers
> or
> > > controllers) that are running the latest software will react to the new
> > > metadata.version. This means we will need to make this initial version
> > of 1
> > > be backwards compatible to what we have in 3.0 and 3.1 (since some
> > brokers
> > > will be on the new software and some on older/preview versions)
> > >
> > > I agree that auto-upgrading preview users from IBP 3.0 to
> > metadata.version
> > > 1 (equivalent to IBP 3.2) is probably fine.
> > >
> > > Back to Jun's case B:
> > >
> > > b. We upgrade from an old version where no metadata.version has been
> > > > finalized. In this case, it makes sense to leave metadata.version
> > > disabled
> > > > since we don't know if all brokers have been upgraded.
> > >
> > >
> > > Instead of leaving it uninitialized, we would initialize it to 1 which
> > > would be backwards compatible to 3.0 and 3.1. This would eliminate the
> > need
> > > to check a "final IBP" or deal with 3.2+ clusters without a
> > > metadata.version set. The downgrade path for 3.2 back to a preview
> > release
> > > should be fine since we are saying that metadata.version 1 is
> compatible
> > > with those releases. The FeatureLevelRecord would exist, but it would
> be
> > > ignored (we might need to make sure this actually works in 3.0).
> > >
> > > For clarity, here are the three workflows of Jun's three cases:
> > >
> > > A (KRaft 

[jira] [Resolved] (KAFKA-13405) Kafka Streams restore-consumer fails to refresh broker IPs after upgrading Kafka cluster

2021-11-19 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-13405.
-
Resolution: Not A Bug

> Kafka Streams restore-consumer fails to refresh broker IPs after upgrading 
> Kafka cluster
> 
>
> Key: KAFKA-13405
> URL: https://issues.apache.org/jira/browse/KAFKA-13405
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Daniel O'Halloran
>Priority: Critical
> Attachments: KafkaConfig.txt, KafkaLogs.txt
>
>
> *+Description+*
> After upgrading our Kafka clusters from 2.7 to 2.8 the Streams 
> restore-consumers never update their broker IPs.
> The applications continue to process events as normal, until there is a 
> rebalance.
> Once a rebalance occurs, the restore consumers attempt to connect to the old 
> brokers' IPs indefinitely and the streams tasks never go back into a RUNNING 
> state.
> We were able to replicate this behaviour with kafka-streams client libraries 
> 2.5.1, 2.7.1 and 2.8.0
>  
> *+Steps to reproduce+*
>  # Upgrade brokers from Kafka 2.7 to Kafka 2.8
>  # Ensure old brokers are completely shut down
>  # Trigger a rebalance of a streams application
>  
> *+Expected result+*
>  * Application rebalances as normal
>  
> *+Actual Result+*
>  * Application cannot restore its data
>  * restore consumer tries to connect to old brokers indefinitely
>  
> *+Observations+*
>  * The cluster metadata was updated on all stream consumer threads during the 
> broker upgrade (multiple times) as the new brokers were brought online 
> (corresponding to leader election occurring on the subscribed partitions), 
> however no cluster metadata was logged on the restore-consumer thread.
>  * None of the original broker IPs are valid/accessible after the upgrade (as 
> expected)
>  * No partition rebalance occurs during the kafka upgrade process.
>  * When the first re-balance was triggered after the upgrade, the 
> restore-consumer loops over failed connection attempts to each of the 3 
> original IPs, but never tries any of the new broker IPs.
>  
> *+Workaround+*
>  * Restart our applications after upgrading our Kafka cluster



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13467) Clients never refresh cached bootstrap IPs

2021-11-19 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-13467:
---

 Summary: Clients never refresh cached bootstrap IPs
 Key: KAFKA-13467
 URL: https://issues.apache.org/jira/browse/KAFKA-13467
 Project: Kafka
  Issue Type: Improvement
  Components: clients, network
Reporter: Matthias J. Sax


Follow up ticket to https://issues.apache.org/jira/browse/KAFKA-13405.

For certain broker rolling-upgrade scenarios, it would be beneficial to expire 
cached bootstrap server IP addresses and re-resolve them, so that clients can 
re-connect to the cluster without needing to be restarted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [kafka-site] mumrah merged pull request #384: Update David Arthur photo

2021-11-19 Thread GitBox


mumrah merged pull request #384:
URL: https://github.com/apache/kafka-site/pull/384


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [VOTE} KIP-796: Interactive Query v2

2021-11-19 Thread Vasiliki Papavasileiou
I think this KIP will greatly improve how we handle IQ in streams so +1
(non-binding) from me.

Thank you John!

On Thu, Nov 18, 2021 at 9:45 PM Patrick Stuedi 
wrote:

> +1 (non-binding), thanks John!
> -Patrick
>
> On Thu, Nov 18, 2021 at 12:27 AM John Roesler  wrote:
>
> > Hello all,
> >
> > I'd like to open the vote for KIP-796, which proposes
> > a revamp of the Interactive Query APIs in Kafka Streams.
> >
> > The proposal is here:
> > https://cwiki.apache.org/confluence/x/34xnCw
> >
> > Thanks to all who reviewed the proposal, and thanks in
> > advance for taking the time to vote!
> >
> > Thank you,
> > -John
> >
>


[jira] [Resolved] (KAFKA-9648) Add configuration to adjust listen backlog size for Acceptor

2021-11-19 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-9648.

Fix Version/s: 3.2.0
 Reviewer: David Jacot
   Resolution: Fixed

> Add configuration to adjust listen backlog size for Acceptor
> 
>
> Key: KAFKA-9648
> URL: https://issues.apache.org/jira/browse/KAFKA-9648
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: li xiangyuan
>Assignee: Haruki Okada
>Priority: Minor
> Fix For: 3.2.0
>
>
> I described a mysterious problem in 
> https://issues.apache.org/jira/browse/KAFKA-9211: the Kafka server can 
> trigger TCP congestion control under certain conditions. We finally found 
> the root cause.
> When a Kafka server restarts for any reason and a preferred replica leader 
> election is then executed, many partition leaderships move back to it and 
> trigger a cluster metadata update, so all clients establish new connections 
> to this server. At that moment many TCP connection requests are waiting in 
> the TCP SYN queue and then in the accept queue.
> Kafka creates the server socket in SocketServer.scala:
>  
> {code:java}
> serverChannel.socket.bind(socketAddress);{code}
> This method has a second parameter, "backlog"; min(backlog, 
> tcp_max_syn_backlog) decides the queue length. Because Kafka does not set 
> it, the default value of 50 is used.
> If this queue is full and tcp_syncookies = 0, new connection requests are 
> rejected. If tcp_syncookies = 1, the TCP syncookie mechanism is triggered. 
> That mechanism lets Linux handle more TCP SYN requests, but it drops several 
> TCP options, including "wscale", the one that allows a TCP connection to 
> send many more bytes per packet. Because syncookies were triggered, wscale 
> is lost, and the affected connections handle network traffic very slowly, 
> forever, until they are closed and new connections are established.
> So after a preferred replica election executes, many new TCP connections are 
> established without wscale, and much of the network traffic to this server 
> is very slow.
> I'm not sure whether newer Linux versions have resolved this problem, but 
> Kafka should also set the backlog to a larger value. We have now changed it 
> to 512 and everything seems fine.
>  
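For reference, the backlog in question is the standard second argument to java.net.ServerSocket#bind. A minimal, self-contained illustration of a configurable listen backlog (sketch only, not the actual Kafka patch; the value 512 mirrors the number mentioned above):
{code:java}
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogExample {
    public static void main(String[] args) throws Exception {
        int listenBacklogSize = 512; // hypothetical value for the proposed config
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        // The second argument is the requested listen backlog; the kernel may cap it
        // (e.g. via net.core.somaxconn / tcp_max_syn_backlog).
        serverChannel.socket().bind(new InetSocketAddress(9092), listenBacklogSize);
        System.out.println("Listening with requested backlog " + listenBacklogSize);
    }
}{code}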



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: KIP-800: Add reason to LeaveGroupRequest

2021-11-19 Thread David Jacot
I have updated the KIP.

Best,
David

On Fri, Nov 19, 2021 at 3:00 PM David Jacot  wrote:
>
> Thank you all for your feedback. Let me address all your points below.
>
> Luke,
> 1. I use a tag field so bumping the request version is not necessary. In
> this case, using a tag field does not seem to be the best approach so
> I will use a regular one and bump the version.
> 2. Sounds good. I will fix that comment.
> 3. That is a good question. My intent was to avoid getting weird or cryptic
> reasons logged on the broker so I thought that having a standard one is
> better. As Sophie suggested something similar for the `enforceRebalance`
> API, we could do it for both, I suppose.
>
> Ismael,
> 1. That's a good point. I chose to use a tag field to avoid having to bump
> the request version. In this particular case, it seems that bumping the
> version does not cost much so it is perhaps better. I will change this.
>
> Sophie,
> 1. That's a pretty good idea, thanks. Let me update the KIP to include
> a reason in the JoinGroup request. Regarding the Consumer API, do
> you think that there is value for KStreams to expose the possibility to
> provide a reason? Otherwise, it might be better to use a default
> reason in this case.
> 2. I don't fully get your point about providing the reason to the group
> leader assignor on the client. Do you think that we should propagate
> it to all the consumers or to the leader as well? The user usually has
> access to all its client logs so I am not sure that it is really necessary.
> Could you elaborate?
>
> I will update the KIP soon.
>
> Best,
> David
>
> On Sat, Nov 13, 2021 at 6:00 AM Sophie Blee-Goldman
>  wrote:
> >
> > This sounds great, thanks David.
> >
> > One thought: what do you think about doing something similar for the
> > JoinGroup request?
> >
> > When you only have broker logs and not client logs, it's somewhere between
> > challenging and
> > impossible to determine the reason for a rebalance that was triggered
> > explicitly by the client or
> > even the user. For example, when a followup rebalance is requested to
> > assign the revoked
> > partitions after a cooperative rebalance. Or any of the many reasons we
> > trigger a rebalance
> > in Kafka Streams, via the #enforceRebalance API.
> >
> > Perhaps we could add a parameter to that method as such:
> >
> > public void enforceRebalance(final String reason);
> >
> > Then we can add that to the JoinGroup request/ConsumerProtocol. Not only
> > would it help to
> > log this reason on the broker side, the information about who requested the
> > rebalance and why
> > could be extremely useful to have available to the group leader/partition
> > assignor on the client
> > side.
> >
> > Cheers,
> > Sophie
> >
> > On Fri, Nov 12, 2021 at 10:05 AM Ismael Juma  wrote:
> >
> > > Thanks David, this is a worthwhile improvement. Quick question, why did we
> > > pick a tagged field here?
> > >
> > > Ismael
> > >
> > > On Thu, Nov 11, 2021, 8:32 AM David Jacot 
> > > wrote:
> > >
> > > > Hi folks,
> > > >
> > > > I'd like to discuss this very small KIP which proposes to add a reason
> > > > field
> > > > to the LeaveGroupRequest in order to let the broker know why a member
> > > > left the group. This would be really handy for administrators.
> > > >
> > > > KIP: https://cwiki.apache.org/confluence/x/eYyqCw
> > > >
> > > > Cheers,
> > > > David
> > > >
> > >


Re: [VOTE] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-11-19 Thread Knowles Atchison Jr
Thank you all for voting. We still need two more binding votes.

I have rebased and updated the PR to be ready to go once this vote passes:

https://github.com/apache/kafka/pull/11382
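For anyone catching up on the thread: with errors.tolerance=all, a producer
failure no longer fails the task; instead commitRecord() is invoked with a null
RecordMetadata. A hedged sketch of how a connector might handle that (the class
is hypothetical):

import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public abstract class MySourceTask extends SourceTask {
    @Override
    public void commitRecord(SourceRecord record, RecordMetadata metadata) throws InterruptedException {
        if (metadata == null) {
            // The producer failed on this record and the failure was tolerated
            // (errors.tolerance=all); track or log it here rather than failing the task.
            return;
        }
        // Normal path: metadata carries the topic, partition, and offset of the write.
    }
}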

Knowles

On Tue, Nov 16, 2021 at 3:43 PM Chris Egerton 
wrote:

> +1 (non-binding). Thanks Knowles!
>
> On Tue, Nov 16, 2021 at 10:48 AM Arjun Satish 
> wrote:
>
> > +1 (non-binding). Thanks for the KIP, Knowles! and appreciate the
> > follow-ups!
> >
> > On Thu, Nov 11, 2021 at 2:55 PM John Roesler 
> wrote:
> >
> > > Thanks, Knowles!
> > >
> > > I'm +1 (binding)
> > >
> > > -John
> > >
> > > On Wed, 2021-11-10 at 12:42 -0500, Christopher Shannon
> > > wrote:
> > > > +1 (non-binding). This looks good to me and will be useful as a way
> to
> > > > handle producer errors.
> > > >
> > > > On Mon, Nov 8, 2021 at 8:55 AM Knowles Atchison Jr <
> > > katchiso...@gmail.com>
> > > > wrote:
> > > >
> > > > > Good morning,
> > > > >
> > > > > I'd like to start a vote for KIP-779: Allow Source Tasks to Handle
> > > Producer
> > > > > Exceptions:
> > > > >
> > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-779%3A+Allow+Source+Tasks+to+Handle+Producer+Exceptions
> > > > >
> > > > > The purpose of this KIP is to allow Source Tasks the option to
> > "ignore"
> > > > > kafka producer exceptions. After a few iterations, this is now part
> > of
> > > the
> > > > > "errors.tolerance" configuration and provides a null RecordMetadata
> > to
> > > > > commitRecord() in lieu of a new SourceTask interface method or
> worker
> > > > > configuration item.
> > > > >
> > > > > PR is here:
> > > > >
> > > > > https://github.com/apache/kafka/pull/11382
> > > > >
> > > > > Any comments and feedback are welcome.
> > > > >
> > > > > Knowles
> > > > >
> > >
> > >
> > >
> >
>


Re: KIP-800: Add reason to LeaveGroupRequest

2021-11-19 Thread David Jacot
Thank you all for your feedback. Let me address all your points below.

Luke,
1. I use a tag field so bumping the request version is not necessary. In
this case, using a tag field does not seem to be the best approach so
I will use a regular one and bump the version.
2. Sounds good. I will fix that comment.
3. That is a good question. My intent was to avoid getting weird or cryptic
reasons logged on the broker so I thought that having a standard one is
better. As Sophie suggested something similar for the `enforceRebalance`
API, we could do it for both, I suppose.

Ismael,
1. That's a good point. I chose to use a tag field to avoid having to bump
the request version. In this particular case, it seems that bumping the
version does not cost much so it is perhaps better. I will change this.

Sophie,
1. That's a pretty good idea, thanks. Let me update the KIP to include
a reason in the JoinGroup request. Regarding the Consumer API, do
you think that there is value for KStreams to expose the possibility to
provide a reason? Otherwise, it might be better to use a default
reason in this case.
2. I don't fully get your point about providing the reason to the group
leader assignor on the client. Do you think that we should propagate
it to all the consumers or to the leader as well? The user usually has
access to all its client logs so I am not sure that it is really necessary.
Could you elaborate?

I will update the KIP soon.

Best,
David

On Sat, Nov 13, 2021 at 6:00 AM Sophie Blee-Goldman
 wrote:
>
> This sounds great, thanks David.
>
> One thought: what do you think about doing something similar for the
> JoinGroup request?
>
> When you only have broker logs and not client logs, it's somewhere between
> challenging and
> impossible to determine the reason for a rebalance that was triggered
> explicitly by the client or
> even the user. For example, when a followup rebalance is requested to
> assign the revoked
> partitions after a cooperative rebalance. Or any of the many reasons we
> trigger a rebalance
> in Kafka Streams, via the #enforceRebalance API.
>
> Perhaps we could add a parameter to that method as such:
>
> public void enforceRebalance(final String reason);
>
> Then we can add that to the JoinGroup request/ConsumerProtocol. Not only
> would it help to
> log this reason on the broker side, the information about who requested the
> rebalance and why
> could be extremely useful to have available to the group leader/partition
> assignor on the client
> side.
>
> Cheers,
> Sophie
>
> On Fri, Nov 12, 2021 at 10:05 AM Ismael Juma  wrote:
>
> > Thanks David, this is a worthwhile improvement. Quick question, why did we
> > pick a tagged field here?
> >
> > Ismael
> >
> > On Thu, Nov 11, 2021, 8:32 AM David Jacot 
> > wrote:
> >
> > > Hi folks,
> > >
> > > I'd like to discuss this very small KIP which proposes to add a reason
> > > field
> > > to the LeaveGroupRequest in order to let the broker know why a member
> > > left the group. This would be really handy for administrators.
> > >
> > > KIP: https://cwiki.apache.org/confluence/x/eYyqCw
> > >
> > > Cheers,
> > > David
> > >
> >


[jira] [Created] (KAFKA-13466) when kafka-console-producer.sh, batch.size defaults is Inconsistent with official documentation

2021-11-19 Thread peterwanner (Jira)
peterwanner created KAFKA-13466:
---

 Summary: when kafka-console-producer.sh, batch.size defaults  is 
Inconsistent with official documentation
 Key: KAFKA-13466
 URL: https://issues.apache.org/jira/browse/KAFKA-13466
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: peterwanner
 Attachments: image-2021-11-19-04-05-15-754.png, 
image-2021-11-19-04-09-28-299.png

The official documentation says:

!image-2021-11-19-04-05-15-754.png!

The shell script's default is:

!image-2021-11-19-04-09-28-299.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] Bringing Kafka towards Scala 3

2021-11-19 Thread Josep Prat
Hi there,
I just wanted to bump this thread as there has been a great development.
Gradle 7.3 is now released and it comes with Scala 3 support (release notes
). This means that one of
the roadblocks to migrate Kafka to Scala 3 has been cleared.
Shall I write a KIP to decide if we want to migrate to Scala 3 right away
or wait until Kafka 4.0 in which Scala 2.12 will be removed?

I also had a favour to ask, could someone take a look at
https://github.com/apache/kafka/pull/11432? The purpose of the PR is to
improve some Scala code to make it easy to migrate in the future, the PR
has been sitting there for several weeks now. I would highly appreciate
some reviews in there so we can move it forward before more merge conflicts
appear.

Thanks a lot!

On Tue, Nov 2, 2021 at 9:06 AM Josep Prat  wrote:

> Hi Colin,
>
> We can take 2 different paths, both of them need a similar effort.
> — Offer Scala 3 along with Scala 2.12 and 2.13
> — Offer Scala 3 only once Scala 2.12 is removed
>
> On the draft PR I shared in the initial email (
> https://github.com/apache/kafka/pull/11350), I used the first approach,
> and it didn't require any additional work. This means we can decide to take
> any of the 2 approaches freely. Offering Scala 3 only once Scala 2.12 is
> removed means waiting a bit, but it's perfectly doable.
> My personal opinion is that, given Scala 3 is a major release (epoch
> according to Scala version schema), we might want to build in Scala 3
> sooner rather than later. We could label this as experimental Scala 3
> support.
> However, until Gradle 7.3 is released, we can't proceed any further with
> building with Scala 3. Currently, Gradle 7.3 is under RC3, so I expect it
> to be available soon.
>
> Regarding the changes I propose to add right now as I mentioned in the
> first email, you can see exactly what I meant in this PR:
> https://github.com/apache/kafka/pull/11432. Sometimes, Kafka uses some
> syntax or approaches that are deprecated and they are removed in Scala 3.
> This PR goes over those and uses the "accepted" syntax. Feel free to take
> a look and review it.
>
> Thanks for your feedback Colin!
>
> Best,
> ———
> Josep Prat
>
> Aiven Deutschland GmbH
>
> Immanuelkirchstraße 26, 10405 Berlin
>
> Amtsgericht Charlottenburg, HRB 209739 B
>
> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
>
> m: +491715557497
>
> w: aiven.io
>
> e: josep.p...@aiven.io
>
> On Mon, Nov 1, 2021, 23:45 Colin McCabe  wrote:
>
>> Thanks for looking at this, Josep... I guess the way I always imagined
>> this happening was that a Scala 3 release would become one of our two
>> supported Scala versions. So instead of 2.12 and 2.13, we'd have 2.13 and
>> 3.x, for some X. Do you think that approach would be practical?
>>
>> best,
>> Colin
>>
>>
>> On Fri, Oct 8, 2021, at 09:05, Ismael Juma wrote:
>> > Yeah, changes that are good generally can be submitted any time.
>> >
>> > Ismael
>> >
>> > On Fri, Oct 8, 2021 at 7:35 AM Josep Prat 
>> > wrote:
>> >
>> >> Hi Ismael,
>> >>
>> >> Thanks for the reply. I don't think the demand is high at the moment,
>> but
>> >> it will increase overtime.
>> >> What do you think about me submitting a PR now with only the changes I
>> >> mention in group a)? They are small changes like dropping extra
>> >> parenthesis, or stopping shadowing names. The resulting code would be
>> >> cleaner Scala that can cross compile.
>> >>
>> >> Thanks again,
>> >> ———
>> >> Josep Prat
>> >>
>> >> Aiven Deutschland GmbH
>> >>
>> >> Immanuelkirchstraße 26, 10405 Berlin
>> >>
>> >> Amtsgericht Charlottenburg, HRB 209739 B
>> >>
>> >> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
>> >>
>> >> m: +491715557497
>> >>
>> >> w: aiven.io
>> >>
>> >> e: josep.p...@aiven.io
>> >>
>> >> On Fri, Oct 8, 2021, 16:22 Ismael Juma  wrote:
>> >>
>> >> > Hi Josep,
>> >> >
>> >> > Thanks for looking into this. As you said, it seems like there are a
>> >> number
>> >> > of rough edges still. Do we have many users asking for this right
>> now? If
>> >> > not, it may make sense to wait for things to get a bit more mature
>> before
>> >> > adding the burden of a third Scala version. Ideally, we would drop
>> Scala
>> >> > 2.12 before we add Scala 3 support.
>> >> >
>> >> > Ismael
>> >> >
>> >> > On Fri, Oct 8, 2021 at 6:07 AM Josep Prat
>> 
>> >> > wrote:
>> >> >
>> >> > > Hi there,
>> >> > > For the last month I was actively working on migrating Apache
>> Kafka to
>> >> > > Scala 3. For what I could see, Apache Kafka is the biggest mixed
>> >> > Java-Scala
>> >> > > project to attempt a migration to Scala 3. This means, a lot of
>> >> > regressions
>> >> > > in regards to Java and Scala interoperability at bytecode were
>> found
>> >> > within
>> >> > > this task. You can take a look at the PR here:
>> >> > > https://github.com/apache/kafka/pull/11350
>> >> > > Currently, the status of the migration is as follows:
>> >> > >
>> >> > >- All tests pass in Scala 3 except one (

Re: [ANNOUNCE] New Kafka PMC Member: Tom Bentley

2021-11-19 Thread Tom Bentley
Thanks folks!

On Fri, Nov 19, 2021 at 5:16 AM Satish Duggana 
wrote:

> Congratulations Tom!
>
> On Fri, 19 Nov 2021 at 05:53, John Roesler  wrote:
>
> > Congratulations, Tom!
> >
> > On Thu, Nov 18, 2021, at 17:53, Konstantine Karantasis wrote:
> > > Congratulations Tom!
> > >
> > > Konstantine
> > >
> > >
> > > On Thu, Nov 18, 2021 at 2:44 PM Luke Chen  wrote:
> > >
> > >> Congrats, Tom!
> > >>
> > >> Guozhang Wang  於 2021年11月19日 週五 上午1:13 寫道:
> > >>
> > >> > Congrats Tom!
> > >> >
> > >> > Guozhang
> > >> >
> > >> > On Thu, Nov 18, 2021 at 7:49 AM Jun Rao 
> > >> wrote:
> > >> >
> > >> > > Hi, Everyone,
> > >> > >
> > >> > > Tom Bentley has been a Kafka committer since Mar. 15,  2021. He
> has
> > >> been
> > >> > > very instrumental to the community since becoming a committer.
> It's
> > my
> > >> > > pleasure to announce that Tom is now a member of Kafka PMC.
> > >> > >
> > >> > > Congratulations Tom!
> > >> > >
> > >> > > Jun
> > >> > > on behalf of Apache Kafka PMC
> > >> > >
> > >> >
> > >> >
> > >> > --
> > >> > -- Guozhang
> > >> >
> > >>
> >
>