Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Boyang Chen
Congrats David, well deserved!

On Fri, Oct 16, 2020 at 6:45 PM John Roesler  wrote:

> Congratulations, David!
> -John
>
> On Fri, Oct 16, 2020, at 20:15, Konstantine Karantasis wrote:
> > Congrats, David!
> >
> > Konstantine
> >
> >
> > On Fri, Oct 16, 2020 at 3:36 PM Ismael Juma  wrote:
> >
> > > Congratulations David!
> > >
> > > Ismael
> > >
> > > On Fri, Oct 16, 2020 at 9:01 AM Gwen Shapira 
> wrote:
> > >
> > > > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > > > we are excited to say that he accepted!
> > > >
> > > > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > > > and has been very active since August 2019. He contributed several
> > > > notable KIPs:
> > > >
> > > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > > > KIP-570: Add leader epoch in StopReplicaRequest
> > > > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > > Operations
> > > > KIP-496: Added an API for the deletion of consumer offsets
> > > >
> > > > In addition, David Jacot reviewed many community contributions and
> > > > showed great technical and architectural taste. Great reviews are
> hard
> > > > and often thankless work - but this is what makes Kafka a great
> > > > product and helps us grow our community.
> > > >
> > > > Thanks for all the contributions, David! Looking forward to more
> > > > collaboration in the Apache Kafka community.
> > > >
> > > > --
> > > > Gwen Shapira
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-10616) StreamThread killed by "IllegalStateException: The processor is already closed"

2020-10-16 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-10616:
---

 Summary: StreamThread killed by "IllegalStateException: The 
processor is already closed"
 Key: KAFKA-10616
 URL: https://issues.apache.org/jira/browse/KAFKA-10616
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman
 Fix For: 2.7.0


The application is hitting "java.lang.IllegalStateException: The processor is 
already closed". Over the course of about a day, this exception killed 21 of 
the 100 queries (StreamThreads). The (slightly trimmed) stacktrace:

 
{code:java}
java.lang.RuntimeException: Caught an exception while closing caching window store for store Aggregate-Aggregate-Materialize
    at org.apache.kafka.streams.state.internals.ExceptionUtils.throwSuppressed(ExceptionUtils.java:39)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.close(CachingWindowStore.java:432)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:527)
    at org.apache.kafka.streams.processor.internals.StreamTask.closeDirty(StreamTask.java:499)
    at org.apache.kafka.streams.processor.internals.TaskManager.handleLostAll(TaskManager.java:626)
    ...
Caused by: java.lang.IllegalStateException: The processor is already closed
    at org.apache.kafka.streams.processor.internals.ProcessorNode.throwIfClosed(ProcessorNode.java:172)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:178)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:273)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:252)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:214)
    at org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
    at org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:28)
    at org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$1(MeteredWindowStore.java:110)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:118)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:93)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:109)
    at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:116)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$close$1(CachingWindowStore.java:427)
    at org.apache.kafka.streams.state.internals.ExceptionUtils.executeAll(ExceptionUtils.java:28)
    at org.apache.kafka.streams.state.internals.CachingWindowStore.close(CachingWindowStore.java:426)
{code}
 

I'm guessing we close the topology before closing the state stores, so records 
that get flushed during the caching store's close() will run into an 
already-closed processor. During a clean close we should always flush before 
closing anything (during prepareCommit()), but since this was a handleLostAll() 
we would just skip right to suspend() and close the topology.

Presumably the right thing to do here is to flush the caches before closing 
anything during a dirty close.
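A minimal sketch of the ordering proposed above — the class and method names here are invented for illustration and are not actual Kafka Streams internals. The point is that the cache flush (which may forward records downstream) must happen before the topology is closed, so forwarded records never hit an already-closed processor:

```java
import java.util.ArrayList;
import java.util.List;

class DirtyCloseOrdering {
    final List<String> events = new ArrayList<>();

    void flushCaches()   { events.add("flush"); }          // may forward records downstream
    void closeTopology() { events.add("closeTopology"); }  // forwards throw after this point
    void closeStores()   { events.add("closeStores"); }

    // Proposed sequence for a dirty close (e.g. from handleLostAll()):
    // flush first so any forwarded records still find open processors.
    void closeDirty() {
        flushCaches();
        closeTopology();
        closeStores();
    }
}
```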



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] chia7712 opened a new pull request #304: Add chia7712 to committers

2020-10-16 Thread GitBox


chia7712 opened a new pull request #304:
URL: https://github.com/apache/kafka-site/pull/304


   hello Kafka :)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread John Roesler
Congratulations, David!
-John

On Fri, Oct 16, 2020, at 20:15, Konstantine Karantasis wrote:
> Congrats, David!
> 
> Konstantine
> 
> 
> On Fri, Oct 16, 2020 at 3:36 PM Ismael Juma  wrote:
> 
> > Congratulations David!
> >
> > Ismael
> >
> > On Fri, Oct 16, 2020 at 9:01 AM Gwen Shapira  wrote:
> >
> > > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > > we are excited to say that he accepted!
> > >
> > > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > > and has been very active since August 2019. He contributed several
> > > notable KIPs:
> > >
> > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > > KIP-570: Add leader epoch in StopReplicaRequest
> > > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > Operations
> > > KIP-496: Added an API for the deletion of consumer offsets
> > >
> > > In addition, David Jacot reviewed many community contributions and
> > > showed great technical and architectural taste. Great reviews are hard
> > > and often thankless work - but this is what makes Kafka a great
> > > product and helps us grow our community.
> > >
> > > Thanks for all the contributions, David! Looking forward to more
> > > collaboration in the Apache Kafka community.
> > >
> > > --
> > > Gwen Shapira
> > >
> >
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Konstantine Karantasis
Congrats, David!

Konstantine


On Fri, Oct 16, 2020 at 3:36 PM Ismael Juma  wrote:

> Congratulations David!
>
> Ismael
>
> On Fri, Oct 16, 2020 at 9:01 AM Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > we are excited to say that he accepted!
> >
> > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > and has been very active since August 2019. He contributed several
> > notable KIPs:
> >
> > KIP-511: Collect and Expose Client Name and Version in Brokers
> > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > KIP-570: Add leader epoch in StopReplicaRequest
> > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > Operations
> > KIP-496: Added an API for the deletion of consumer offsets
> >
> > In addition, David Jacot reviewed many community contributions and
> > showed great technical and architectural taste. Great reviews are hard
> > and often thankless work - but this is what makes Kafka a great
> > product and helps us grow our community.
> >
> > Thanks for all the contributions, David! Looking forward to more
> > collaboration in the Apache Kafka community.
> >
> > --
> > Gwen Shapira
> >
>


[jira] [Resolved] (KAFKA-8370) Kafka Connect should check for existence of internal topics before attempting to create them

2020-10-16 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8370.
--
Resolution: Won't Fix

As mentioned above, we can never avoid the race condition of two Connect 
workers trying to create the same topic, so it is imperative that the broker 
handles the create-topic request atomically and fails it with 
TopicExistsException when the topic already exists. KAFKA-8875 now ensures 
that happens, and Connect already properly handles a create-topic request 
that fails with TopicExistsException.

The conclusion: there is no need for a check before creating the topic, 
because such a check is not guaranteed to be sufficient anyway.
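A hedged sketch of the handling described above. The exception class below is a stand-in for org.apache.kafka.common.errors.TopicExistsException so the example stays self-contained; in the real code, the AdminClient's create-topics futures wrap broker errors in an ExecutionException, which the worker unwraps and inspects:

```java
import java.util.concurrent.ExecutionException;

class TopicCreationSketch {
    // Stand-in for org.apache.kafka.common.errors.TopicExistsException.
    static class TopicExistsException extends RuntimeException {}

    // True when the create-topics failure only means the topic already exists,
    // i.e. another worker won the race; the caller can proceed normally.
    static boolean isBenignCreateFailure(Throwable t) {
        Throwable cause = (t instanceof ExecutionException && t.getCause() != null)
                ? t.getCause()
                : t;
        return cause instanceof TopicExistsException;
    }
}
```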

> Kafka Connect should check for existence of internal topics before attempting 
> to create them
> 
>
> Key: KAFKA-8370
> URL: https://issues.apache.org/jira/browse/KAFKA-8370
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
>
> The Connect worker doesn't currently check for the existence of the internal 
> topics; instead it issues a CreateTopic request and handles a 
> TopicExistsException. However, this can cause problems when the number of 
> brokers is fewer than the replication factor, *even if the topic already 
> exists* and the partitions of those topics all remain available on the 
> remaining brokers.
> One problem with the current approach is that the broker checks the requested 
> replication factor before checking for the existence of the topic, resulting 
> in unexpected exceptions when the topic does exist:
> {noformat}
> connect  | [2019-05-14 19:24:25,166] ERROR Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> connect  | org.apache.kafka.connect.errors.ConnectException: Error while attempting to create/find topic(s) 'connect-offsets'
> connect  |    at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:255)
> connect  |    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore$1.run(KafkaOffsetBackingStore.java:99)
> connect  |    at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:127)
> connect  |    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
> connect  |    at org.apache.kafka.connect.runtime.Worker.start(Worker.java:164)
> connect  |    at org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:114)
> connect  |    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:214)
> connect  |    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> connect  |    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> connect  |    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> connect  |    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> connect  |    at java.lang.Thread.run(Thread.java:748)
> connect  | Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.
> connect  |    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> connect  |    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> connect  |    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> connect  |    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> connect  |    at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:228)
> connect  |    ... 11 more
> connect  | Caused by: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.
> connect  | [2019-05-14 19:24:25,168] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect)
> {noformat}
> Instead of always issuing a CreateTopic request, the worker's admin client 
> should first check whether the topic exists, and if not *then* attempt to 
> create the topic.





Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Ismael Juma
Congratulations David!

Ismael

On Fri, Oct 16, 2020 at 9:01 AM Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited David Jacot as a committer, and
> we are excited to say that he accepted!
>
> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> and has been very active since August 2019. He contributed several
> notable KIPs:
>
> KIP-511: Collect and Expose Client Name and Version in Brokers
> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> KIP-570: Add leader epoch in StopReplicaRequest
> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> Operations
> KIP-496: Added an API for the deletion of consumer offsets
>
> In addition, David Jacot reviewed many community contributions and
> showed great technical and architectural taste. Great reviews are hard
> and often thankless work - but this is what makes Kafka a great
> product and helps us grow our community.
>
> Thanks for all the contributions, David! Looking forward to more
> collaboration in the Apache Kafka community.
>
> --
> Gwen Shapira
>


Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #17

2020-10-16 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10600: Connect should not add error to connector 
validation values for properties not in connector’s ConfigDef (#9425)


--
[...truncated 3.12 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED


Re: [VOTE] KIP-661: Expose task configurations in Connect REST API

2020-10-16 Thread Gwen Shapira
I definitely needed this capability a few times before. Thank you.

+1 (binding)

On Thu, Sep 24, 2020 at 7:54 AM Mickael Maison  wrote:
>
> Hi,
>
> I'd like to start a vote on KIP-661:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-661%3A+Expose+task+configurations+in+Connect+REST+API
>
> Thanks



-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #151

2020-10-16 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix consumer/producer properties override (#9313)


--
[...truncated 3.42 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@57a1061e,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@57a1061e,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7d1f86be,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7d1f86be,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@171f2276,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@171f2276,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@924841c, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@924841c, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@566ae2d0,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@566ae2d0,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5741a4d5,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5741a4d5,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@392320d9,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@392320d9,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@140a5785,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@140a5785,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@15f75bb6, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@15f75bb6, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4f4a32d1, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 

Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Steve Rodrigues
Congratulations David!

Sent via passenger pigeon, please ignore droppings


> On Oct 16, 2020, at 9:01 AM, Gwen Shapira  wrote:
> 
> The PMC for Apache Kafka has invited David Jacot as a committer, and
> we are excited to say that he accepted!
> 
> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> and has been very active since August 2019. He contributed several
> notable KIPs:
> 
> KIP-511: Collect and Expose Client Name and Version in Brokers
> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> KIP-570: Add leader epoch in StopReplicaRequest
> KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations
> KIP-496: Added an API for the deletion of consumer offsets
> 
> In addition, David Jacot reviewed many community contributions and
> showed great technical and architectural taste. Great reviews are hard
> and often thankless work - but this is what makes Kafka a great
> product and helps us grow our community.
> 
> Thanks for all the contributions, David! Looking forward to more
> collaboration in the Apache Kafka community.
> 
> -- 
> Gwen Shapira


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Jakub Scholz
Congrats David!

Jakub

On Fri, Oct 16, 2020 at 6:01 PM Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited David Jacot as a committer, and
> we are excited to say that he accepted!
>
> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> and has been very active since August 2019. He contributed several
> notable KIPs:
>
> KIP-511: Collect and Expose Client Name and Version in Brokers
> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> KIP-570: Add leader epoch in StopReplicaRequest
> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> Operations
> KIP-496: Added an API for the deletion of consumer offsets
>
> In addition, David Jacot reviewed many community contributions and
> showed great technical and architectural taste. Great reviews are hard
> and often thankless work - but this is what makes Kafka a great
> product and helps us grow our community.
>
> Thanks for all the contributions, David! Looking forward to more
> collaboration in the Apache Kafka community.
>
> --
> Gwen Shapira
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Guozhang Wang
Congrats David!

Guozhang


On Fri, Oct 16, 2020 at 10:23 AM Raymond Ng  wrote:

> Congrats David!
>
> Cheers,
> Ray
>
> On Fri, Oct 16, 2020 at 10:08 AM Rajini Sivaram 
> wrote:
>
> > Congratulations, David!
> >
> > Regards,
> >
> > Rajini
> >
> > On Fri, Oct 16, 2020 at 5:45 PM Matthias J. Sax 
> wrote:
> >
> > > Congrats!
> > >
> > > On 10/16/20 9:25 AM, Tom Bentley wrote:
> > > > Congratulations David!
> > > >
> > > > On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck 
> wrote:
> > > >
> > > >> Congrats David! Well deserved.
> > > >>
> > > >> -Bill
> > > >>
> > > >> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira 
> > > wrote:
> > > >>
> > > >>> The PMC for Apache Kafka has invited David Jacot as a committer,
> and
> > > >>> we are excited to say that he accepted!
> > > >>>
> > > >>> David Jacot has been contributing to Apache Kafka since July 2015
> (!)
> > > >>> and has been very active since August 2019. He contributed several
> > > >>> notable KIPs:
> > > >>>
> > > >>> KIP-511: Collect and Expose Client Name and Version in Brokers
> > > >>> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > > >>> KIP-570: Add leader epoch in StopReplicaRequest
> > > >>> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > >>> Operations
> > > >>> KIP-496: Added an API for the deletion of consumer offsets
> > > >>>
> > > >>> In addition, David Jacot reviewed many community contributions and
> > > >>> showed great technical and architectural taste. Great reviews are
> > hard
> > > >>> and often thankless work - but this is what makes Kafka a great
> > > >>> product and helps us grow our community.
> > > >>>
> > > >>> Thanks for all the contributions, David! Looking forward to more
> > > >>> collaboration in the Apache Kafka community.
> > > >>>
> > > >>> --
> > > >>> Gwen Shapira
> > > >>>
> > > >>
> > > >
> > >
> >
>


-- 
-- Guozhang


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #36

2020-10-16 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10600: Connect should not add error to connector 
validation values for properties not in connector’s ConfigDef (#9425)


--
[...truncated 3.16 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Raymond Ng
Congrats David!

Cheers,
Ray

On Fri, Oct 16, 2020 at 10:08 AM Rajini Sivaram 
wrote:

> Congratulations, David!
>
> Regards,
>
> Rajini
>
> On Fri, Oct 16, 2020 at 5:45 PM Matthias J. Sax  wrote:
>
> > Congrats!
> >
> > On 10/16/20 9:25 AM, Tom Bentley wrote:
> > > Congratulations David!
> > >
> > > On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck  wrote:
> > >
> > >> Congrats David! Well deserved.
> > >>
> > >> -Bill
> > >>
> > >> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira 
> > wrote:
> > >>
> > >>> The PMC for Apache Kafka has invited David Jacot as a committer, and
> > >>> we are excited to say that he accepted!
> > >>>
> > >>> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > >>> and has been very active since August 2019. He contributed several
> > >>> notable KIPs:
> > >>>
> > >>> KIP-511: Collect and Expose Client Name and Version in Brokers
> > >>> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > >>> KIP-570: Add leader epoch in StopReplicaRequest
> > >>> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > >>> Operations
> > >>> KIP-496: Added an API for the deletion of consumer offsets
> > >>>
> > >>> In addition, David Jacot reviewed many community contributions and
> > >>> showed great technical and architectural taste. Great reviews are
> hard
> > >>> and often thankless work - but this is what makes Kafka a great
> > >>> product and helps us grow our community.
> > >>>
> > >>> Thanks for all the contributions, David! Looking forward to more
> > >>> collaboration in the Apache Kafka community.
> > >>>
> > >>> --
> > >>> Gwen Shapira
> > >>>
> > >>
> > >
> >
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Rajini Sivaram
Congratulations, David!

Regards,

Rajini

On Fri, Oct 16, 2020 at 5:45 PM Matthias J. Sax  wrote:

> Congrats!
>
> On 10/16/20 9:25 AM, Tom Bentley wrote:
> > Congratulations David!
> >
> > On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck  wrote:
> >
> >> Congrats David! Well deserved.
> >>
> >> -Bill
> >>
> >> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira 
> wrote:
> >>
> >>> The PMC for Apache Kafka has invited David Jacot as a committer, and
> >>> we are excited to say that he accepted!
> >>>
> >>> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> >>> and has been very active since August 2019. He contributed several
> >>> notable KIPs:
> >>>
> >>> KIP-511: Collect and Expose Client Name and Version in Brokers
> >>> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> >>> KIP-570: Add leader epoch in StopReplicaRequest
> >>> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> >>> Operations
> >>> KIP-496: Added an API for the deletion of consumer offsets
> >>>
> >>> In addition, David Jacot reviewed many community contributions and
> >>> showed great technical and architectural taste. Great reviews are hard
> >>> and often thankless work - but this is what makes Kafka a great
> >>> product and helps us grow our community.
> >>>
> >>> Thanks for all the contributions, David! Looking forward to more
> >>> collaboration in the Apache Kafka community.
> >>>
> >>> --
> >>> Gwen Shapira
> >>>
> >>
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #150

2020-10-16 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use debug level logging for noisy log messages in Connect 
(#8918)

[github] KAFKA-10600: Connect should not add error to connector validation 
values for properties not in connector’s ConfigDef (#9425)


--
[...truncated 3.43 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Rankesh Kumar
Many congratulations, David. It is awesome.

Best regards,
Rankesh

> On 16-Oct-2020, at 9:51 PM, Mickael Maison  wrote:
> 
> Congratulations David!
> 
> On Fri, Oct 16, 2020 at 6:05 PM Bill Bejeck  wrote:
>> 
>> Congrats David! Well deserved.
>> 
>> -Bill
>> 
>> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
>> 
>>> The PMC for Apache Kafka has invited David Jacot as a committer, and
>>> we are excited to say that he accepted!
>>> 
>>> David Jacot has been contributing to Apache Kafka since July 2015 (!)
>>> and has been very active since August 2019. He contributed several
>>> notable KIPs:
>>> 
>>> KIP-511: Collect and Expose Client Name and Version in Brokers
>>> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
>>> KIP-570: Add leader epoch in StopReplicaRequest
>>> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
>>> Operations
>>> KIP-496: Added an API for the deletion of consumer offsets
>>> 
>>> In addition, David Jacot reviewed many community contributions and
>>> showed great technical and architectural taste. Great reviews are hard
>>> and often thankless work - but this is what makes Kafka a great
>>> product and helps us grow our community.
>>> 
>>> Thanks for all the contributions, David! Looking forward to more
>>> collaboration in the Apache Kafka community.
>>> 
>>> --
>>> Gwen Shapira
>>> 



Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Matthias J. Sax
Congrats!

On 10/16/20 9:25 AM, Tom Bentley wrote:
> Congratulations David!
> 
> On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck  wrote:
> 
>> Congrats David! Well deserved.
>>
>> -Bill
>>
>> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
>>
>>> The PMC for Apache Kafka has invited David Jacot as a committer, and
>>> we are excited to say that he accepted!
>>>
>>> David Jacot has been contributing to Apache Kafka since July 2015 (!)
>>> and has been very active since August 2019. He contributed several
>>> notable KIPs:
>>>
>>> KIP-511: Collect and Expose Client Name and Version in Brokers
>>> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
>>> KIP-570: Add leader epoch in StopReplicaRequest
>>> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
>>> Operations
>>> KIP-496: Added an API for the deletion of consumer offsets
>>>
>>> In addition, David Jacot reviewed many community contributions and
>>> showed great technical and architectural taste. Great reviews are hard
>>> and often thankless work - but this is what makes Kafka a great
>>> product and helps us grow our community.
>>>
>>> Thanks for all the contributions, David! Looking forward to more
>>> collaboration in the Apache Kafka community.
>>>
>>> --
>>> Gwen Shapira
>>>
>>
> 


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Lucas Bradstreet
Grats David!

On Fri, Oct 16, 2020 at 9:25 AM Tom Bentley  wrote:

> Congratulations David!
>
> On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck  wrote:
>
> > Congrats David! Well deserved.
> >
> > -Bill
> >
> > On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
> >
> > > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > > we are excited to say that he accepted!
> > >
> > > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > > and has been very active since August 2019. He contributed several
> > > notable KIPs:
> > >
> > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > > KIP-570: Add leader epoch in StopReplicaRequest
> > > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > Operations
> > > KIP-496: Added an API for the deletion of consumer offsets
> > >
> > > In addition, David Jacot reviewed many community contributions and
> > > showed great technical and architectural taste. Great reviews are hard
> > > and often thankless work - but this is what makes Kafka a great
> > > product and helps us grow our community.
> > >
> > > Thanks for all the contributions, David! Looking forward to more
> > > collaboration in the Apache Kafka community.
> > >
> > > --
> > > Gwen Shapira
> > >
> >
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Tom Bentley
Congratulations David!

On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck  wrote:

> Congrats David! Well deserved.
>
> -Bill
>
> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > we are excited to say that he accepted!
> >
> > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > and has been very active since August 2019. He contributed several
> > notable KIPs:
> >
> > KIP-511: Collect and Expose Client Name and Version in Brokers
> > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > KIP-570: Add leader epoch in StopReplicaRequest
> > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > Operations
> > KIP-496: Added an API for the deletion of consumer offsets
> >
> > In addition, David Jacot reviewed many community contributions and
> > showed great technical and architectural taste. Great reviews are hard
> > and often thankless work - but this is what makes Kafka a great
> > product and helps us grow our community.
> >
> > Thanks for all the contributions, David! Looking forward to more
> > collaboration in the Apache Kafka community.
> >
> > --
> > Gwen Shapira
> >
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Kowshik Prakasam
Congrats David!


Cheers,
Kowshik

On Fri, Oct 16, 2020, 9:21 AM Mickael Maison 
wrote:

> Congratulations David!
>
> On Fri, Oct 16, 2020 at 6:05 PM Bill Bejeck  wrote:
> >
> > Congrats David! Well deserved.
> >
> > -Bill
> >
> > On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
> >
> > > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > > we are excited to say that he accepted!
> > >
> > > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > > and has been very active since August 2019. He contributed several
> > > notable KIPs:
> > >
> > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > > KIP-570: Add leader epoch in StopReplicaRequest
> > > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > > Operations
> > > KIP-496: Added an API for the deletion of consumer offsets
> > >
> > > In addition, David Jacot reviewed many community contributions and
> > > showed great technical and architectural taste. Great reviews are hard
> > > and often thankless work - but this is what makes Kafka a great
> > > product and helps us grow our community.
> > >
> > > Thanks for all the contributions, David! Looking forward to more
> > > collaboration in the Apache Kafka community.
> > >
> > > --
> > > Gwen Shapira
> > >
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Mickael Maison
Congratulations David!

On Fri, Oct 16, 2020 at 6:05 PM Bill Bejeck  wrote:
>
> Congrats David! Well deserved.
>
> -Bill
>
> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited David Jacot as a committer, and
> > we are excited to say that he accepted!
> >
> > David Jacot has been contributing to Apache Kafka since July 2015 (!)
> > and has been very active since August 2019. He contributed several
> > notable KIPs:
> >
> > KIP-511: Collect and Expose Client Name and Version in Brokers
> > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > KIP-570: Add leader epoch in StopReplicaRequest
> > KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> > Operations
> > KIP-496: Added an API for the deletion of consumer offsets
> >
> > In addition, David Jacot reviewed many community contributions and
> > showed great technical and architectural taste. Great reviews are hard
> > and often thankless work - but this is what makes Kafka a great
> > product and helps us grow our community.
> >
> > Thanks for all the contributions, David! Looking forward to more
> > collaboration in the Apache Kafka community.
> >
> > --
> > Gwen Shapira
> >


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Bill Bejeck
Congrats David! Well deserved.

-Bill

On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited David Jacot as a committer, and
> we are excited to say that he accepted!
>
> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> and has been very active since August 2019. He contributed several
> notable KIPs:
>
> KIP-511: Collect and Expose Client Name and Version in Brokers
> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> KIP-570: Add leader epoch in StopReplicaRequest
> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> Operations
> KIP-496: Added an API for the deletion of consumer offsets
>
> In addition, David Jacot reviewed many community contributions and
> showed great technical and architectural taste. Great reviews are hard
> and often thankless work - but this is what makes Kafka a great
> product and helps us grow our community.
>
> Thanks for all the contributions, David! Looking forward to more
> collaboration in the Apache Kafka community.
>
> --
> Gwen Shapira
>


[ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Gwen Shapira
The PMC for Apache Kafka has invited David Jacot as a committer, and
we are excited to say that he accepted!

David Jacot has been contributing to Apache Kafka since July 2015 (!)
and has been very active since August 2019. He contributed several
notable KIPs:

KIP-511: Collect and Expose Client Name and Version in Brokers
KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
KIP-570: Add leader epoch in StopReplicaRequest
KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations
KIP-496: Added an API for the deletion of consumer offsets

In addition, David Jacot reviewed many community contributions and
showed great technical and architectural taste. Great reviews are hard
and often thankless work - but this is what makes Kafka a great
product and helps us grow our community.

Thanks for all the contributions, David! Looking forward to more
collaboration in the Apache Kafka community.

-- 
Gwen Shapira


Re: [VOTE] KIP-516: Topic Identifiers

2020-10-16 Thread Rajini Sivaram
Hi Justine,

+1 (binding)

Thanks for all the work you put into this KIP!

By the way, there is a typo in the DeleteTopics Request/Response schema in the
KIP; it says Metadata request.

Regards,

Rajini


On Fri, Oct 16, 2020 at 4:06 PM Satish Duggana 
wrote:

> Hi Justine,
> Thanks for the KIP,  +1 (non-binding)
>
> On Thu, Oct 15, 2020 at 10:48 PM Lucas Bradstreet 
> wrote:
> >
> > Hi Justine,
> >
> > +1 (non-binding). Thanks for all your hard work on this KIP!
> >
> > Lucas
> >
> > On Wed, Oct 14, 2020 at 8:59 AM Jun Rao  wrote:
> >
> > > Hi, Justine,
> > >
> > > Thanks for the updated KIP. +1 from me.
> > >
> > > Jun
> > >
> > > On Tue, Oct 13, 2020 at 2:38 PM Jun Rao  wrote:
> > >
> > > > Hi, Justine,
> > > >
> > > > Thanks for starting the vote. Just a few minor comments.
> > > >
> > > > 1. It seems that we should remove the topic field from the
> > > > StopReplicaResponse below?
> > > > StopReplica Response (Version: 4) => error_code [topics]
> > > >   error_code => INT16
> > > > topics => topic topic_id* [partitions]
> > > >
> > > > 2. "After controller election, upon receiving the result, assign the
> > > > metadata topic its unique topic ID". Will the UUID for the metadata
> topic
> > > > be written to the metadata topic itself?
> > > >
> > > > 3. The vote request is designed to support multiple topics, each of
> them
> > > > may require a different sentinel ID. Should we reserve more than one
> > > > sentinel ID for future usage?
> > > >
> > > > 4. UUID.randomUUID(): Could we clarify whether this method returns
> any
> > > > sentinel ID? Also, how do we expect the user to use it?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Mon, Oct 12, 2020 at 9:54 AM Justine Olshan  >
> > > > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> After further discussion and changes to this KIP, I think we are
> ready
> > > to
> > > >> restart this vote.
> > > >>
> > > >> Again, here is the KIP:
> > > >>
> > > >>
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> > > >>
> > > >> The discussion thread is here:
> > > >>
> > > >>
> > >
> https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> > > >>
> > > >> Please take a look and vote if you have a chance.
> > > >>
> > > >> Thanks,
> > > >> Justine
> > > >>
> > > >> On Tue, Sep 22, 2020 at 8:52 AM Justine Olshan <
> jols...@confluent.io>
> > > >> wrote:
> > > >>
> > > >> > Hi all,
> > > >> >
> > > >> > I'd like to call a vote on KIP-516: Topic Identifiers. Here is the
> > > KIP:
> > > >> >
> > > >> >
> > > >>
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> > > >> >
> > > >> > The discussion thread is here:
> > > >> >
> > > >> >
> > > >>
> > >
> https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> > > >> >
> > > >> > Please take a look and vote if you have a chance.
> > > >> >
> > > >> > Thank you,
> > > >> > Justine
> > > >> >
> > > >>
> > > >
> > >
>
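Jun's fourth comment above asks how UUID.randomUUID() should interact with reserved sentinel IDs. A minimal sketch of one way a generator could avoid reserved values, using only the standard JDK UUID API (the ZERO_UUID sentinel and the helper names here are hypothetical illustrations, not the KIP's actual API):

```java
import java.util.UUID;

public class TopicIdGenerator {
    // Hypothetical sentinel: an all-zero UUID reserved to mean "no topic ID".
    public static final UUID ZERO_UUID = new UUID(0L, 0L);

    // Returns a random topic ID, retrying in the (astronomically unlikely)
    // case that the generated value collides with the reserved sentinel.
    public static UUID randomTopicId() {
        UUID id = UUID.randomUUID();
        while (ZERO_UUID.equals(id)) {
            id = UUID.randomUUID();
        }
        return id;
    }
}
```

Under this approach, callers are guaranteed never to receive a sentinel from the generator, so sentinel values remain free for protocol-level meanings such as "topic ID not yet assigned".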


[GitHub] [kafka-site] dajac merged pull request #303: Add dajac to committers

2020-10-16 Thread GitBox


dajac merged pull request #303:
URL: https://github.com/apache/kafka-site/pull/303


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [VOTE] KIP-516: Topic Identifiers

2020-10-16 Thread Satish Duggana
Hi Justine,
Thanks for the KIP,  +1 (non-binding)

On Thu, Oct 15, 2020 at 10:48 PM Lucas Bradstreet  wrote:
>
> Hi Justine,
>
> +1 (non-binding). Thanks for all your hard work on this KIP!
>
> Lucas
>
> On Wed, Oct 14, 2020 at 8:59 AM Jun Rao  wrote:
>
> > Hi, Justine,
> >
> > Thanks for the updated KIP. +1 from me.
> >
> > Jun
> >
> > On Tue, Oct 13, 2020 at 2:38 PM Jun Rao  wrote:
> >
> > > Hi, Justine,
> > >
> > > Thanks for starting the vote. Just a few minor comments.
> > >
> > > 1. It seems that we should remove the topic field from the
> > > StopReplicaResponse below?
> > > StopReplica Response (Version: 4) => error_code [topics]
> > >   error_code => INT16
> > > topics => topic topic_id* [partitions]
> > >
> > > 2. "After controller election, upon receiving the result, assign the
> > > metadata topic its unique topic ID". Will the UUID for the metadata topic
> > > be written to the metadata topic itself?
> > >
> > > 3. The vote request is designed to support multiple topics, each of them
> > > may require a different sentinel ID. Should we reserve more than one
> > > sentinel ID for future usage?
> > >
> > > 4. UUID.randomUUID(): Could we clarify whether this method returns any
> > > sentinel ID? Also, how do we expect the user to use it?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Oct 12, 2020 at 9:54 AM Justine Olshan 
> > > wrote:
> > >
> > >> Hi all,
> > >>
> > >> After further discussion and changes to this KIP, I think we are ready
> > to
> > >> restart this vote.
> > >>
> > >> Again, here is the KIP:
> > >>
> > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> > >>
> > >> The discussion thread is here:
> > >>
> > >>
> > https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> > >>
> > >> Please take a look and vote if you have a chance.
> > >>
> > >> Thanks,
> > >> Justine
> > >>
> > >> On Tue, Sep 22, 2020 at 8:52 AM Justine Olshan 
> > >> wrote:
> > >>
> > >> > Hi all,
> > >> >
> > >> > I'd like to call a vote on KIP-516: Topic Identifiers. Here is the
> > KIP:
> > >> >
> > >> >
> > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> > >> >
> > >> > The discussion thread is here:
> > >> >
> > >> >
> > >>
> > https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> > >> >
> > >> > Please take a look and vote if you have a chance.
> > >> >
> > >> > Thank you,
> > >> > Justine
> > >> >
> > >>
> > >
> >


[GitHub] [kafka-site] dajac opened a new pull request #303: Add dajac to committers

2020-10-16 Thread GitBox


dajac opened a new pull request #303:
URL: https://github.com/apache/kafka-site/pull/303


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #176

2020-10-16 Thread Apache Jenkins Server
See