Jenkins build is back to normal : kafka-trunk-jdk10 #608

2018-10-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3088

2018-10-09 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-7215: Improve LogCleaner Error Handling (#5439)

[lindong28] MINOR: Fix LogDirFailureTest flake

[rajinisivaram] KAFKA-7478: Reduce default logging verbosity in 
OAuthBearerLoginModule

[mjsax] MINOR: Fix small spelling error (#5760)

--
[...truncated 2.74 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-10-09 Thread Adam Bellemare
Hello Contributors

I know that 2.1 is about to be released, but I do need to bump this to keep
visibility up. I am still intending to push this through once contributor
feedback is given.

Main points that need addressing:
1) Is there any way (or benefit) to structure the current singular graph node into
multiple nodes? It has a whopping 25 parameters right now. I am a bit fuzzy
on how the optimizations are supposed to work, so I would appreciate any
help on this aspect.

2) Overall strategy for joining + resolving. This thread has much discourse
between Jan and me over the current highwater-mark proposal versus a groupBy
+ reduce proposal. I am of the opinion that we need to strictly handle any
chance of out-of-order data and leave none of it up to the consumer. Any
comments or suggestions here would also help.

3) Anything else that you see that would prevent this from moving to a vote?

Thanks

Adam







On Sun, Sep 30, 2018 at 10:23 AM Adam Bellemare wrote:

> Hi Jan
>
> With the Stores.windowStoreBuilder and Stores.persistentWindowStore, you
> actually only need to specify the number of segments you want and how large
> they are. To the best of my understanding, what happens is that the
> segments are automatically rolled over as new data with new timestamps is
> created. We use this exact functionality in some of the work done
> internally at my company. For reference, this is the hopping windowed store.
>
> https://kafka.apache.org/11/documentation/streams/developer-guide/dsl-api.html#id21
>
> In the code that I have provided, there are going to be two 24h segments.
> When a record is put into the windowStore, it will be inserted at time T in
> both segments. The two segments will always overlap by 12h. As time goes on
> and new records are added (say at time T+12h+), the oldest segment will be
> automatically deleted and a new segment created. The records are by default
> inserted with the context.timestamp(), such that it is the record time, not
> the clock time, which is used.
>
> To the best of my understanding, the timestamps are retained when
> restoring from the changelog.
>
> Basically, this is a heavy-handed way to deal with TTL at a segment level,
> instead of at an individual record level.
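The segment-rolling behaviour described above can be sketched in plain Java with no Kafka dependency. This is a deliberately simplified, non-overlapping model — the class and constant names are all illustrative, not the actual store implementation: records land in the segment covering their timestamp, and whole segments that fall out of retention are dropped as observed stream time advances (the segment-level TTL).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Toy model of a segmented window store: expiry happens per segment, not per record.
public class SegmentedStoreSketch {
    static final long SEGMENT_INTERVAL_MS = 12 * 60 * 60 * 1000L; // 12h segments
    static final long RETENTION_MS = 24 * 60 * 60 * 1000L;        // 24h retention

    // segmentStart -> (key -> record timestamp), ordered by segment start time
    private final TreeMap<Long, Map<String, Long>> segments = new TreeMap<>();
    private long observedStreamTime = 0L;

    public void put(String key, long timestamp) {
        // Record time (context.timestamp()), not wall-clock time, drives everything.
        observedStreamTime = Math.max(observedStreamTime, timestamp);
        long segmentStart = (timestamp / SEGMENT_INTERVAL_MS) * SEGMENT_INTERVAL_MS;
        segments.computeIfAbsent(segmentStart, s -> new HashMap<>()).put(key, timestamp);
        // Drop whole segments that have fallen out of retention: the "heavy-handed TTL".
        segments.headMap(observedStreamTime - RETENTION_MS, false).clear();
    }

    public Long fetch(String key) {
        for (Map<String, Long> segment : segments.values()) {
            Long ts = segment.get(key);
            if (ts != null) {
                return ts;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        SegmentedStoreSketch store = new SegmentedStoreSketch();
        store.put("a", 0L);                                      // lands in the oldest segment
        store.put("b", RETENTION_MS + SEGMENT_INTERVAL_MS);      // advances stream time past retention
        System.out.println(store.fetch("a")); // null: a's whole segment was dropped
        System.out.println(store.fetch("b"));
    }
}
```

Note that expiry here is as coarse as the segment interval: a record may survive up to one segment interval past its nominal TTL, which is exactly the trade-off being discussed.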
>
> On Tue, Sep 25, 2018 at 5:18 PM Jan Filipiak wrote:
>
>> Will that work? I expected it to blow up with a ClassCastException or
>> similar.
>>
>> You either would have to specify the window you fetch/put or iterate
>> across all windows the key was found in, right?
>>
>> I just hope the window store doesn't check stream time under the hood;
>> that would be a questionable interface.
>>
>> If it does: did you see my comment on checking all the windows earlier?
>> That would be needed to actually give reasonable time guarantees.
>>
>> Best
>>
>>
>>
>> On 25.09.2018 13:18, Adam Bellemare wrote:
>> > Hi Jan
>> >
>> > Check for  " highwaterMat " in the PR. I only changed the state store,
>> not
>> > the ProcessorSupplier.
>> >
>> > Thanks,
>> > Adam
>> >
>> > On Mon, Sep 24, 2018 at 2:47 PM, Jan Filipiak wrote:
>> >
>> >>
>> >>
>> >> On 24.09.2018 16:26, Adam Bellemare wrote:
>> >>
>> >>> @Guozhang
>> >>>
>> >>> Thanks for the information. This is indeed something that will be
>> >>> extremely
>> >>> useful for this KIP.
>> >>>
>> >>> @Jan
>> >>> Thanks for your explanations. That said, I will not be moving ahead
>> >>> with an implementation using the reshuffle/groupBy solution you propose.
>> >>> However, if you wish to implement it yourself off of my current PR
>> >>> and submit it as a competitive alternative, I would be more than happy
>> >>> to help vet it as an alternate solution. As it stands right now, I do
>> >>> not really have more time to invest in alternatives without a strong
>> >>> indication from the binding voters as to which they would prefer.
>> >>>
>> >>>
>> >> Hey, total no worries. I think I personally gave up on the Streams DSL
>> >> some time ago; otherwise I would have pulled this KIP through already.
>> >> I am currently reimplementing my own DSL based on the PAPI.
>> >>
>> >>
>> >>> I will look at finishing up my PR with the windowed state store in the
>> >>> next week or so, exercising it via tests, and then I will come back for
>> >>> final discussions. In the meantime, I hope that any of the binding voters
>> >>> could take a look at the KIP in the wiki. I have updated it according to
>> >>> the latest plan:
>> >>>
>> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
>> >>>
>> >>> I have also updated the KIP PR to use a windowed store. This could be
>> >>> replaced by the results of KIP-258 whenever they are completed.
>> >>> https://github.com/apache/kafka/pull/5527
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Adam
>> >>>
>> >>
>> >> Is the HighWatermarkResolverProccessorsupplier already updated in the PR?
>> >> expected it to change to Windowed,Long Missing so

Using Kafka/Queue for replacement of RPC, is it anti-pattern? Why?

2018-10-09 Thread Savankumar Gudaas
Hello Kafka devs!

I have a strange question!
I want to hear your opinions on replacing HTTP/gRPC with bi-directional 
Kafka, with minimal changes to applications (if possible, hidden from the 
application developer by a library).
Is it an anti-pattern? Why?
What are the advantages? Why?
Has anyone ever tried it?

Looking forward to your opinions 😊

Thanks
Savan
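For what it's worth, the usual way people emulate request/response on top of a log is a request topic plus a reply topic, correlated by an ID carried on each message. Below is a dependency-free Java sketch of just the correlation mechanics — queues stand in for topics, and every name here is illustrative, not a Kafka API:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Request-reply over two "topics" (queues stand in for them), matched by correlation ID.
public class RpcOverLogSketch {
    static final class Message {
        final String correlationId;
        final String payload;
        Message(String correlationId, String payload) {
            this.correlationId = correlationId;
            this.payload = payload;
        }
    }

    private final BlockingQueue<Message> requestTopic = new LinkedBlockingQueue<>();
    private final BlockingQueue<Message> replyTopic = new LinkedBlockingQueue<>();
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Client side: publish a request, return a future that completes when the reply arrives.
    public CompletableFuture<String> call(String payload) {
        String id = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        requestTopic.add(new Message(id, payload));
        return future;
    }

    // Server side: consume one request, publish a reply carrying the same correlation ID.
    public void serveOne() {
        Message req = requestTopic.poll();
        if (req != null) {
            replyTopic.add(new Message(req.correlationId, "echo:" + req.payload));
        }
    }

    // Client side: consume one reply and complete the matching pending future.
    public void dispatchOneReply() {
        Message reply = replyTopic.poll();
        if (reply != null) {
            CompletableFuture<String> f = pending.remove(reply.correlationId);
            if (f != null) {
                f.complete(reply.payload);
            }
        }
    }

    public static void main(String[] args) {
        RpcOverLogSketch rpc = new RpcOverLogSketch();
        CompletableFuture<String> future = rpc.call("ping");
        rpc.serveOne();
        rpc.dispatchOneReply();
        System.out.println(future.join()); // echo:ping
    }
}
```

Compared to HTTP, the broker decouples availability (a request can survive a server restart) at the cost of latency and the bookkeeping above; that trade-off is usually the crux of the anti-pattern debate.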





Re: [DISCUSS] KIP-377: TopicCommand to use AdminClient

2018-10-09 Thread Viktor Somogyi-Vass
Hi All,

I'd like to bump this as the conversation has sunk a little, but more
importantly I'd like to validate my plans/ideas on extending the Metadata
protocol. I was thinking about two other alternatives, namely:
1. Create a ListTopicUnderDeletion protocol. This however seems
unnecessary: it'd have one very narrow functionality which we can't extend.
It'd make sense to have a list-topics or describe-topics protocol where we
can list/describe topics under deletion, but for normal listing/describing
we already use the Metadata protocol, so it would be a duplication of
functionality.
2. DeleteTopicsResponse could return the topics under deletion if the
request's argument list is empty, which might make sense at first glance,
but actually we'd mix the query functionality with the delete functionality,
which is counterintuitive.

Even though most clients won't need these "limbo" topics (which are under
deletion) in the foreseeable future, they can be considered part of the
cluster state or metadata, and to me that makes sense. It also doesn't add
much overhead to the response size, as in my experience users typically
don't delete topics very often.

I'd be happy to receive some ideas/feedback on this.

Cheers,
Viktor


On Fri, Sep 28, 2018 at 4:51 PM Viktor Somogyi-Vass wrote:

> Hi All,
>
> I made an update to the KIP. Just in short:
> Currently KafkaAdminClient.describeTopics() and
> KafkaAdminClient.listTopics() use the Metadata protocol to acquire topic
> information. The returned response, however, won't contain topics that
> are under deletion but couldn't complete yet (for instance because some
> replicas are offline), therefore it is not possible to implement the current
> command's "marked for deletion" feature. To get around this I introduced
> some changes in the Metadata protocol.
>
> Thanks,
> Viktor
>
> On Fri, Sep 28, 2018 at 4:48 PM Viktor Somogyi-Vass <viktorsomo...@gmail.com> wrote:
>
>> Hi Mickael,
>>
>> Thanks for the feedback, I also think that many customers wanted this for
>> a long time.
>>
>> Cheers,
>> Viktor
>>
>> On Fri, Sep 28, 2018 at 11:45 AM Mickael Maison wrote:
>>
>>> Hi Viktor,
>>> Thanks for taking this task!
>>> This is a very nice change as it will allow users to use this tool in
>>> many Cloud environments where direct zookeeper access is not
>>> available.
>>>
>>>
>>> On Thu, Sep 27, 2018 at 10:34 AM Viktor Somogyi-Vass wrote:
>>> >
>>> > Hi All,
>>> >
>>> > This is the continuation of the old KIP-375 with the same title:
>>> >
>>> https://lists.apache.org/thread.html/dc71d08de8cd2f082765be22c9f88bc9f8b39bb8e0929a3a4394e9da@%3Cdev.kafka.apache.org%3E
>>> >
>>> > The problem there was that two KIPs were created around the same time,
>>> > and I chose to reorganize mine a bit and give it a new number to avoid
>>> > duplication.
>>> >
>>> > The KIP summary here once again:
>>> >
>>> > I wrote up a relatively simple KIP about improving the Kafka protocol and
>>> > the TopicCommand tool to support the new Java-based AdminClient and
>>> > hopefully to deprecate the ZooKeeper side of it.
>>> >
>>> > I would be happy to receive some opinions about this. In general I think
>>> > this would be an important addition, as this is one of the few remaining
>>> > but important tools that still use a direct ZooKeeper connection.
>>> >
>>> > Here is the link for the KIP:
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-377%3A+TopicCommand+to+use+AdminClient
>>> >
>>> > Cheers,
>>> > Viktor
>>>
>>


[jira] [Resolved] (KAFKA-7198) Enhance KafkaStreams start method javadoc

2018-10-09 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7198.

Resolution: Fixed

> Enhance KafkaStreams start method javadoc
> -
>
> Key: KAFKA-7198
> URL: https://issues.apache.org/jira/browse/KAFKA-7198
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Kamal Chandraprakash
>Priority: Major
>  Labels: newbie
> Fix For: 1.0.3, 1.1.2, 2.0.1, 2.1.0
>
>
> The {{KafkaStreams.start}} method javadoc states that once called, the streams 
> threads are started in the background, hence the method does not block.  
> However, if you have GlobalKTables in your topology, the threads aren't started 
> until the GlobalKTables bootstrap fully, so the javadoc for the {{start}} 
> method should be updated to reflect this.
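The practical consequence for callers is that readiness should be observed via a state listener rather than inferred from start() returning. A dependency-free sketch of that wait-for-RUNNING pattern follows; the listener interface and class here are stand-ins, not the actual org.apache.kafka.streams API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// start() returns quickly; callers wait for a RUNNING notification instead.
public class StartReadinessSketch {
    enum State { CREATED, RUNNING }

    interface StateListener { void onChange(State newState); }

    private StateListener listener = s -> { };

    public void setStateListener(StateListener l) {
        this.listener = l;
    }

    // Simulates KafkaStreams.start(): the real work (e.g. a GlobalKTable
    // bootstrap) happens on a background thread, not in the caller's thread.
    public void start() {
        new Thread(() -> {
            // ... bootstrap global state, start stream threads ...
            listener.onChange(State.RUNNING);
        }).start();
    }

    // Wait-for-readiness pattern: block on a latch released by the listener.
    public static boolean startAndAwaitRunning(long timeoutMs) {
        StartReadinessSketch streams = new StartReadinessSketch();
        CountDownLatch running = new CountDownLatch(1);
        streams.setStateListener(s -> {
            if (s == State.RUNNING) {
                running.countDown();
            }
        });
        streams.start(); // returns immediately; bootstrap continues in background
        try {
            return running.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(startAndAwaitRunning(1000)); // true once RUNNING is reached
    }
}
```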



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #609

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
error: Could not read a9692ff66fccc96ccf95526682136cddb5af0627
remote: Enumerating objects: 3591, done.
remote: Counting objects: ... (terminal progress output truncated)

Build failed in Jenkins: kafka-1.1-jdk7 #221

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H29 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
remote: Enumerating objects: 3389, done.
remote: Counting objects: ... (terminal progress output truncated)

Build failed in Jenkins: kafka-trunk-jdk8 #3089

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3571, done.
remote: Counting objects: ... (terminal progress output truncated)

[jira] [Resolved] (KAFKA-7366) topic level segment.bytes and segment.ms not taking effect immediately

2018-10-09 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-7366.

   Resolution: Fixed
Fix Version/s: 2.1.0

Merged to 2.1 and trunk.

> topic level segment.bytes and segment.ms not taking effect immediately
> --
>
> Key: KAFKA-7366
> URL: https://issues.apache.org/jira/browse/KAFKA-7366
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Jun Rao
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.1.0
>
>
> It used to be that topic-level configs such as segment.bytes took effect 
> immediately. Because of KAFKA-6324 in 1.1, those configs now only take effect 
> after the active segment has rolled. The relevant part of KAFKA-6324 is that 
> in Log.maybeRoll, the checking of segment rolling was moved to 
> LogSegment.shouldRoll().
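The behavioural change can be stated as a predicate: a segment rolls when it exceeds segment.bytes or segment.ms, and a tightened config is only observed the next time that check runs — it cannot retroactively split an already-oversized active segment. A heavily simplified Java rendering (not the actual LogSegment.shouldRoll signature):

```java
// Simplified roll check: a changed topic config only matters the next time this runs.
public class SegmentRollSketch {
    public static boolean shouldRoll(long segmentSizeBytes, long segmentAgeMs,
                                     long segmentBytesConfig, long segmentMsConfig) {
        return segmentSizeBytes >= segmentBytesConfig || segmentAgeMs >= segmentMsConfig;
    }

    public static void main(String[] args) {
        // Active segment is 500 MB under segment.bytes=1 GB: no roll yet.
        System.out.println(shouldRoll(500L << 20, 60_000L, 1L << 30, 604_800_000L)); // false
        // Operator lowers segment.bytes to 100 MB; the new bound is only
        // consulted at this check, which now triggers the roll.
        System.out.println(shouldRoll(500L << 20, 60_000L, 100L << 20, 604_800_000L)); // true
    }
}
```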



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-10-09 Thread John Roesler
Hi Nikolay,

I have a proposal to improve the compatibility around your KIP... Do you
mind taking a look?

https://github.com/apache/kafka/pull/5759#issuecomment-428242210

Thanks,
-John
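For context, the migration pattern this KIP follows is: keep the long-ms method for compatibility and add a Duration overload that validates and converts. A plain-Java sketch of that overload shape — the class and method names are illustrative, not the exact Streams API:

```java
import java.time.Duration;

// Pattern for migrating a long-ms API to Duration: validate, convert, delegate.
public class DurationOverloadSketch {
    private long retentionMs;

    // Legacy signature, kept so existing callers keep compiling.
    public void setRetentionMs(long ms) {
        if (ms < 0) {
            throw new IllegalArgumentException("retention must be non-negative");
        }
        this.retentionMs = ms;
    }

    // New Duration overload delegates after validating the conversion.
    public void setRetention(Duration retention) {
        if (retention == null) {
            throw new IllegalArgumentException("retention cannot be null");
        }
        try {
            // Duration.toMillis() throws ArithmeticException on long overflow.
            setRetentionMs(retention.toMillis());
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException(
                "retention does not fit in milliseconds as a long", e);
        }
    }

    public long retentionMs() {
        return retentionMs;
    }

    public static void main(String[] args) {
        DurationOverloadSketch cfg = new DurationOverloadSketch();
        cfg.setRetention(Duration.ofHours(24));
        System.out.println(cfg.retentionMs()); // 86400000
    }
}
```

Funnelling both overloads through one validation point is what keeps the two APIs from drifting apart during the migration.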

On Mon, Sep 24, 2018 at 3:44 PM Nikolay Izhikov  wrote:

> Hello, John.
>
> The tests in my PR are green now.
> Please do the review.
>
> https://github.com/apache/kafka/pull/5682
>
> On Mon, 24/09/2018 at 20:36 +0300, Nikolay Izhikov wrote:
> > Hello, John.
> >
> > Thank you.
> >
> > There are failing tests in my PR.
> > I'm fixing them right now.
> >
> > Will mail you in the next few hours, after all tests become green again.
> >
> > On Mon, 24/09/2018 at 11:46 -0500, John Roesler wrote:
> > > Hi Nikolay,
> > >
> > > Thanks for the PR. I will review it.
> > >
> > > -John
> > >
> > > On Sat, Sep 22, 2018 at 2:36 AM Nikolay Izhikov wrote:
> > >
> > > > Hello
> > > >
> > > > I've opened a PR [1] for this KIP.
> > > >
> > > > [1] https://github.com/apache/kafka/pull/5682
> > > >
> > > > John, can you take a look?
> > > >
> > > > On Mon, 17/09/2018 at 20:16 +0300, Nikolay Izhikov wrote:
> > > > > John,
> > > > >
> > > > > Got it.
> > > > >
> > > > > Will do my best to meet this deadline.
> > > > >
> > > > > On Mon, 17/09/2018 at 11:52 -0500, John Roesler wrote:
> > > > > > Yay! Thanks so much for sticking with this Nikolay.
> > > > > >
> > > > > > I look forward to your PR!
> > > > > >
> > > > > > Not to put pressure on you, but just to let you know, the deadline for
> > > > > > getting your PR *merged* for 2.1 is _October 1st_, so you basically have
> > > > > > 2 weeks to send the PR, have the reviews, and get it merged.
> > > > > >
> > > > > > (see
> > > > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044)
> > > > > >
> > > > > > Thanks again,
> > > > > > -John
> > > > > >
> > > > > > On Mon, Sep 17, 2018 at 10:29 AM Nikolay Izhikov <nizhi...@apache.org> wrote:
> > > > > >
> > > > > > > This KIP is now accepted with:
> > > > > > > - 3 binding +1
> > > > > > > - 2 non-binding +1
> > > > > > >
> > > > > > > Thanks, all.
> > > > > > >
> > > > > > > Especially, John, Matthias, Guozhang, Bill, Damian!
> > > > > > >
> > > > > > > On Thu, 13/09/2018 at 22:16 -0700, Guozhang Wang wrote:
> > > > > > > > +1 (binding), thank you Nikolay!
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > > On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax <matth...@confluent.io> wrote:
> > > > > > > >
> > > > > > > > > Thanks for the KIP.
> > > > > > > > >
> > > > > > > > > +1 (binding)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > -Matthias
> > > > > > > > >
> > > > > > > > > On 9/5/18 8:52 AM, John Roesler wrote:
> > > > > > > > > > I'm a +1 (non-binding)
> > > > > > > > > >
> > > > > > > > > > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov <nizhi...@apache.org> wrote:
> > > > > > > > > >
> > > > > > > > > > > Dear committers.
> > > > > > > > > > >
> > > > > > > > > > > Please, vote on a KIP.
> > > > > > > > > > >
> > > > > > > > > > > On Fri, 31/08/2018 at 12:05 -0500, John Roesler wrote:
> > > > > > > > > > > > Hi Nikolay,
> > > > > > > > > > > >
> > > > > > > > > > > > You can start a PR any time, but we cannot merge it (and
> > > > > > > > > > > > probably won't do serious reviews) until after the KIP is
> > > > > > > > > > > > voted and approved.
> > > > > > > > > > > >
> > > > > > > > > > > > Sometimes people start a PR during discussion just to help
> > > > > > > > > > > > provide more context, but it's not required (and can also be
> > > > > > > > > > > > distracting because the KIP discussion should avoid
> > > > > > > > > > > > implementation details).
> > > > > > > > > > > >
> > > > > > > > > > > > Let's wait one more day for any other comments and plan to
> > > > > > > > > > > > start the vote on Monday if there are no other debates.
> > > > > > > > > > > >
> > > > > > > > > > > > Once you start the vote, you have to leave it up for at
> > > > > > > > > > > > least 72 hours, and it requires 3 binding votes to pass.
> > > > > > > > > > > > Only Kafka Committers have binding votes
> > > > > > > > > > > > (https://kafka.apache.org/committers).
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks,
> > > > > > > > > > > > -John
> > > > > > > > > > > >
> > > > > > > > > > > > On Fri, Aug 31, 2018 at 11:09 AM Bill Bejeck <bbej...@gmail.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi Nikolay,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks for the clarifi

Build failed in Jenkins: kafka-trunk-jdk10 #610

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
error: Could not read a9692ff66fccc96ccf95526682136cddb5af0627
remote: Enumerating objects: 3607, done.
remote: Counting objects: ... (progress output truncated)

Build failed in Jenkins: kafka-trunk-jdk8 #3090

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3587, done.
remote: Counting objects: ... (progress output truncated)

[jira] [Resolved] (KAFKA-3097) Acls for PrincipalType User are case sensitive

2018-10-09 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-3097.

   Resolution: Fixed
 Assignee: Manikumar  (was: Ashish Singh)
Fix Version/s: 2.1.0

Merged to 2.1 and trunk.

> Acls for PrincipalType User are case sensitive
> --
>
> Key: KAFKA-3097
> URL: https://issues.apache.org/jira/browse/KAFKA-3097
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.1.0
>
>
> I gave a user ACLs for READ/WRITE, but when I went to actually write to the
> topic it failed with an auth exception. I figured out it was due to me
> specifying the user as user:tgraves rather than User:tgraves.
> It seems like it should either fail on assignment or be case insensitive.
> The principal type of User should also probably be documented.
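A sketch of the case-insensitive handling the ticket asks for (a hypothetical helper shown only to illustrate the proposal, not Kafka's actual authorizer implementation):

```shell
# Normalize the principal type before comparing, so 'user:tgraves' and
# 'User:tgraves' refer to the same principal; the name part stays case
# sensitive. Hypothetical helper for illustration only.
normalize_principal() {
  local type="${1%%:*}" name="${1#*:}"
  printf '%s:%s\n' "$(printf '%s' "$type" | tr '[:upper:]' '[:lower:]')" "$name"
}

# Both spellings now compare equal:
[ "$(normalize_principal 'user:tgraves')" = "$(normalize_principal 'User:tgraves')" ] && echo "match"
```

The alternative the ticket mentions, failing fast on an unknown principal type at assignment time, would avoid silently granting ACLs to a principal that never matches.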



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #611

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read 57d7f11e38e41892191f6fe87faae8f23aa0362e
remote: Enumerating objects: 4624, done.
remote: Counting objects: ... (progress output truncated)

Build failed in Jenkins: kafka-trunk-jdk8 #3091

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
remote: Enumerating objects: 4602, done.
remote: Counting objects: ... (progress output truncated)

Jenkins build is back to normal : kafka-1.1-jdk7 #222

2018-10-09 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-10-09 Thread Jun Rao
Hi, Mani,

Thanks for the KIP. +1 from me.

Jun

On Wed, Sep 19, 2018 at 5:19 AM, Manikumar wrote:

> Hi All,
>
> I would like to start voting on KIP-371, which adds a configuration option
> for building custom SSL principal names.
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
>
> Discussion Thread:
> https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1ed54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E
>
> Thanks,
> Manikumar
>
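For context, the broker setting the KIP introduces looks roughly like the following (a sketch based on the rule syntax described in the KIP; the distinguished-name pattern below is a made-up example):

```properties
# server.properties -- map an SSL distinguished name to a short principal.
# Rules are tried in order; DEFAULT keeps the full DN if nothing matches.
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=.*$/$1/,DEFAULT
```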


Build failed in Jenkins: kafka-2.0-jdk8 #165

2018-10-09 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7198: Enhance KafkaStreams start method javadoc. (#5763)

--
[...truncated 2.50 MB...]

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
tableNamedMaterializedCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
tableNamedMaterializedCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
kTableNamedMaterializedMapValuesShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
kTableNamedMaterializedMapValuesShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
timeWindowZeroArgCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
timeWindowZeroArgCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
STARTED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
kTableAnonymousMaterializedMapValuesShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
kTableAnonymousMaterializedMapValuesShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
sessionWindowZeroArgCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
sessionWindowZeroArgCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
tableAnonymousMaterializedCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
tableAnonymousMaterializedCountShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology 
STARTED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology PASSED

org.apache.kafka.streams.TopologyTest > 
kTableNonMaterializedMapValuesShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTest > 
kTableNonMaterializedMapValuesShouldPreserveTopologyStructure PASSED

org.apache.kafka.streams.TopologyTest > 
kGroupedStreamAnonymousMaterializedCountShouldPreserveTopologyStructure STARTED

org.apache.kafka.streams.TopologyTes

Build failed in Jenkins: kafka-trunk-jdk10 #612

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read 57d7f11e38e41892191f6fe87faae8f23aa0362e
remote: Enumerating objects: 4638, done.
remote: Counting objects: ... (progress output truncated)

[jira] [Created] (KAFKA-7493) Rewrite test_broker_type_bounce_at_start

2018-10-09 Thread John Roesler (JIRA)
John Roesler created KAFKA-7493:
---

 Summary: Rewrite test_broker_type_bounce_at_start
 Key: KAFKA-7493
 URL: https://issues.apache.org/jira/browse/KAFKA-7493
 Project: Kafka
  Issue Type: Improvement
  Components: streams, system tests
Reporter: John Roesler


Currently, the test test_broker_type_bounce_at_start in 
streams_broker_bounce_test.py is ignored.

As written, there are a couple of race conditions that lead to flakiness.

It should be possible to rewrite the test to wait on log messages, as the
other tests do, instead of just sleeping, so that the test transitions from
one state to the next more deterministically.

Once the test is fixed, the fix should be back-ported to all prior branches, 
back to 0.10.
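Sketching the idea in plain Java (the real system tests are Python/ducktape, and `LogWait` is a hypothetical helper, not a Kafka class): instead of one long sleep, the test polls for the expected condition, such as a log message, with a deadline.

```java
import java.time.Duration;
import java.util.function.Supplier;

public class LogWait {
    /**
     * Polls the condition until it holds or the timeout elapses.
     * Replacing a fixed sleep with this kind of wait removes the race:
     * the test advances as soon as the expected log message appears.
     */
    public static boolean waitForCondition(Supplier<Boolean> condition, Duration timeout) {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (condition.get()) {
                return true;
            }
            try {
                Thread.sleep(50); // short poll interval, not one long sleep
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return condition.get(); // final check at the deadline
    }
}
```

A test would pass a condition that scans the broker log for the expected line before moving on to the next state.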



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #613

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read 57d7f11e38e41892191f6fe87faae8f23aa0362e
remote: Enumerating objects: 4638, done.
remote: Counting objects: 0% .. 54% (2505/4638) [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk10 #614

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read 57d7f11e38e41892191f6fe87faae8f23aa0362e
remote: Enumerating objects: 4638, done.
remote: Counting objects: 0% .. 54% (2505/4638) [progress output truncated]

Build failed in Jenkins: kafka-2.1-jdk8 #10

2018-10-09 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7198: Enhance KafkaStreams start method javadoc. (#5763)

--
[...truncated 2.70 MB...]
> Task :streams:examples:compileJava
:38:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
import org.apache.kafka.streams.kstream.Serialized;
   ^
:36:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
import org.apache.kafka.streams.kstream.Serialized;
   ^
:90:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), jsonSerde))
^
:90:
 warning: [deprecation] groupByKey(Serialized) in KStream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), jsonSerde))
^
  where K,V are type-variables:
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:209:
 warning: [deprecation] Serialized in org.apache.kafka.streams.kstream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), new JSONSerde<>()))
^
:209:
 warning: [deprecation] groupByKey(Serialized) in KStream has been 
deprecated
.groupByKey(Serialized.with(Serdes.String(), new JSONSerde<>()))
^
  where K,V are type-variables:
K extends Object declared in interface KStream
V extends Object declared in interface KStream
6 warnings
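The warnings come from `Serialized` in `org.apache.kafka.streams.kstream` being deprecated in favor of `Grouped` in the 2.1 line, so call sites like `.groupByKey(Serialized.with(Serdes.String(), jsonSerde))` become `.groupByKey(Grouped.with(Serdes.String(), jsonSerde))`. A minimal model of that deprecate-and-delegate pattern (simplified stand-ins with string placeholders for serdes, not the real Kafka Streams classes):

```java
/** Simplified stand-in for org.apache.kafka.streams.kstream.Grouped. */
class Grouped<K, V> {
    final String keySerde;
    final String valueSerde;

    private Grouped(String keySerde, String valueSerde) {
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }

    static <K, V> Grouped<K, V> with(String keySerde, String valueSerde) {
        return new Grouped<>(keySerde, valueSerde);
    }
}

/**
 * Simplified stand-in for the deprecated Serialized: kept so existing
 * callers still compile (emitting a deprecation warning like the ones in
 * the build log above) while delegating to the replacement.
 */
@Deprecated
class Serialized<K, V> {
    static <K, V> Grouped<K, V> with(String keySerde, String valueSerde) {
        return Grouped.with(keySerde, valueSerde);
    }
}
```

This is why the build still succeeds: the deprecated entry points remain source-compatible, and the warnings simply nudge callers toward the new API.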

> Task :streams:examples:processResources NO-SOURCE
> Task :streams:examples:classes
> Task :streams:examples:checkstyleMain
> Task :streams:examples:compileTestJava
> Task :streams:examples:processTestResources NO-SOURCE
> Task :streams:examples:testClasses
> Task :streams:examples:checkstyleTest
> Task :streams:examples:spotbugsMain

> Task :streams:examples:test

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test 
STARTED

org.apache.kafka.streams.examples.wordcount.WordCountProcessorTest > test PASSED

> Task :spotlessScala UP-TO-DATE
> Task :spotlessScalaCheck UP-TO-DATE
> Task :streams:streams-scala:compileJava NO-SOURCE

> Task :streams:streams-scala:compileScala
Pruning sources from previous analysis, due to incompatible CompileSetup.
:382:
 method groupByKey in trait KStream is deprecated: see corresponding Javadoc 
for more information.
inner.groupByKey(serialized)
  ^
:416:
 method groupBy in trait KStream is deprecated: see corresponding Javadoc for 
more information.
inner.groupBy(selector.asKeyValueMapper, serialized)
  ^
:224:
 method groupBy in trait KTable is deprecated: see corresponding Javadoc for 
more information.
inner.groupBy(selector.asKeyValueMapper, serialized)
  ^
:34:
 class Serialized in package kstream is deprecated: see corresponding Javadoc 
for more information.
  def `with`[K, V](implicit keySerde: Serde[K], valueSerde: Serde[V]): 
SerializedJ[K, V] =
   ^
:23:
 class Serialized in package kstream is deprecated: see corresponding Javadoc 
for more information.
  type Serialized[K, V] = org.apache.kafka.streams.kstream.Serialized[K, V]

Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-10-09 Thread Nikolay Izhikov
Hello, John

I responded in discussion thread.

I'm +1 for your proposal.

В Вт, 09/10/2018 в 13:05 -0500, John Roesler пишет:
> Hi Nikolay,
> 
> I have a proposal to improve the compatibility around your KIP... Do you
> mind taking a look?
> 
> https://github.com/apache/kafka/pull/5759#issuecomment-428242210
> 
> Thanks,
> -John
> 
> On Mon, Sep 24, 2018 at 3:44 PM Nikolay Izhikov  wrote:
> 
> > Hello, John.
> > 
> > Tests in my PR are green now.
> > Please, do the review.
> > 
> > https://github.com/apache/kafka/pull/5682
> > 
> > В Пн, 24/09/2018 в 20:36 +0300, Nikolay Izhikov пишет:
> > > Hello, John.
> > > 
> > > Thank you.
> > > 
> > > There are failing tests in my PR.
> > > I'm fixing them right now.
> > > 
> > > Will mail you in the next few hours, after all tests become green again.
> > > 
> > > В Пн, 24/09/2018 в 11:46 -0500, John Roesler пишет:
> > > > Hi Nikolay,
> > > > 
> > > > Thanks for the PR. I will review it.
> > > > 
> > > > -John
> > > > 
> > > > On Sat, Sep 22, 2018 at 2:36 AM Nikolay Izhikov 
> > 
> > wrote:
> > > > 
> > > > > Hello
> > > > > 
> > > > > I've opened a PR [1] for this KIP.
> > > > > 
> > > > > [1] https://github.com/apache/kafka/pull/5682
> > > > > 
> > > > > John, can you take a look?
> > > > > 
> > > > > В Пн, 17/09/2018 в 20:16 +0300, Nikolay Izhikov пишет:
> > > > > > John,
> > > > > > 
> > > > > > Got it.
> > > > > > 
> > > > > > Will do my best to meet this deadline.
> > > > > > 
> > > > > > В Пн, 17/09/2018 в 11:52 -0500, John Roesler пишет:
> > > > > > > Yay! Thanks so much for sticking with this Nikolay.
> > > > > > > 
> > > > > > > I look forward to your PR!
> > > > > > > 
> > > > > > > Not to put pressure on you, but just to let you know, the
> > 
> > deadline for
> > > > > > > getting your pr *merged* for 2.1 is _October 1st_,
> > > > > > > so you basically have 2 weeks to send the PR, have the reviews,
> > 
> > and
> > > > > 
> > > > > get it
> > > > > > > merged.
> > > > > > > 
> > > > > > > (see
> > > > > > > 
> > > > > 
> > > > > 
> > 
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044)
> > > > > > > 
> > > > > > > Thanks again,
> > > > > > > -John
> > > > > > > 
> > > > > > > On Mon, Sep 17, 2018 at 10:29 AM Nikolay Izhikov <
> > 
> > nizhi...@apache.org>
> > > > > > > wrote:
> > > > > > > 
> > > > > > > > This KIP is now accepted with:
> > > > > > > > - 3 binding +1
> > > > > > > > - 2 non binding +1
> > > > > > > > 
> > > > > > > > Thanks, all.
> > > > > > > > 
> > > > > > > > Especially, John, Matthias, Guozhang, Bill, Damian!
> > > > > > > > 
> > > > > > > > В Чт, 13/09/2018 в 22:16 -0700, Guozhang Wang пишет:
> > > > > > > > > +1 (binding), thank you Nikolay!
> > > > > > > > > 
> > > > > > > > > Guozhang
> > > > > > > > > 
> > > > > > > > > On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax <
> > > > > 
> > > > > matth...@confluent.io>
> > > > > > > > > wrote:
> > > > > > > > > 
> > > > > > > > > > Thanks for the KIP.
> > > > > > > > > > 
> > > > > > > > > > +1 (binding)
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > -Matthias
> > > > > > > > > > 
> > > > > > > > > > On 9/5/18 8:52 AM, John Roesler wrote:
> > > > > > > > > > > I'm a +1 (non-binding)
> > > > > > > > > > > 
> > > > > > > > > > > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov <
> > > > > 
> > > > > nizhi...@apache.org>
> > > > > > > > > > 
> > > > > > > > > > wrote:
> > > > > > > > > > > 
> > > > > > > > > > > > Dear committers.
> > > > > > > > > > > > 
> > > > > > > > > > > > Please, vote on a KIP.
> > > > > > > > > > > > 
> > > > > > > > > > > > В Пт, 31/08/2018 в 12:05 -0500, John Roesler пишет:
> > > > > > > > > > > > > Hi Nikolay,
> > > > > > > > > > > > > 
> > > > > > > > > > > > > > You can start a PR any time, but we cannot merge it
> > 
> > (and
> > > > > 
> > > > > probably
> > > > > > > > 
> > > > > > > > won't
> > > > > > > > > > 
> > > > > > > > > > do
> > > > > > > > > > > > > serious reviews) until after the KIP is voted and
> > 
> > approved.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Sometimes people start a PR during discussion just
> > 
> > to help
> > > > > > > > 
> > > > > > > > provide more
> > > > > > > > > > > > > context, but it's not required (and can also be
> > 
> > distracting
> > > > > > > > 
> > > > > > > > because the
> > > > > > > > > > > > 
> > > > > > > > > > > > KIP
> > > > > > > > > > > > > discussion should avoid implementation details).
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Let's wait one more day for any other comments and
> > 
> > plan to
> > > > > 
> > > > > start
> > > > > > > > 
> > > > > > > > the
> > > > > > > > > > 
> > > > > > > > > > vote
> > > > > > > > > > > > > on Monday if there are no other debates.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Once you start the vote, you have to leave it up for
> > 
> > at
> > > > > 
> > > > > least 72
> > > > > > > > 
> > > > > > > > hours,
> > > > > > > > > > > > 
> > > > > > > > > > > > and
> > > > > > > > > > > > > it requires 

Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-09 Thread Lucas Wang
Thanks Jun, I've updated the KIP with the new names.

Hi Joel, Becket, Dong, Ismael,
Since you've reviewed this KIP in the past, can you please review it again?
Thanks a lot!

Lucas

On Mon, Oct 8, 2018 at 6:10 PM Jun Rao  wrote:

> Hi, Lucas,
>
> Yes, the new names sound good to me.
>
> Thanks,
>
> Jun
>
> On Fri, Oct 5, 2018 at 1:12 PM, Lucas Wang  wrote:
>
> > Thanks for the suggestion, Ismael. I like it.
> >
> > Jun,
> > I'm excited to get the +1, thanks a lot!
> > Meanwhile what do you feel about renaming the metrics and config to
> >
> > ControlPlaneRequestQueueSize
> >
> > ControlPlaneNetworkProcessorIdlePercent
> >
> > ControlPlaneRequestHandlerIdlePercent
> >
> > control.plane.listener.name
> >
> > ?
> >
> >
> > Thanks,
> >
> > Lucas
> >
> > On Thu, Oct 4, 2018 at 11:38 AM Ismael Juma  wrote:
> >
> > > Have we considered "control plane" if we think "control" by itself is
> > > ambiguous? I agree with the original concern that "controller" may be
> > > confusing for something that affects all brokers.
> > >
> > > Ismael
> > >
> > >
> > > On 4 Oct 2018 11:08 am, "Lucas Wang"  wrote:
> > >
> > > Thanks Jun. I've changed the KIP with the suggested 2 step upgrade.
> > > Please take a look again when you have time.
> > >
> > > Regards,
> > > Lucas
> > >
> > >
> > > On Thu, Oct 4, 2018 at 10:06 AM Jun Rao  wrote:
> > >
> > > > Hi, Lucas,
> > > >
> > > > 200. That's a valid concern. So, we can probably just keep the
> current
> > > > name.
> > > >
> > > > 201. I am thinking that you would upgrade in the same way as changing
> > > > inter.broker.listener.name. This requires 2 rounds of rolling
> restart.
> > > In
> > > > the first round, we add the controller endpoint to the listeners w/o
> > > > setting controller.listener.name. In the second round, every broker
> > sets
> > > > controller.listener.name. At that point, the controller listener is
> > > ready
> > > > in every broker.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
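In `server.properties` terms, the two-round upgrade Jun describes might look like the following (listener names, hosts, and security protocols are illustrative, and the KIP later renames the key to `control.plane.listener.name`):

```properties
# Round 1 (all brokers): expose the new endpoint, but keep using the old path.
listeners=INTERNAL://broker1.example.com:9092,EXTERNAL://host1.example.com:9093,CONTROLLER://broker1.example.com:9091
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL,CONTROLLER:PLAINTEXT

# Round 2 (all brokers): every broker now exposes the endpoint, so switch to it.
controller.listener.name=CONTROLLER
```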
> > > >
> > > > On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang 
> > > wrote:
> > > >
> > > > > Thanks for the further comments, Jun.
> > > > >
> > > > > 200. Currently in the code base, we have the term of "ControlBatch"
> > > > related
> > > > > to
> > > > > idempotent/transactional producing. Do you think it's a concern for
> > > > reusing
> > > > > the term "control"?
> > > > >
> > > > > 201. It's not clear to me how it would work by following the same
> > > > strategy
> > > > > for "controller.listener.name".
> > > > > Say the new controller has its "controller.listener.name" set to
> the
> > > > value
> > > > > "CONTROLLER", and broker 1
> > > > > has picked up this KIP by announcing
> > > > > "endpoints": [
> > > > > "CONTROLLER://broker1.example.com:9091",
> > > > > "INTERNAL://broker1.example.com:9092",
> > > > > "EXTERNAL://host1.example.com:9093"
> > > > > ],
> > > > >
> > > > > while broker2 has not picked up the change, and is announcing
> > > > > "endpoints": [
> > > > > "INTERNAL://broker2.example.com:9092",
> > > > > "EXTERNAL://host2.example.com:9093"
> > > > > ],
> > > > > to support both broker 1 for the new behavior and broker 2 for the
> > old
> > > > > behavior, it seems the controller must
> > > > > check their published endpoints. Am I missing something?
> > > > >
> > > > > Thanks!
> > > > > Lucas
> > > > >
> > > > > On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
> > > > >
> > > > > > Hi, Lucas,
> > > > > >
> > > > > > Sorry for the delay. The updated wiki looks good to me overall.
> > Just
> > > a
> > > > > > couple more minor comments.
> > > > > >
> > > > > > 200.
> > > kafka.network:name=ControllerRequestQueueSize,type=RequestChannel:
> > > > > The
> > > > > > name ControllerRequestQueueSize gives the impression that it's
> only
> > > for
> > > > > the
> > > > > > controller broker. Perhaps we can just rename all metrics and
> > configs
> > > > > from
> > > > > > controller to control. This indicates that the threads and the
> > queues
> > > > are
> > > > > > for the control requests (as oppose to data requests).
> > > > > >
> > > > > > 201. ": In this scenario, the
> > controller
> > > > will
> > > > > > have the "controller.listener.name" config set to a value like
> > > > > > "CONTROLLER", however the broker's exposed endpoints do not have
> an
> > > > entry
> > > > > > corresponding to the new listener name. Hence the controller
> should
> > > > > > preserve the existing behavior by determining the endpoint using
> > > > > > *inter-broker-listener-name *value. The end result should be the
> > same
> > > > > > behavior as today." Currently, the controller makes connections
> > based
> > > > on
> > > > > > its local inter.broker.listener.name config without checking the
> > > > target
> > > > > > broker's ZK registration. For consistency, perhaps we can just
> > follow
> > > > the
> > > > > > same strategy for controller.listener.name. This existing
> behavior
> > > > seems
> > > > > > simpler to understand and has t

Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-10-09 Thread Nikolay Izhikov
While implementing this KIP, John Roesler, Matthias J. Sax, and I discussed
two additional changes:

1. Changes in KafkaStream#close semantics [1] :

* reject negative numbers
* make 0 just signal and return immediately (after checking the state 
once)

2. Default implementations of `fetch` methods in WindowStore [2]

These changes have been added to the KIP [3].

[1] 
https://github.com/apache/kafka/commit/6d16879c0ffc4b52a544a8664329d09101832964
[2] https://github.com/apache/kafka/pull/5759
[3] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-358%3A+Migrate+Streams+API+to+Duration+instead+of+long+ms+times
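A plain-Java sketch of the agreed `close` semantics (`CloseSemantics` is a simplified stand-in, not the actual `KafkaStreams` implementation):

```java
import java.time.Duration;

public class CloseSemantics {
    private volatile boolean running = true;

    /**
     * Models the contract above: negative timeouts are rejected, and a zero
     * timeout signals shutdown, checks the state once, and returns immediately.
     */
    public boolean close(Duration timeout) {
        if (timeout.isNegative()) {
            throw new IllegalArgumentException("timeout must not be negative");
        }
        running = false; // signal shutdown
        if (timeout.isZero()) {
            return isTerminated(); // check once, do not block
        }
        // A real implementation would block up to `timeout` for threads to stop.
        return isTerminated();
    }

    private boolean isTerminated() {
        return !running;
    }
}
```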



В Ср, 10/10/2018 в 01:16 +0300, Nikolay Izhikov пишет:
> Hello, John
> 
> I responded in discussion thread.
> 
> I'm +1 for your proposal.
> 
> В Вт, 09/10/2018 в 13:05 -0500, John Roesler пишет:
> > Hi Nikolay,
> > 
> > I have a proposal to improve the compatibility around your KIP... Do you
> > mind taking a look?
> > 
> > https://github.com/apache/kafka/pull/5759#issuecomment-428242210
> > 
> > Thanks,
> > -John
> > 
> > On Mon, Sep 24, 2018 at 3:44 PM Nikolay Izhikov  wrote:
> > 
> > > Hello, John.
> > > 
> > > Tests in my PR are green now.
> > > Please, do the review.
> > > 
> > > https://github.com/apache/kafka/pull/5682
> > > 
> > > В Пн, 24/09/2018 в 20:36 +0300, Nikolay Izhikov пишет:
> > > > Hello, John.
> > > > 
> > > > Thank you.
> > > > 
> > > > There are failing tests in my PR.
> > > > I'm fixing them right now.
> > > > 
> > > > Will mail you in the next few hours, after all tests become green again.
> > > > 
> > > > В Пн, 24/09/2018 в 11:46 -0500, John Roesler пишет:
> > > > > Hi Nikolay,
> > > > > 
> > > > > Thanks for the PR. I will review it.
> > > > > 
> > > > > -John
> > > > > 
> > > > > On Sat, Sep 22, 2018 at 2:36 AM Nikolay Izhikov 
> > > 
> > > wrote:
> > > > > 
> > > > > > Hello
> > > > > > 
> > > > > > I've opened a PR [1] for this KIP.
> > > > > > 
> > > > > > [1] https://github.com/apache/kafka/pull/5682
> > > > > > 
> > > > > > John, can you take a look?
> > > > > > 
> > > > > > В Пн, 17/09/2018 в 20:16 +0300, Nikolay Izhikov пишет:
> > > > > > > John,
> > > > > > > 
> > > > > > > Got it.
> > > > > > > 
> > > > > > > Will do my best to meet this deadline.
> > > > > > > 
> > > > > > > В Пн, 17/09/2018 в 11:52 -0500, John Roesler пишет:
> > > > > > > > Yay! Thanks so much for sticking with this Nikolay.
> > > > > > > > 
> > > > > > > > I look forward to your PR!
> > > > > > > > 
> > > > > > > > Not to put pressure on you, but just to let you know, the
> > > 
> > > deadline for
> > > > > > > > getting your pr *merged* for 2.1 is _October 1st_,
> > > > > > > > so you basically have 2 weeks to send the PR, have the reviews,
> > > 
> > > and
> > > > > > 
> > > > > > get it
> > > > > > > > merged.
> > > > > > > > 
> > > > > > > > (see
> > > > > > > > 
> > > > > > 
> > > > > > 
> > > 
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044)
> > > > > > > > 
> > > > > > > > Thanks again,
> > > > > > > > -John
> > > > > > > > 
> > > > > > > > On Mon, Sep 17, 2018 at 10:29 AM Nikolay Izhikov <
> > > 
> > > nizhi...@apache.org>
> > > > > > > > wrote:
> > > > > > > > 
> > > > > > > > > This KIP is now accepted with:
> > > > > > > > > - 3 binding +1
> > > > > > > > > - 2 non binding +1
> > > > > > > > > 
> > > > > > > > > Thanks, all.
> > > > > > > > > 
> > > > > > > > > Especially, John, Matthias, Guozhang, Bill, Damian!
> > > > > > > > > 
> > > > > > > > > В Чт, 13/09/2018 в 22:16 -0700, Guozhang Wang пишет:
> > > > > > > > > > +1 (binding), thank you Nikolay!
> > > > > > > > > > 
> > > > > > > > > > Guozhang
> > > > > > > > > > 
> > > > > > > > > > On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax <
> > > > > > 
> > > > > > matth...@confluent.io>
> > > > > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > > Thanks for the KIP.
> > > > > > > > > > > 
> > > > > > > > > > > +1 (binding)
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > -Matthias
> > > > > > > > > > > 
> > > > > > > > > > > On 9/5/18 8:52 AM, John Roesler wrote:
> > > > > > > > > > > > I'm a +1 (non-binding)
> > > > > > > > > > > > 
> > > > > > > > > > > > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov <
> > > > > > 
> > > > > > nizhi...@apache.org>
> > > > > > > > > > > 
> > > > > > > > > > > wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > > Dear committers.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Please, vote on a KIP.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > В Пт, 31/08/2018 в 12:05 -0500, John Roesler пишет:
> > > > > > > > > > > > > > Hi Nikolay,
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > You can start a PR any time, but we cannot merge it
> > > 
> > > (and
> > > > > > 
> > > > > > probably
> > > > > > > > > 
> > > > > > > > > won't
> > > > > > > > > > > 
> > > > > > > > > > > do
> > > > > > > > > > > > > > serious reviews) until after the KIP is voted and
> > > 
> > > approved.
> > > > > > > > > > > > 

Build failed in Jenkins: kafka-trunk-jdk10 #615

2018-10-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 4866c33ac309ba5cc098a02948253f55a83666a3
error: Could not read 379211134740268b570fc8edd59ae78df0dffee9
error: Could not read c7eee92ca0fe618da749d636179aacf9bc5b58a2
error: Could not read b74e7e407c0b065adf68bc45042063def922aa10
error: Could not read f26377352d14af38af5d6cf42531b940fafe7236
remote: Enumerating objects: 3637, done.
remote: Counting objects: 0% .. 51% (1855/3637) [progress output truncated]

[jira] [Resolved] (KAFKA-4514) Add Codec for ZStandard Compression

2018-10-09 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4514.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Add Codec for ZStandard Compression
> ---
>
> Key: KAFKA-4514
> URL: https://issues.apache.org/jira/browse/KAFKA-4514
> Project: Kafka
>  Issue Type: Improvement
>  Components: compression
>Reporter: Thomas Graves
>Assignee: Lee Dongjin
>Priority: Major
> Fix For: 2.1.0
>
>
> ZStandard (https://github.com/facebook/zstd and
> http://facebook.github.io/zstd/) has been in use for a while now, and v1.0 was
> recently released. Hadoop
> (https://issues.apache.org/jira/browse/HADOOP-13578) and others are adopting
> it.
>  We have done some initial trials and seen good results: Zstd seems to give
> Gzip-level compression with LZ4-level CPU usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
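[Editor's note: the ratio-vs-CPU trade-off claimed above can be measured with a small benchmark. zstd bindings are not in the Python standard library, so this sketch times only stdlib gzip as an illustration of the methodology; swapping in a zstd codec (e.g. the third-party zstandard package) would reproduce the actual comparison. Payload contents are hypothetical.]

```python
import gzip
import time

def benchmark(name, compress, data):
    """Measure compression ratio and wall-clock time for one codec."""
    start = time.perf_counter()
    compressed = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"{name}: ratio={ratio:.2f}x  time={elapsed * 1000:.1f}ms")
    return ratio

# Repetitive payload, similar in spirit to a Kafka log segment
data = b"key-0001,value-lorem-ipsum-dolor-sit-amet\n" * 50_000

ratio = benchmark("gzip", lambda d: gzip.compress(d, compresslevel=6), data)
assert ratio > 1.0  # highly repetitive data must shrink
```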


Jenkins build is back to normal : kafka-0.10.2-jdk7 #237

2018-10-09 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk10 #616

2018-10-09 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #3092

2018-10-09 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.1-jdk8 #11

2018-10-09 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-09 Thread Joel Koshy
+1
Thanks for the updated KIP.

On Tue, Oct 9, 2018 at 3:28 PM Lucas Wang  wrote:

> Thanks Jun, I've updated the KIP with the new names.
>
> Hi Joel, Becket, Dong, Ismael,
> Since you've reviewed this KIP in the past, can you please review it again?
> Thanks a lot!
>
> Lucas
>
> On Mon, Oct 8, 2018 at 6:10 PM Jun Rao  wrote:
>
>> Hi, Lucas,
>>
>> Yes, the new names sound good to me.
>>
>> Thanks,
>>
>> Jun
>>
>> On Fri, Oct 5, 2018 at 1:12 PM, Lucas Wang  wrote:
>>
>> > Thanks for the suggestion, Ismael. I like it.
>> >
>> > Jun,
>> > I'm excited to get the +1, thanks a lot!
>> > Meanwhile what do you feel about renaming the metrics and config to
>> >
>> > ControlPlaneRequestQueueSize
>> >
>> > ControlPlaneNetworkProcessorIdlePercent
>> >
>> > ControlPlaneRequestHandlerIdlePercent
>> >
>> > control.plane.listener.name
>> >
>> > ?
>> >
>> >
>> > Thanks,
>> >
>> > Lucas
>> >
>> > On Thu, Oct 4, 2018 at 11:38 AM Ismael Juma  wrote:
>> >
> > Have we considered "control plane" if we think "control" by itself is
> > ambiguous? I agree with the original concern that "controller" may be
>> > > confusing for something that affects all brokers.
>> > >
>> > > Ismael
>> > >
>> > >
>> > > On 4 Oct 2018 11:08 am, "Lucas Wang"  wrote:
>> > >
>> > > Thanks Jun. I've changed the KIP with the suggested 2 step upgrade.
>> > > Please take a look again when you have time.
>> > >
>> > > Regards,
>> > > Lucas
>> > >
>> > >
>> > > On Thu, Oct 4, 2018 at 10:06 AM Jun Rao  wrote:
>> > >
>> > > > Hi, Lucas,
>> > > >
>> > > > 200. That's a valid concern. So, we can probably just keep the
>> current
>> > > > name.
>> > > >
>> > > > 201. I am thinking that you would upgrade in the same way as
>> changing
>> > > > inter.broker.listener.name. This requires 2 rounds of rolling
>> restart.
>> > > In
>> > > > the first round, we add the controller endpoint to the listeners w/o
>> > > > setting controller.listener.name. In the second round, every broker
>> > sets
>> > > > controller.listener.name. At that point, the controller listener is
>> > > ready
>> > > > in every broker.
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Jun
>> > > >
>> > > > On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang 
>> > > wrote:
>> > > >
>> > > > > Thanks for the further comments, Jun.
>> > > > >
>> > > > > 200. Currently in the code base, we have the term of
>> "ControlBatch"
>> > > > related
>> > > > > to
>> > > > > idempotent/transactional producing. Do you think it's a concern
>> for
>> > > > reusing
>> > > > > the term "control"?
>> > > > >
>> > > > > 201. It's not clear to me how it would work by following the same
>> > > > strategy
>> > > > > for "controller.listener.name".
>> > > > > Say the new controller has its "controller.listener.name" set to
>> the
>> > > > value
>> > > > > "CONTROLLER", and broker 1
>> > > > > has picked up this KIP by announcing
>> > > > > "endpoints": [
>> > > > > "CONTROLLER://broker1.example.com:9091",
>> > > > > "INTERNAL://broker1.example.com:9092",
>> > > > > "EXTERNAL://host1.example.com:9093"
>> > > > > ],
>> > > > >
>> > > > > while broker2 has not picked up the change, and is announcing
>> > > > > "endpoints": [
>> > > > > "INTERNAL://broker2.example.com:9092",
>> > > > > "EXTERNAL://host2.example.com:9093"
>> > > > > ],
>> > > > > to support both broker 1 for the new behavior and broker 2 for the
>> > old
>> > > > > behavior, it seems the controller must
>> > > > > check their published endpoints. Am I missing something?
>> > > > >
>> > > > > Thanks!
>> > > > > Lucas
>> > > > >
>> > > > > On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
>> > > > >
>> > > > > > Hi, Lucas,
>> > > > > >
>> > > > > > Sorry for the delay. The updated wiki looks good to me overall.
>> > Just
>> > > a
>> > > > > > couple more minor comments.
>> > > > > >
>> > > > > > 200.
>> > > kafka.network:name=ControllerRequestQueueSize,type=RequestChannel:
>> > > > > The
>> > > > > > name ControllerRequestQueueSize gives the impression that it's
>> only
>> > > for
>> > > > > the
>> > > > > > controller broker. Perhaps we can just rename all metrics and
>> > configs
>> > > > > from
>> > > > > > controller to control. This indicates that the threads and the
>> > queues
>> > > > are
>> > > > > > for the control requests (as oppose to data requests).
>> > > > > >
>> > > > > > 201. ": In this scenario, the
>> > controller
>> > > > will
>> > > > > > have the "controller.listener.name" config set to a value like
>> > > > > > "CONTROLLER", however the broker's exposed endpoints do not
>> have an
>> > > > entry
>> > > > > > corresponding to the new listener name. Hence the controller
>> should
>> > > > > > preserve the existing behavior by determining the endpoint using
>> > > > > > *inter-broker-listener-name *value. The end result should be the
>> > same
>> > > > > > behavior as today." Currently, the controller makes connections
>> > based
>> > > > on
>> > > > > > its local inter.broker.listener.name config without check
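[Editor's note: the two-round rolling restart Jun describes above might look roughly like the following server.properties fragments. Listener names, hostnames, and ports are hypothetical, and the thread later renames the config to control.plane.listener.name; see the KIP for the authoritative names.]

```properties
# Round 1: add the control-plane endpoint to every broker's listeners,
# but do NOT set the dedicated listener-name config yet.
listeners=CONTROLLER://broker1.example.com:9091,INTERNAL://broker1.example.com:9092
advertised.listeners=CONTROLLER://broker1.example.com:9091,INTERNAL://broker1.example.com:9092
listener.security.protocol.map=CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL

# Round 2 (after every broker has restarted once and announces the new
# endpoint): point the control plane at the new listener, so control
# requests flow over the dedicated connection.
control.plane.listener.name=CONTROLLER
```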

Build failed in Jenkins: kafka-trunk-jdk10 #617

2018-10-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-4514; Add Codec for ZStandard Compression (#2267)

--
[...truncated 2.35 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.Topolog

Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-09 Thread Dong Lin
Hey Lucas,

Thanks for the KIP. Looks good overall. +1

I have two trivial comments which may be useful to readers.

- Can we include the default value for the new config in Public Interface
section? Typically the default value of the new config is an important part
of public interface and we usually specify it in the KIP's public interface
section.
- Can we change "whose default capacity is 20" to "whose capacity is 20"
in the section "How are controller requests handled over the dedicated
connections"? The use of the word "default" suggests that this value is
configurable.

Thanks,
Dong

On Mon, Jun 18, 2018 at 1:04 PM Lucas Wang  wrote:

> Hi All,
>
> I've addressed a couple of comments in the discussion thread for KIP-291,
> and
> got no objections after making the changes. Therefore I would like to start
> the voting thread.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-291%3A+Have+separate+queues+for+control+requests+and+data+requests
>
> Thanks for your time!
> Lucas
>


Re: [VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-10-09 Thread Manikumar
Hi All,

The vote has passed with 3 binding votes (Harsha, Rajini, Jun) and 2
non-binding votes (Priyank, Satish).

Thanks everyone for the votes.

Thanks,
Manikumar

On Wed, Oct 10, 2018 at 1:36 AM Jun Rao  wrote:

> Hi, Mani,
>
> Thanks for the KIP. +1 from me.
>
> Jun
>
> On Wed, Sep 19, 2018 at 5:19 AM, Manikumar 
> wrote:
>
> > Hi All,
> >
> > I would like to start voting on KIP-371, which adds a configuration
> option
> > for building custom SSL principal names.
> >
> > KIP:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
> >
> > Discussion Thread:
> > https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1e
> > d54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E
> >
> > Thanks,
> > Manikumar
> >
>
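[Editor's note: for readers landing on this vote-result thread, a hypothetical example of the configuration KIP-371 adds is sketched below. The rule grammar mirrors sasl.kerberos.principal.to.local.rules; consult the KIP page linked above for the authoritative syntax.]

```properties
# Extract the CN from the X.500 distinguished name of the client
# certificate and use it as the Kafka principal; fall back to the
# full DN when no rule matches.
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=.*$/$1/,DEFAULT
```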


[DISCUSS] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-09 Thread Patrick Huang
Hi All,

Please find below the KIP, which proposes the concept of broker generation to
resolve issues caused by the controller missing broker state changes and by
brokers processing outdated control requests.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-380%3A+Detect+outdated+control+requests+and+bounced+brokers+using+broker+generation

All comments are appreciated.

Best,
Zhanxiang (Patrick) Huang
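[Editor's note: the core idea of KIP-380, rejecting control requests stamped with a stale broker generation, can be sketched as follows. This is an illustrative model only; class and field names are hypothetical, and in Kafka the generation is derived from broker-registration metadata (e.g. the znode czxid in ZooKeeper), not assigned by hand.]

```python
from dataclasses import dataclass

@dataclass
class ControlRequest:
    broker_epoch: int   # generation the controller believed the broker had
    payload: str        # e.g. "LeaderAndIsr", "StopReplica", "UpdateMetadata"

class Broker:
    """Toy broker that drops control requests from before its latest
    (re)registration, per the broker-generation idea in KIP-380."""

    def __init__(self, epoch: int):
        # A bounced broker re-registers and gets a strictly larger epoch.
        self.epoch = epoch

    def handle(self, req: ControlRequest) -> bool:
        # A request carrying an older epoch was sent before this broker's
        # current registration, so it is outdated and must be ignored.
        if req.broker_epoch < self.epoch:
            return False  # outdated: drop
        return True       # current: process

broker = Broker(epoch=42)
assert broker.handle(ControlRequest(broker_epoch=42, payload="LeaderAndIsr")) is True
assert broker.handle(ControlRequest(broker_epoch=41, payload="LeaderAndIsr")) is False
```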