Build failed in Jenkins: kafka-trunk-jdk8 #4439

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[jiangjie.qj] KAFKA-9703; Free up compression buffer after splitting a large batch

[github] KAFKA-9838; Add log concurrency test and fix minor race condition


--
[...truncated 3.02 MB...]
org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task 

Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-15 Thread Boyang Chen
Thanks Raymond and Colin for the detailed discussions! I totally agree
with the rationale here. The new `Envelope` RPC has been added to the KIP
and the logic in the forwarding section has been revised; feel free to take
another look.

On Wed, Apr 15, 2020 at 5:19 PM Colin McCabe  wrote:

> Hi Boyang,
>
> I agree that we need a version bump on the request types we are going to
> forward.  The new versions will be able to return the NOT_CONTROLLER error,
> and let the client do the retrying, which is what we typically prefer.
> The  existing versions can't ever return NOT_CONTROLLER.
>
> Since we have to have a new version for all these requests, we could
> technically do everything with just optional fields, like we originally
> discussed.  However, there is probably some value in having a real
> EnvelopeRequest (or whatever) that makes it clearer what is going on.
> Optional fields don't come with "guard rails" to prevent us from
> accidentally ignoring them on older versions of the broker.  A new ApiKey
> certainly does.
>
> Another issue is that it's nice to avoid changing the version of the
> request when forwarding it.  Sometimes different versions have slightly
> different semantics, and it simplifies things to avoid worrying about that.
>
> We should restrict the use of forwarding to just principals that have
> CLUSTERACTION on CLUSTER for now, so that only the brokers and superusers
> can do it.
>
> best,
> Colin
>
>
> On Tue, Apr 14, 2020, at 13:15, Boyang Chen wrote:
> > Thanks Raymond for the proposal! Yea, adding a unified forwarding envelope
> > request is a good idea, but it doesn't really solve our problem in this KIP.
> >
> > On Mon, Apr 13, 2020 at 11:14 PM Raymond Ng  wrote:
> >
> > > Hi Boyang,
> > >
> > > Thanks for the KIP. Overall looks great.
> > >
> > > One suggestion: instead of bumping the version and adding an optional field
> > > (PrincipalName) for a number of requests, can we consider adding a general
> > > ProxyRequest that acts as an "envelope" for the forwarded requests?
> > >
> > > A few advantages to this approach come to mind:
> > >
> > >    1. Add one (new) Request API instead of modifying a number of them
> > >    2. Make the forwarded nature of the request explicit instead of
> > >    implicitly relying on an optional field and a specific version that
> > >    varies by type.
> > >    3. This approach is flexible enough to be potentially useful beyond
> > >    the current use case (e.g. federated, inter-cluster scenarios)
> > >
> > > As a bonus, the combination of 1. and 2. should also simplify
> > > implementation & validation.
> > >
> > >
> > Firstly, the broker has to differentiate between old and new admin clients,
> > as it should only support forwarding for old ones. Without a version bump,
> > the broker couldn't tell them apart. Besides, bumping the existing protocol
> > is not a big overhead compared with adding a new RPC, so I don't worry too
> > much about the complexity here.
> >
> >
> > > On the other hand, it's not clear if the underlying RPC request
> > > encoding/decoding machinery supports embedded requests. Hopefully, even if
> > > it doesn't, it would not be too difficult to extend.
> > >
> >
> > Making the forwarding behavior more general is great, but it could also come
> > with problems we can't anticipate, such as potential abuse, more changes to
> > the auto-generation framework, and increased metadata overhead. At the
> > moment, we don't expect direct forwarding to be a bottleneck, so I'm more
> > inclined to keep this proposal as simple as possible for now. If we do have
> > a strong need in the future, a ProxyRequest is definitely worth the effort.
> >
> >
> > > What do you think?
> > >
> > > Regards,
> > > Ray
> > >
> > >
> > > On Wed, Apr 8, 2020 at 4:36 PM Boyang Chen  >
> > > wrote:
> > >
> > > > Thanks for the info Agam! Will add to the KIP.
> > > >
> > > > On Wed, Apr 8, 2020 at 4:26 PM Agam Brahma 
> wrote:
> > > >
> > > > > Hi Boyang,
> > > > >
> > > > > The KIP already talks about incorporating changes for FindCoordinator
> > > > > request routing; I wanted to point out one additional case where internal
> > > > > topics are created "as a side effect":
> > > > >
> > > > > As part of handling metadata requests, if we are looking for metadata for
> > > > > an internal topic and auto-topic-creation is enabled [1], the broker
> > > > > currently goes ahead and creates the internal topic in the same way [2] as
> > > > > it would for the FindCoordinator request.
> > > > >
> > > > > -Agam
> > > > >
> > > > > [1]
> > > > >
> > > > >
> > > >
> > >
> https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1096
> > > > > [2]
> > > > >
> > > > >
> > > >
> > >
> https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1041
> > > > >
> > > > >
> > > > >
> > 

Build failed in Jenkins: kafka-trunk-jdk11 #1358

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9864: Avoid expensive QuotaViolationException usage (#8477)

[github] MINOR: Serialize state change logs for handling LeaderAndIsr and

[github] KAFKA-9832: extend Kafka Streams EOS system test (#8440)

[github] MINOR: add process(Test)Messages to the README (#8480)

[github] HOTFIX: don't close or wipe out someone else's state (#8478)

[github] MINOR: KafkaApis#handleOffsetDeleteRequest does not group result

[github] KAFKA-7885: TopologyDescription violates equals-hashCode contract.

[github] KAFKA-9779: Add Stream system test for 2.5 release (#8378)

[jiangjie.qj] KAFKA-9703; Free up compression buffer after splitting a large batch

[github] KAFKA-9838; Add log concurrency test and fix minor race condition


--
[...truncated 6.09 MB...]
org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task 

Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Navinder Brar
Thanks for running the release, David. Congratulations to everyone involved.

-Navinder Pal Singh Brar
 

On Thursday, 16 April, 2020, 07:26:25 am IST, Konstantine Karantasis wrote:
 
Thanks for driving the release, David, and congrats to all the contributors.
Nicely done!

Konstantine

On Wed, Apr 15, 2020 at 3:16 PM Randall Hauch  wrote:

> Thanks, David!
>
> Congratulations to the whole AK community, and thanks to everyone that
> contributed!
>
> On Wed, Apr 15, 2020 at 4:47 PM Sönke Liebau
>  wrote:
>
> > Thanks David!!
> >
> >
> > On Wed, 15 Apr 2020 at 23:07, Bill Bejeck  wrote:
> >
> > > David,
> > >
> > > Thanks for running the release!
> > >
> > > -Bill
> > >
> > > On Wed, Apr 15, 2020 at 4:45 PM Matthias J. Sax 
> > wrote:
> > >
> > > > Thanks for running the release David!
> > > >
> > > > -Matthias
> > > >
> > > > On 4/15/20 1:15 PM, David Arthur wrote:
> > > > > The Apache Kafka community is pleased to announce the release for
> > > Apache
> > > > > Kafka 2.5.0
> > > > >
> > > > > This release includes many new features, including:
> > > > >
> > > > > * TLS 1.3 support (1.2 is now the default)
> > > > > * Co-groups for Kafka Streams
> > > > > * Incremental rebalance for Kafka Consumer
> > > > > * New metrics for better operational insight
> > > > > * Upgrade Zookeeper to 3.5.7
> > > > > * Deprecate support for Scala 2.11
> > > > >
> > > > > All of the changes in this release can be found in the release
> notes:
> > > > > https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
> > > > >
> > > > >
> > > > > You can download the source and binary release (Scala 2.12 and
> 2.13)
> > > > from:
> > > > > https://kafka.apache.org/downloads#2.5.0
> > > > >
> > > > >
> > > > > ---
> > > > >
> > > > >
> > > > > Apache Kafka is a distributed streaming platform with four core
> APIs:
> > > > >
> > > > >
> > > > > ** The Producer API allows an application to publish a stream of
> > > > > records to one or more Kafka topics.
> > > > >
> > > > > ** The Consumer API allows an application to subscribe to one or
> more
> > > > > topics and process the stream of records produced to them.
> > > > >
> > > > > ** The Streams API allows an application to act as a stream
> > processor,
> > > > > consuming an input stream from one or more topics and producing an
> > > > > output stream to one or more output topics, effectively
> transforming
> > > the
> > > > > input streams to output streams.
> > > > >
> > > > > ** The Connector API allows building and running reusable producers
> > or
> > > > > consumers that connect Kafka topics to existing applications or
> data
> > > > > systems. For example, a connector to a relational database might
> > > > > capture every change to a table.
> > > > >
> > > > >
> > > > > With these APIs, Kafka can be used for two broad classes of
> > > application:
> > > > >
> > > > > ** Building real-time streaming data pipelines that reliably get
> data
> > > > > between systems or applications.
> > > > >
> > > > > ** Building real-time streaming applications that transform or
> react
> > > > > to the streams of data.
> > > > >
> > > > >
> > > > > Apache Kafka is in use at large and small companies worldwide,
> > > including
> > > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > > Rabobank,
> > > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > > >
> > > > > A big thank you to the following 108 contributors to this release!
> > > > >
> > > > > A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev,
> > Alex
> > > > > Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna
> Povzner,
> > > > Antony
> > > > > Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob
> > > > Barrett,
> > > > > Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji,
> > > > Chia-Ping
> > > > > Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P.
> > Mccabe,
> > > > > Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David
> > > > Arthur,
> > > > > David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah,
> Edoardo
> > > > Comar,
> > > > > Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris,
> > > Gunnar
> > > > > Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein
> > > > Torabi,
> > > > > huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson,
> > jiameixie,
> > > > John
> > > > > Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar,
> > Kevin
> > > > Lu,
> > > > > Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani
> > Kokhreidze,
> > > > > Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong,
> > Matthias
> > > J.
> > > > > Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> > > > > mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> > > > > nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, 
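
The announcement above sketches Kafka's four core client APIs at a high level.
As a minimal, self-contained illustration of the Producer API (an editorial
sketch: the broker address and the topic name "events" are assumptions, not
taken from the announcement):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Publish a single record to the assumed topic "events"; the producer
        // is closed automatically, flushing any buffered records.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value"));
        }
    }
}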

Re: [DISCUSS] KIP-594: Expose output topic names from TopologyTestDriver

2020-04-15 Thread John Roesler
Hi Andy,

Thanks for the KIP!

To Matthias’s concern, I have to agree that the motivation doesn’t build a 
particularly strong case for the KIP, and people often look back to these 
documents to understand why something was done, and done a particular way. So, 
it would be nice to flesh it out a bit more. I have some thoughts below on how 
to do this. 

Matthias also raised a good point that many applications can get the same 
information just by looking in the TopologyDescription. At the least, this 
alternative deserves to be called out in the “Rejected Alternatives” section, 
with an explanation of why the chosen approach is preferable.

For my part, I can think of a couple of reasons to add the method to 
TopologyTestDriver instead of TopologyDescription.

First, although it’s rare, some output topics are determined dynamically by the 
data. In particular, we offer sink nodes with a TopicNameExtractor. Such topics 
could only be verified by looking at the actual topics used, rather than the 
ones specified in the topology.
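
As a concrete sketch of such a dynamic sink using the existing Streams API
(the "out-" prefix naming scheme and the topic names here are purely
illustrative):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;

public class DynamicSinkExample {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               // The sink topic is derived from each record's key at runtime,
               // e.g. key "orders" -> topic "out-orders", so the concrete topic
               // names never appear in the TopologyDescription.
               .to((key, value, recordContext) -> "out-" + key);
        return builder.build();
    }
}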

Second is a more philosophical justification, that the TopologyTestDriver 
performs some functions of KafkaStreams (processing, serving IQ), but also some 
functions of the brokers (simulating input and output topics). A common 
integration testing strategy is to run an application and then query the 
brokers for the list of topics that the app creates/uses, to make sure the app 
won’t run afoul of ACLs in production, or just to make sure it doesn’t contain 
a bug that produces to unexpected topics, or any number of other reasons. Your 
proposal allows the TTD to partly fill this role, which seems natural 
considering its present broker-esque capabilities.

As for my own feedback, I’m mostly concerned about the method name and 
contract. The proposed name mentions “output” topics, but I think most people 
would associate this with sink topics, probably not with repartition topics, 
and certainly not with changelog topics. I’m sure this also crossed your mind, 
and I’m afraid I don’t have a brilliant suggestion. Maybe 
getProducedTopicNames? Or, we could include the input topics as well and go 
with getTopicNames() as a listing of all the topics that the app “touches”?

One other point, I’m actually mildly concerned about what you say regarding the 
outputs to the repartition and changelog topics being captured and verified 
later. I would consider the data in these topics to be a private interface, and 
would strongly discourage user code to depend on it in any way, including in 
tests. The reason is that we need to maintain the flexibility to change how 
those topics are used at any time, and frequently do, to implement features, fix
bugs, etc. If we needed a KIP and a deprecation period for that data, we would be
in for a lot of trouble. The only reason I wouldn’t consider actually removing 
those topics from TTD is that they’re exposed in the brokers anyway. But seeing 
this in the motivation gives me a little heartburn. 

Finally, a minor point: the Javadoc you proposed contradicts your proposal for 
Phase 2, in that the doc says it prints all the topics to which records have 
been output, but Phase 2 says we’ll include all the topics from the description 
up front, before any outputs happen. 

Anyway, thanks again for the KIP. It does seem useful to me, and I hope my 
feedback helps speed you to a successful vote!

Thanks,
John

On Tue, Apr 14, 2020, at 19:49, Matthias J. Sax wrote:
> Andy,
> 
> thanks for the KIP. The motivation is a little unclear to me:
> 
> > This information allows all the outputs of a test run to be captured 
> > without prior knowledge of the output topics.
> 
> Given that the TTD user writes the `Topology` themselves, they should
> always have prior knowledge of what output topics they use. So why would
> this be useful?
> 
> Also, there is `Topology#describe()` to get all topic names (even the
> names of internal topics -- to be fair, changelog topic names are not
> exposed, only store names, but those can be used to infer changelog
> topic names, too).
> 
> Can you elaborate about the motivation? So far, it's not convincing to me.
> 
> 
> 
> -Matthias
> 
> 
> On 4/14/20 8:09 AM, Andy Coates wrote:
> > Hey all,
> > I would like to start off the discussion for KIP-594:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver
> > 
> > This KIP proposes to expose, from the TopologyTestDriver class, the names of
> > the topics to which a topology produces records during a test run.
> > 
> > Let me know your thoughts!
> > Andy
> > 
> 
> 
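
For context on the TopologyDescription alternative discussed in this thread,
statically declared sink topics can be enumerated roughly as follows (the
Streams interfaces used are real, but the traversal itself is an illustrative
sketch, not code from the KIP):

import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyDescription;

public class SinkTopicLister {
    // Collects statically declared sink topic names. Sinks built on a
    // TopicNameExtractor have no static topic (topic() returns null) and are
    // skipped, which is exactly the gap John describes above.
    public static Set<String> sinkTopics(Topology topology) {
        Set<String> topics = new HashSet<>();
        for (TopologyDescription.Subtopology sub : topology.describe().subtopologies()) {
            for (TopologyDescription.Node node : sub.nodes()) {
                if (node instanceof TopologyDescription.Sink) {
                    String topic = ((TopologyDescription.Sink) node).topic();
                    if (topic != null) {
                        topics.add(topic);
                    }
                }
            }
        }
        return topics;
    }
}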


Jenkins build is back to normal : kafka-trunk-jdk8 #4438

2020-04-15 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-596 Safely abort Producer transactions during application shutdown

2020-04-15 Thread 张祥
Hi everyone,

I have opened a small KIP about safely aborting transactions during
shutdown. I'd like to use this thread to discuss it, and any feedback
is appreciated (sorry for the earlier KIP number mistake). Here is a link to
KIP-596:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-596%3A+Safely+abort+Producer+transactions+during+application+shutdown

Thank you!
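
For background, the shutdown sequence the KIP wants to make safer looks roughly
like the following with today's producer API (a minimal sketch: the broker
address, transactional.id, and topic name are assumed for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker
        props.put("transactional.id", "demo-txn-id");      // assumed id
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("events", "k", "v"));
            producer.commitTransaction();
        } catch (ProducerFencedException e) {
            // Another producer with the same transactional.id took over;
            // aborting is not allowed here, only closing.
            producer.close();
        } catch (KafkaException e) {
            // Abort on other errors so the open transaction does not linger
            // until the broker-side transaction timeout expires.
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}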


Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Konstantine Karantasis
Thanks for driving the release, David, and congrats to all the contributors.
Nicely done!

Konstantine

On Wed, Apr 15, 2020 at 3:16 PM Randall Hauch  wrote:

> Thanks, David!
>
> Congratulations to the whole AK community, and thanks to everyone that
> contributed!
>
> On Wed, Apr 15, 2020 at 4:47 PM Sönke Liebau
>  wrote:
>
> > Thanks David!!
> >
> >
> > On Wed, 15 Apr 2020 at 23:07, Bill Bejeck  wrote:
> >
> > > David,
> > >
> > > Thanks for running the release!
> > >
> > > -Bill
> > >
> > > On Wed, Apr 15, 2020 at 4:45 PM Matthias J. Sax 
> > wrote:
> > >
> > > > Thanks for running the release David!
> > > >
> > > > -Matthias
> > > >
> > > > On 4/15/20 1:15 PM, David Arthur wrote:
> > > > > The Apache Kafka community is pleased to announce the release for
> > > Apache
> > > > > Kafka 2.5.0
> > > > >
> > > > > This release includes many new features, including:
> > > > >
> > > > > * TLS 1.3 support (1.2 is now the default)
> > > > > * Co-groups for Kafka Streams
> > > > > * Incremental rebalance for Kafka Consumer
> > > > > * New metrics for better operational insight
> > > > > * Upgrade Zookeeper to 3.5.7
> > > > > * Deprecate support for Scala 2.11
> > > > >
> > > > > All of the changes in this release can be found in the release
> notes:
> > > > > https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
> > > > >
> > > > >
> > > > > You can download the source and binary release (Scala 2.12 and
> 2.13)
> > > > from:
> > > > > https://kafka.apache.org/downloads#2.5.0
> > > > >
> > > > >
> > > > > ---
> > > > >
> > > > >
> > > > > Apache Kafka is a distributed streaming platform with four core
> APIs:
> > > > >
> > > > >
> > > > > ** The Producer API allows an application to publish a stream of
> > > > > records to one or more Kafka topics.
> > > > >
> > > > > ** The Consumer API allows an application to subscribe to one or
> more
> > > > > topics and process the stream of records produced to them.
> > > > >
> > > > > ** The Streams API allows an application to act as a stream
> > processor,
> > > > > consuming an input stream from one or more topics and producing an
> > > > > output stream to one or more output topics, effectively
> transforming
> > > the
> > > > > input streams to output streams.
> > > > >
> > > > > ** The Connector API allows building and running reusable producers
> > or
> > > > > consumers that connect Kafka topics to existing applications or
> data
> > > > > systems. For example, a connector to a relational database might
> > > > > capture every change to a table.
> > > > >
> > > > >
> > > > > With these APIs, Kafka can be used for two broad classes of
> > > application:
> > > > >
> > > > > ** Building real-time streaming data pipelines that reliably get
> data
> > > > > between systems or applications.
> > > > >
> > > > > ** Building real-time streaming applications that transform or
> react
> > > > > to the streams of data.
> > > > >
> > > > >
> > > > > Apache Kafka is in use at large and small companies worldwide,
> > > including
> > > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > > Rabobank,
> > > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > > >
> > > > > A big thank you to the following 108 contributors to this release!
> > > > >
> > > > > A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev,
> > Alex
> > > > > Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna
> Povzner,
> > > > Antony
> > > > > Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob
> > > > Barrett,
> > > > > Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji,
> > > > Chia-Ping
> > > > > Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P.
> > Mccabe,
> > > > > Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David
> > > > Arthur,
> > > > > David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah,
> Edoardo
> > > > Comar,
> > > > > Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris,
> > > Gunnar
> > > > > Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein
> > > > Torabi,
> > > > > huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson,
> > jiameixie,
> > > > John
> > > > > Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar,
> > Kevin
> > > > Lu,
> > > > > Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani
> > Kokhreidze,
> > > > > Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong,
> > Matthias
> > > J.
> > > > > Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> > > > > mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> > > > > nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc,
> > > Omkar
> > > > > Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
> > > > > Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee,
> > > 

Re: [DISCUSS] KIP-594 Safely abort Producer transactions during application shutdown

2020-04-15 Thread 张祥
Thanks for the reminder; I will change my KIP number and start a new thread
now.

Kowshik Prakasam wrote on Thursday, April 16, 2020 at 9:49 AM:

> Hi,
>
> It appears "KIP-594" is already taken. Please see this existing link:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver
>  .
> To avoid a duplicate, please change your KIP number to pick the next
> available KIP number, as mentioned here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals#KafkaImprovementProposals-KIPround-up
>
> Once you are done, I'd suggest that you please start a separate discussion
> thread with the new KIP number.
>
>
> Cheers,
> Kowshik
>
>
> On Wed, Apr 15, 2020 at 6:42 PM 张祥  wrote:
>
> > Hi everyone,
> >
> > I have opened a small KIP about safely aborting transactions during
> > shutdown. I'd like to use this thread to discuss it, and any feedback
> > is appreciated. Here is a link to KIP-594:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Safely+abort+Producer+transactions+during+application+shutdown
> >
> > Thank you!
> >
>


Re: [DISCUSS] KIP-594 Safely abort Producer transactions during application shutdown

2020-04-15 Thread Kowshik Prakasam
Hi,

It appears "KIP-594" is already taken. Please see this existing link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver
 .
To avoid a duplicate, please change your KIP number to pick the next
available KIP number, as mentioned here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals#KafkaImprovementProposals-KIPround-up

Once you are done, I'd suggest that you please start a separate discussion
thread with the new KIP number.


Cheers,
Kowshik


On Wed, Apr 15, 2020 at 6:42 PM 张祥  wrote:

> Hi everyone,
>
> I have opened a small KIP about safely aborting transactions during
> shutdown. I'd like to use this thread to discuss it, and any feedback
> is appreciated. Here is a link to KIP-594:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Safely+abort+Producer+transactions+during+application+shutdown
>
> Thank you!
>


[DISCUSS] KIP-594 Safely abort Producer transactions during application shutdown

2020-04-15 Thread 张祥
Hi everyone,

I have opened a small KIP about safely aborting transactions during
shutdown. I'd like to use this thread to discuss it, and any feedback
is appreciated. Here is a link to KIP-594:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Safely+abort+Producer+transactions+during+application+shutdown

Thank you!


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Thanks for the feedback! I have addressed the comments in the KIP.

> 200. In the validation section, there is still the text  "*from*
> {"max_version_level":
> X} *to* {"max_version_level": X’}". It seems that it should say "from X to
> Y"?

(Kowshik): Done. I have reworded it a bit to make it clearer now in this
section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations

> 110. Could we add that we need to document the bumped version of each
> feature in the upgrade section of a release?

(Kowshik): Great point! Done, I have mentioned it in #3 of this section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Whentouseversionedfeatureflags?


Cheers,
Kowshik

On Wed, Apr 15, 2020 at 4:00 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Looks good to me now. Just a couple of minor things below.
>
> 200. In the validation section, there is still the text  "*from*
> {"max_version_level":
> X} *to* {"max_version_level": X’}". It seems that it should say "from X to
> Y"?
>
> 110. Could we add that we need to document the bumped version of each
> feature in the upgrade section of a release?
>
> Thanks,
>
> Jun
>
> On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thank you for the suggestion! I have updated the KIP, please find my
> > response below.
> >
> > > 200. I guess you are saying only when the allowDowngrade field is set,
> > the
> > > finalized feature version can go backward. Otherwise, it can only go
> up.
> > > That makes sense. It would be useful to make that clear when explaining
> > > the usage of the allowDowngrade field. In the validation section, we
> > have  "
> > > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> > it
> > > seems that we need to mention Y there.
> >
> > (Kowshik): Great point! Yes, that is correct. Done, I have updated the
> > validations
> > section explaining the above. Here is a link to this section:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> >
> >
> > On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
> >
> > > Hi, Kowshik,
> > >
> > > 200. I guess you are saying only when the allowDowngrade field is set,
> > the
> > > finalized feature version can go backward. Otherwise, it can only go
> up.
> > > That makes sense. It would be useful to make that clear when explaining
> > > the usage of the allowDowngrade field. In the validation section, we
> have
> > > "
> > > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> > it
> > > seems that we need to mention Y there.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam <
> > kpraka...@confluent.io>
> > > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Great question! Please find my response below.
> > > >
> > > > > 200. My understanding is that if the CLI tool passes the
> > > > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> > > > > I was wondering how the controller remembers this since it can be
> > > > > restarted over time?
> > > >
> > > > (Kowshik): The purpose of the flag was just to restrict the user intent
> > > > for a specific request.
> > > > It seems to me that, to avoid confusion, I could call the flag
> > > > `--try-downgrade` instead.
> > > > That makes it clear that the controller just has to treat the ask from
> > > > the user as an explicit request to attempt a downgrade.
> > > >
> > > > The flag does not act as an override on the controller's decision-making
> > > > about whether a feature is downgradable (decisions on whether to allow a
> > > > feature to be downgraded from a specific version level can be embedded in
> > > > the controller code).
> > > >
> > > > Please let me know what you think.
> > > > Sorry if I misunderstood the original question.
> > > >
> > > >
> > > > Cheers,
> > > > Kowshik
> > > >
> > > >
> > > > On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
> > > >
> > > > > Hi, Kowshik,
> > > > >
> > > > > Thanks for the reply. Makes sense. Just one more question.
> > > > >
> > > > > 200. My understanding is that if the CLI tool passes the
> > > > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> > > > > I was wondering how the controller remembers this since it can be
> > > > > restarted over time?
> > > > >
> > > > > Jun
> > > > >
> > > > >
> 
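
To summarize the validation rule under discussion in this thread, it reduces to
something like the following sketch (the names mirror the KIP draft's
max_version_level field and allowDowngrade flag, and may change before the KIP
is finalized):

public class FeatureUpdateValidationSketch {
    // Illustrative check for a finalized feature update: without the
    // allowDowngrade flag, max_version_level may only move forward.
    static void validate(short currentLevel, short requestedLevel, boolean allowDowngrade) {
        if (requestedLevel < currentLevel && !allowDowngrade) {
            throw new IllegalArgumentException(
                "Downgrade of max_version_level from " + currentLevel + " to "
                    + requestedLevel + " requires the allowDowngrade flag");
        }
    }
}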

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Sorry the links were broken in my last response, here are the right links:

200. https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations

110. https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Whentouseversionedfeatureflags?



Cheers,
Kowshik

On Wed, Apr 15, 2020 at 6:24 PM Kowshik Prakasam 
wrote:

>
> Hi Jun,
>
> Thanks for the feedback! I have addressed the comments in the KIP.
>
> > 200. In the validation section, there is still the text  "*from*
> > {"max_version_level":
> > X} *to* {"max_version_level": X’}". It seems that it should say "from X
> to
> > Y"?
>
> (Kowshik): Done. I have reworded it a bit to make it clearer now in this
> section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>
> > 110. Could we add that we need to document the bumped version of each
> > feature in the upgrade section of a release?
>
> (Kowshik): Great point! Done, I have mentioned it in #3 of this section:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Whentouseversionedfeatureflags?
>
>
> Cheers,
> Kowshik
>
> On Wed, Apr 15, 2020 at 4:00 PM Jun Rao  wrote:
>
>> Hi, Kowshik,
>>
>> Looks good to me now. Just a couple of minor things below.
>>
>> 200. In the validation section, there is still the text  "*from*
>> {"max_version_level":
>> X} *to* {"max_version_level": X’}". It seems that it should say "from X to
>> Y"?
>>
>> 110. Could we add that we need to document the bumped version of each
>> feature in the upgrade section of a release?
>>
>> Thanks,
>>
>> Jun
>>
>> On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
>> wrote:
>>
>> > Hi Jun,
>> >
>> > Thank you for the suggestion! I have updated the KIP, please find my
>> > response below.
>> >
>> > > 200. I guess you are saying only when the allowDowngrade field is set,
>> > the
>> > > finalized feature version can go backward. Otherwise, it can only go
>> up.
>> > > That makes sense. It would be useful to make that clear when
>> explaining
>> > > the usage of the allowDowngrade field. In the validation section, we
>> > have  "
>> > > /features' from {"max_version_level": X} to {"max_version_level":
>> X’}",
>> > it
>> > > seems that we need to mention Y there.
>> >
>> > (Kowshik): Great point! Yes, that is correct. Done, I have updated the
>> > validations
>> > section explaining the above. Here is a link to this section:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> >
>> >
>> >
>> > On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
>> >
>> > > Hi, Kowshik,
>> > >
>> > > 200. I guess you are saying only when the allowDowngrade field is set,
>> > the
>> > > finalized feature version can go backward. Otherwise, it can only go
>> up.
>> > > That makes sense. It would be useful to make that clear when
>> explaining
>> > > the usage of the allowDowngrade field. In the validation section, we
>> have
>> > > "
>> > > /features' from {"max_version_level": X} to {"max_version_level":
>> X’}",
>> > it
>> > > seems that we need to mention Y there.
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > > On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam <
>> > kpraka...@confluent.io>
>> > > wrote:
>> > >
>> > > > Hi Jun,
>> > > >
>> > > > Great question! Please find my response below.
>> > > >
>> > > > > 200. My understanding is that if the CLI tool passes the
>> > > > > '--allow-downgrade' flag when updating a specific feature, then a future
>> > > > > downgrade is possible. Otherwise, the feature is not downgradable. If
>> > > > > so, I was wondering how the controller remembers this since it can be
>> > > > > restarted over time?
>> > > >
>> > > > (Kowshik): The purpose of the flag was just to restrict the user intent
>> > > > for a specific request.
>> > > > It seems to me that, to avoid confusion, I could call the flag
>> > > > `--try-downgrade` instead.
>> > > > That makes it clear that the controller just has to treat the ask from
>> > > > the user as an explicit request to attempt a downgrade.
>> > > >
>> > > > The flag does not act as an override on controller's decision making
>> > that
>> > > > 

Build failed in Jenkins: kafka-2.5-jdk8 #95

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[github] Kafka 9739: Fix for 2.5 branch (#8492)

[matthias] KAFKA-9675: Fix bug that prevents RocksDB metrics to be updated (#8256)

[matthias] KAFKA-9540: Move "Could not find the standby task while closing it" log


--
[...truncated 2.91 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 

Build failed in Jenkins: kafka-trunk-jdk11 #1357

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[gwen] MINOR: avoid autoboxing in FetchRequest.PartitionData.equals

[github] MINOR: Eliminate unnecessary partition lookups (#8484)


--
[...truncated 1.84 MB...]
kafka.api.PlaintextAdminIntegrationTest > testInvalidIncrementalAlterConfigs PASSED

kafka.api.PlaintextAdminIntegrationTest > testSeekAfterDeleteRecords STARTED

kafka.api.PlaintextAdminIntegrationTest > testSeekAfterDeleteRecords PASSED

kafka.api.PlaintextAdminIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.PlaintextAdminIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.PlaintextAdminIntegrationTest > testNullConfigs STARTED

kafka.api.PlaintextAdminIntegrationTest > testNullConfigs PASSED

kafka.api.PlaintextAdminIntegrationTest > testDescribeConfigsForTopic STARTED

kafka.api.PlaintextAdminIntegrationTest > testDescribeConfigsForTopic PASSED

kafka.api.PlaintextAdminIntegrationTest > testConsumerGroups STARTED

kafka.api.PlaintextAdminIntegrationTest > testConsumerGroups PASSED

kafka.api.PlaintextAdminIntegrationTest > testElectUncleanLeadersWhenNoLiveBrokers STARTED

kafka.api.PlaintextAdminIntegrationTest > testElectUncleanLeadersWhenNoLiveBrokers PASSED

kafka.api.PlaintextAdminIntegrationTest > testCreateExistingTopicsThrowTopicExistsException STARTED

kafka.api.PlaintextAdminIntegrationTest > testCreateExistingTopicsThrowTopicExistsException PASSED

kafka.api.PlaintextAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.PlaintextAdminIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.PlaintextAdminIntegrationTest > testAuthorizedOperations STARTED

kafka.api.PlaintextAdminIntegrationTest > testAuthorizedOperations PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDescribe STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclDescribe PASSED

kafka.api.SaslSslAdminIntegrationTest > testLegacyAclOpsNeverAffectOrReturnPrefixed STARTED

kafka.api.SaslSslAdminIntegrationTest > testLegacyAclOpsNeverAffectOrReturnPrefixed PASSED

kafka.api.SaslSslAdminIntegrationTest > testCreateTopicsResponseMetadataAndConfig STARTED

kafka.api.SaslSslAdminIntegrationTest > testCreateTopicsResponseMetadataAndConfig PASSED

kafka.api.SaslSslAdminIntegrationTest > 

Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-15 Thread Colin McCabe
Hi Boyang,

I agree that we need a version bump on the request types we are going to 
forward.  The new versions will be able to return the NOT_CONTROLLER error, and 
let the client do the retrying, which is what we typically prefer.  The  
existing versions can't ever return NOT_CONTROLLER.

Since we have to have a new version for all these requests, we could 
technically do everything with just optional fields, like we originally 
discussed.  However, there is probably some value in having a real 
EnvelopeRequest (or whatever) that makes it clearer what is going on.  Optional 
fields don't come with "guard rails" to prevent us from accidentally ignoring 
them on older versions of the broker.  A new ApiKey certainly does.
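
As a sketch of the envelope idea, the wrapper carries the embedded request's
bytes untouched, together with metadata about the original caller. Every field
name below is hypothetical, since the actual EnvelopeRequest schema is defined
by the KIP:

// Hypothetical shape of a request envelope, for illustration only.
public final class EnvelopeRequestSketch {
    final short wrappedApiKey;        // ApiKey of the embedded request
    final short wrappedApiVersion;    // version preserved from the client
    final String initialPrincipal;    // principal of the original caller
    final byte[] wrappedRequestData;  // serialized embedded request, unmodified

    EnvelopeRequestSketch(short apiKey, short apiVersion,
                          String principal, byte[] data) {
        this.wrappedApiKey = apiKey;
        this.wrappedApiVersion = apiVersion;
        this.initialPrincipal = principal;
        this.wrappedRequestData = data;
    }
}

Carrying the embedded request's ApiKey and version unchanged is what avoids the
version-translation concern described next.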

Another issue is that it's nice to avoid changing the version of the request 
when forwarding it.  Sometimes different versions have slightly different 
semantics, and it simplifies things to avoid worrying about that.

We should restrict the use of forwarding to just principals that have 
CLUSTERACTION on CLUSTER for now, so that only the brokers and superusers can 
do it.

best,
Colin


On Tue, Apr 14, 2020, at 13:15, Boyang Chen wrote:
> Thanks Raymond for the proposal! Yea, adding a unified forwarding envelope
> request is a good idea, but it doesn't really solve our problem in this KIP.
> 
> On Mon, Apr 13, 2020 at 11:14 PM Raymond Ng  wrote:
> 
> > Hi Boyang,
> >
> > Thanks for the KIP. Overall looks great.
> >
> > One suggestion: instead of bumping the version and adding an optional field
> > (PrincipalName) for a number of requests, can we consider adding a general
> > ProxyRequest that acts as an "envelope" for the forwarded requests?
> >
> > A few advantages to this approach come to mind:
> >
> >    1. Add one (new) Request API instead of modifying a number of them
> >    2. Make the forwarded nature of the request explicit instead of
> >    implicitly relying on an optional field and a specific version that
> >    varies by type.
> >    3. This approach is flexible enough to be potentially useful beyond the
> >    current use case (e.g. federated, inter-cluster scenarios)
> >
> > As a bonus, the combination of 1. and 2. should also simplify
> > implementation & validation.
> >
> >
> Firstly, the broker has to differentiate between old and new admin clients,
> as it should only support forwarding for old ones. Without a version bump,
> the broker couldn't tell them apart. Besides, bumping the existing protocol
> is not a big overhead compared with adding a new RPC, so I don't worry too
> much about the complexity here.
> 
> 
> > On the other hand, it's not clear if the underlying RPC request
> > encoding/decoding machinery supports embedded requests. Hopefully, even if it
> > doesn't, it would not be too difficult to extend.
> >
> 
> Making the forwarding behavior more general is great, but it could also come
> with problems we can't anticipate, such as potential abuse, more changes to
> the auto-generation framework, and increased metadata overhead. At the
> moment, we don't expect direct forwarding to be a bottleneck, so I'm more
> inclined to keep this proposal as simple as possible for now. If we do have a
> strong need in the future, a ProxyRequest is definitely worth the effort.
> 
> 
> > What do you think?
> >
> > Regards,
> > Ray
> >
> >
> > On Wed, Apr 8, 2020 at 4:36 PM Boyang Chen 
> > wrote:
> >
> > > Thanks for the info Agam! Will add to the KIP.
> > >
> > > On Wed, Apr 8, 2020 at 4:26 PM Agam Brahma  wrote:
> > >
> > > > Hi Boyang,
> > > >
> > > > The KIP already talks about incorporating changes for FindCoordinator
> > > > request routing; I wanted to point out one additional case where internal
> > > > topics are created "as a side effect":
> > > >
> > > > As part of handling metadata requests, if we are looking for metadata for
> > > > an internal topic and auto-topic-creation is enabled [1], the broker
> > > > currently goes ahead and creates the internal topic in the same way [2] as
> > > > it would for the FindCoordinator request.
> > > >
> > > > -Agam
> > > >
> > > > [1]
> > > >
> > > >
> > >
> > https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1096
> > > > [2]
> > > >
> > > >
> > >
> > https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1041
> > > >
> > > >
> > > >
> > > > On Mon, Apr 6, 2020 at 8:25 PM Boyang Chen  > >
> > > > wrote:
> > > >
> > > > > Thanks for the various inputs everyone!
> > > > >
> > > > > I think Sonke and Colin's suggestions make sense. The tagged field also
> > > > > avoids unnecessary protocol changes for affected requests. Will add it
> > > > > to the header. As for the verification, I'm not sure whether it's
> > > > > necessary to require a higher permission level, as it is just an
> > > > > ignorable field?
> > > > >
> > > > > 

[jira] [Resolved] (KAFKA-9838) Add additional log concurrency test cases

2020-04-15 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9838.

Resolution: Fixed

> Add additional log concurrency test cases
> -
>
> Key: KAFKA-9838
> URL: https://issues.apache.org/jira/browse/KAFKA-9838
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> A couple of recent bug fixes affecting log read semantics were due to race 
> conditions with concurrent operations: see KAFKA-9807 and KAFKA-9835. We need 
> better testing of concurrent operations on the log to know if there are 
> additional problems and to prevent future regressions.
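>
> A minimal sketch of the kind of test meant here (the log.append/log.read calls
> stand in for the real Log API and are heavily simplified):
>
>     // Hedged sketch: run appends and reads concurrently and fail if a reader
>     // ever observes an inconsistent view of the log.
>     ExecutorService pool = Executors.newFixedThreadPool(2);
>     Future<?> writer = pool.submit(() -> {
>         for (int i = 0; i < 10_000; i++)
>             log.append(record(i));
>     });
>     Future<?> reader = pool.submit(() -> {
>         for (int i = 0; i < 10_000; i++)
>             assertConsistent(log.read(log.logEndOffset()));
>     });
>     writer.get();
>     reader.get();
>     pool.shutdown();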



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-15 Thread Colin McCabe
Hi Ismael,

I agree that sending these requests through the controller will not work during 
the periods when there is no controller.  However, those periods should be 
short -- otherwise we have bigger problems in the cluster.

These requests are very infrequent because they are administrative operations.  
Basically, the affected operations are changing ACLs, changing dynamic 
configurations, and changing quotas.
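
For reference, a short sketch of those operations using the Java Admin client
(the method names are from the public Admin interface, though exact signatures
vary by version, and the argument values here are made up):

    // Hedged sketch: the administrative mutations KIP-590 routes via the controller.
    try (Admin admin = Admin.create(props)) {
        admin.createAcls(aclBindings).all().get();                // ACL changes
        admin.incrementalAlterConfigs(configChanges).all().get(); // dynamic configs
        admin.alterClientQuotas(quotaAlterations).all().get();    // quota changes
    }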

best,
Colin


On Wed, Apr 15, 2020, at 15:25, Ismael Juma wrote:
> Hi Boyang,
> 
> Thanks for the KIP. Have we considered that this reduces availability for
> these operations since we have a single Controller instead of the ZK quorum?
> 
> Ismael
> 
> On Fri, Apr 3, 2020 at 4:45 PM Boyang Chen 
> wrote:
> 
> > Hey all,
> >
> > I would like to start off the discussion for KIP-590, a follow-up
> > initiative after KIP-500:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-590%3A+Redirect+Zookeeper+Mutation+Protocols+to+The+Controller
> >
> > This KIP proposes to migrate existing Zookeeper mutation paths, including
> > configuration, security and quota changes, to controller-only by always
> > routing these alterations to the controller.
> >
> > Let me know your thoughts!
> >
> > Best,
> > Boyang
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4437

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Eliminate unnecessary partition lookups (#8484)

[github] KAFKA-9864: Avoid expensive QuotaViolationException usage (#8477)

[github] MINOR: Serialize state change logs for handling LeaderAndIsr and


--
[...truncated 6.05 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources 

[VOTE] KIP-594: Expose output topic names from TopologyTestDriver

2020-04-15 Thread Andy Coates
Hey all,

I would like to start the vote for KIP-594.
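
For context, the KIP boils down to a small read-only accessor on the test
driver -- roughly like the sketch below (producedTopicNames() is the accessor
proposed in the KIP; the final signature may differ):

    // Hedged sketch: inspect which output topics the driver produced records to.
    TopologyTestDriver driver = new TopologyTestDriver(topology, props);
    // ... pipe some records through the topology ...
    Set<String> outputTopics = driver.producedTopicNames();
    driver.close();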

Thanks,

Andy


updates on Kafka Summit in 2020

2020-04-15 Thread Jun Rao
Hi, Everyone,

Here is an update on the upcoming Kafka Summit events in 2020.

1. Unfortunately, Kafka Summit London, originally planned on Apr 27/28, has
been cancelled due to COVID-19.

2. Kafka Summit Austin (Aug 24/25) is still on. The CFP (
https://events.kafka-summit.org/kafka-summit-austin-2020) is open until May
1.

Thanks,

Jun


[jira] [Resolved] (KAFKA-9779) Add version 2.5 to streams system tests

2020-04-15 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9779.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Add version 2.5 to streams system tests
> ---
>
> Key: KAFKA-9779
> URL: https://issues.apache.org/jira/browse/KAFKA-9779
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, system tests
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Jun Rao
Hi, Kowshik,

Looks good to me now. Just a couple of minor things below.

200. In the validation section, there is still the text  "*from*
{"max_version_level":
X} *to* {"max_version_level": X’}". It seems that it should say "from X to
Y"?

110. Could we add that we need to document the bumped version of each
feature in the upgrade section of a release?

Thanks,

Jun

On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thank you for the suggestion! I have updated the KIP, please find my
> response below.
>
> > 200. I guess you are saying only when the allowDowngrade field is set,
> the
> > finalized feature version can go backward. Otherwise, it can only go up.
> > That makes sense. It would be useful to make that clear when explaining
> > the usage of the allowDowngrade field. In the validation section, we
> have  "
> > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> it
> > seems that we need to mention Y there.
>
> (Kowshik): Great point! Yes, that is correct. Done, I have updated the
> validations
> section explaining the above. Here is a link to this section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>
>
> Cheers,
> Kowshik
>
>
>
>
> On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > 200. I guess you are saying only when the allowDowngrade field is set,
> the
> > finalized feature version can go backward. Otherwise, it can only go up.
> > That makes sense. It would be useful to make that clear when explaining
> > the usage of the allowDowngrade field. In the validation section, we have
> > "
> > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> it
> > seems that we need to mention Y there.
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam <
> kpraka...@confluent.io>
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Great question! Please find my response below.
> > >
> > > > 200. My understanding is that if the CLI tool passes the
> > > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> > > > I was wondering how the controller remembers this, since it can be
> > > > restarted over time?
> > >
> > > (Kowshik): The purpose of the flag was just to restrict the user intent
> > > for a specific request.
> > > It seems to me that, to avoid confusion, I could call the flag
> > > `--try-downgrade` instead.
> > > Then it is clear that the controller just has to consider the ask from
> > > the user as an explicit request to attempt a downgrade.
> > >
> > > The flag does not act as an override on the controller's decision making,
> > > which decides whether a feature is downgradable (these decisions on whether
> > > to allow a feature to be downgraded from a specific version level can be
> > > embedded in the controller code).
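> > >
> > > For illustration, a rough sketch of the controller-side validation (names
> > > are hypothetical, not the final KIP-584 code):
> > >
> > >     // Hedged sketch: reject a downgrade unless the request explicitly
> > >     // asks for it, and apply controller-side safety rules either way.
> > >     short current = finalizedMaxVersionLevel(featureName);
> > >     short requested = update.maxVersionLevel();
> > >     if (requested < current && !update.allowDowngrade()) {
> > >         throw new InvalidRequestException("Downgrade requires the flag");
> > >     }
> > >     if (requested < current && !isSafeDowngrade(featureName, current, requested)) {
> > >         throw new FeatureUpdateFailedException("Unsafe downgrade rejected");
> > >     }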
> > >
> > > Please let me know what you think.
> > > Sorry if I misunderstood the original question.
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > >
> > > On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
> > >
> > > > Hi, Kowshik,
> > > >
> > > > Thanks for the reply. Makes sense. Just one more question.
> > > >
> > > > 200. My understanding is that if the CLI tool passes the
> > > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> > > > I was wondering how the controller remembers this, since it can be
> > > > restarted over time?
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam <
> > kpraka...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Thanks a lot for the feedback and the questions!
> > > > > Please find my response below.
> > > > >
> > > > > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field.
> It
> > > > seems
> > > > > > that field needs to be persisted somewhere in ZK?
> > > > >
> > > > > (Kowshik): Great question! Below is my explanation. Please help me
> > > > > understand,
> > > > > if you feel there are cases where we would need to still persist it
> > in
> > > > ZK.
> > > > >
> > > > > Firstly, I have updated the KIP with my thoughts, under the
> > > > > 'guidelines' section:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
> > > > >
> > > > > The allowDowngrade boolean field is just to restrict the user intent,
> > > > > and to remind them to double-check their intent before proceeding. It
> > > > > should be set to true by the user in a request, only when the user
> > > > > intent is to forcefully

Jenkins build is back to normal : kafka-2.5-jdk8 #94

2020-04-15 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-15 Thread Ismael Juma
Hi Boyang,

Thanks for the KIP. Have we considered that this reduces availability for
these operations since we have a single Controller instead of the ZK quorum?

Ismael

On Fri, Apr 3, 2020 at 4:45 PM Boyang Chen 
wrote:

> Hey all,
>
> I would like to start off the discussion for KIP-590, a follow-up
> initiative after KIP-500:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-590%3A+Redirect+Zookeeper+Mutation+Protocols+to+The+Controller
>
> This KIP proposes to migrate existing Zookeeper mutation paths, including
> configuration, security and quota changes, to controller-only by always
> routing these alterations to the controller.
>
> Let me know your thoughts!
>
> Best,
> Boyang
>


Re: [RESULTS] [VOTE] 2.5.0 RC3

2020-04-15 Thread Israel Ekpo
Thanks for doing a fantastic job with this release, David.

On Tue, Apr 14, 2020 at 11:15 AM David Arthur  wrote:

> Thanks everyone! The vote passes with 7 +1 votes (4 of which are binding)
> and no 0 or -1 votes.
>
> 4 binding +1 votes from PMC members Manikumar, Jun, Colin, and Matthias
> 1 committer +1 vote from Bill
> 2 community +1 votes from Israel Ekpo and Jonathan Santilli
>
> Voting email thread
>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/202004.mbox/%3CCA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com%3E
>
> I'll continue with the release steps and send out the announcement email
> soon.
>
> -David
>
> On Tue, Apr 14, 2020 at 7:17 AM Jonathan Santilli <
> jonathansanti...@gmail.com> wrote:
>
> > Hello,
> >
> > I have run the tests (passed)
> > Followed the quick start guide with Scala 2.12 (success)
> > +1
> >
> >
> > Thanks!
> > --
> > Jonathan
> >
> > On Tue, Apr 14, 2020 at 1:16 AM Colin McCabe  wrote:
> >
> >> +1 (binding)
> >>
> >> verified checksums
> >> ran unitTest
> >> ran check
> >>
> >> best,
> >> Colin
> >>
> >> On Tue, Apr 7, 2020, at 21:03, David Arthur wrote:
> >> > Hello Kafka users, developers and client-developers,
> >> >
> >> > This is the fourth candidate for release of Apache Kafka 2.5.0.
> >> >
> >> > * TLS 1.3 support (1.2 is now the default)
> >> > * Co-groups for Kafka Streams
> >> > * Incremental rebalance for Kafka Consumer
> >> > * New metrics for better operational insight
> >> > * Upgrade Zookeeper to 3.5.7
> >> > * Deprecate support for Scala 2.11
> >> >
> >> > Release notes for the 2.5.0 release:
> >> >
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
> >> >
> >> > *** Please download, test and vote by Friday April 10th 5pm PT
> >> >
> >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >> > https://kafka.apache.org/KEYS
> >> >
> >> > * Release artifacts to be voted upon (source and binary):
> >> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
> >> >
> >> > * Maven artifacts to be voted upon:
> >> >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >> >
> >> > * Javadoc:
> >> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
> >> >
> >> > * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> >> > https://github.com/apache/kafka/releases/tag/2.5.0-rc3
> >> >
> >> > * Documentation:
> >> > https://kafka.apache.org/25/documentation.html
> >> >
> >> > * Protocol:
> >> > https://kafka.apache.org/25/protocol.html
> >> >
> >> > Successful Jenkins builds to follow
> >> >
> >> > Thanks!
> >> > David
> >> >
> >>
> >> > --
> >> >  You received this message because you are subscribed to the Google
> >> Groups "kafka-clients" group.
> >> >  To unsubscribe from this group and stop receiving emails from it,
> send
> >> an email to kafka-clients+unsubscr...@googlegroups.com.
> >> >  To view this discussion on the web visit
> >>
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com
> >> <
> >>
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com?utm_medium=email_source=footer
> >> >.
> >>
> >
> >
> > --
> > Santilli Jonathan
> >
>
>
> --
> David Arthur
>


Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Randall Hauch
Thanks, David!

Congratulations to the whole AK community, and thanks to everyone that
contributed!

On Wed, Apr 15, 2020 at 4:47 PM Sönke Liebau
 wrote:

> Thanks David!!
>
>
> On Wed, 15 Apr 2020 at 23:07, Bill Bejeck  wrote:
>
> > David,
> >
> > Thanks for running the release!
> >
> > -Bill
> >
> > On Wed, Apr 15, 2020 at 4:45 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks for running the release David!
> > >
> > > -Matthias
> > >
> > > On 4/15/20 1:15 PM, David Arthur wrote:
> > > > The Apache Kafka community is pleased to announce the release for
> > Apache
> > > > Kafka 2.5.0
> > > >
> > > > This release includes many new features, including:
> > > >
> > > > * TLS 1.3 support (1.2 is now the default)
> > > > * Co-groups for Kafka Streams
> > > > * Incremental rebalance for Kafka Consumer
> > > > * New metrics for better operational insight
> > > > * Upgrade Zookeeper to 3.5.7
> > > > * Deprecate support for Scala 2.11
> > > >
> > > > All of the changes in this release can be found in the release notes:
> > > > https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
> > > >
> > > >
> > > > You can download the source and binary release (Scala 2.12 and 2.13)
> > > from:
> > > > https://kafka.apache.org/downloads#2.5.0
> > > >
> > > >
> > >
> >
> ---
> > > >
> > > >
> > > > Apache Kafka is a distributed streaming platform with four core APIs:
> > > >
> > > >
> > > > ** The Producer API allows an application to publish a stream of records
> > > > to one or more Kafka topics.
> > > >
> > > > ** The Consumer API allows an application to subscribe to one or more
> > > > topics and process the stream of records produced to them.
> > > >
> > > > ** The Streams API allows an application to act as a stream
> processor,
> > > > consuming an input stream from one or more topics and producing an
> > > > output stream to one or more output topics, effectively transforming
> > the
> > > > input streams to output streams.
> > > >
> > > > ** The Connector API allows building and running reusable producers
> or
> > > > consumers that connect Kafka topics to existing applications or data
> > > > systems. For example, a connector to a relational database might
> > > > capture every change to a table.
> > > >
> > > >
> > > > With these APIs, Kafka can be used for two broad classes of
> > application:
> > > >
> > > > ** Building real-time streaming data pipelines that reliably get data
> > > > between systems or applications.
> > > >
> > > > ** Building real-time streaming applications that transform or react
> > > > to the streams of data.
> > > >
> > > >
> > > > Apache Kafka is in use at large and small companies worldwide,
> > including
> > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > Rabobank,
> > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > >
> > > > A big thank you to the following 108 contributors to this release!
> > > >
> > > > A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev,
> Alex
> > > > Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna Povzner,
> > > Antony
> > > > Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob
> > > Barrett,
> > > > Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji,
> > > Chia-Ping
> > > > Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P.
> Mccabe,
> > > > Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David
> > > Arthur,
> > > > David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah, Edoardo
> > > Comar,
> > > > Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris,
> > Gunnar
> > > > Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein
> > > Torabi,
> > > > huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson,
> jiameixie,
> > > John
> > > > Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar,
> Kevin
> > > Lu,
> > > > Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani
> Kokhreidze,
> > > > Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong,
> Matthias
> > J.
> > > > Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> > > > mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> > > > nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc,
> > Omkar
> > > > Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
> > > > Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee,
> > Simon
> > > > Clark, Stanislav Kozlovski, Svend Vanderveken, Sönke Liebau, Ted Yu,
> > Tom
> > > > Bentley, Tomislav, Tu Tran, Tu V. Tran, uttpal, Vikas Singh, Viktor
> > > > Somogyi, vinoth chandar, wcarlson5, Will James, Xin Wang, zzccctv
> > > >
> > > > We welcome your help and feedback. For more information on how to
> > > > report problems, and to get involved, visit the project website at
> > > > https://kafka.apache.org/
> > > >
> > > 

Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Sönke Liebau
Thanks David!!


On Wed, 15 Apr 2020 at 23:07, Bill Bejeck  wrote:

> David,
>
> Thanks for running the release!
>
> -Bill
>
> On Wed, Apr 15, 2020 at 4:45 PM Matthias J. Sax  wrote:
>
> > Thanks for running the release David!
> >
> > -Matthias
> >
> > On 4/15/20 1:15 PM, David Arthur wrote:
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 2.5.0
> > >
> > > This release includes many new features, including:
> > >
> > > * TLS 1.3 support (1.2 is now the default)
> > > * Co-groups for Kafka Streams
> > > * Incremental rebalance for Kafka Consumer
> > > * New metrics for better operational insight
> > > * Upgrade Zookeeper to 3.5.7
> > > * Deprecate support for Scala 2.11
> > >
> > > All of the changes in this release can be found in the release notes:
> > > https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
> > >
> > >
> > > You can download the source and binary release (Scala 2.12 and 2.13)
> > from:
> > > https://kafka.apache.org/downloads#2.5.0
> > >
> > >
> >
> ---
> > >
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > >
> > > ** The Producer API allows an application to publish a stream of records
> > > to one or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> > > output stream to one or more output topics, effectively transforming
> the
> > > input streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might
> > > capture every change to a table.
> > >
> > >
> > > With these APIs, Kafka can be used for two broad classes of
> application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react
> > > to the streams of data.
> > >
> > >
> > > Apache Kafka is in use at large and small companies worldwide,
> including
> > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > >
> > > A big thank you to the following 108 contributors to this release!
> > >
> > > A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev, Alex
> > > Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna Povzner,
> > Antony
> > > Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob
> > Barrett,
> > > Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji,
> > Chia-Ping
> > > Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P. Mccabe,
> > > Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David
> > Arthur,
> > > David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah, Edoardo
> > Comar,
> > > Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris,
> Gunnar
> > > Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein
> > Torabi,
> > > huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson, jiameixie,
> > John
> > > Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar, Kevin
> > Lu,
> > > Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani Kokhreidze,
> > > Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong, Matthias
> J.
> > > Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> > > mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> > > nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc,
> Omkar
> > > Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
> > > Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee,
> Simon
> > > Clark, Stanislav Kozlovski, Svend Vanderveken, Sönke Liebau, Ted Yu,
> Tom
> > > Bentley, Tomislav, Tu Tran, Tu V. Tran, uttpal, Vikas Singh, Viktor
> > > Somogyi, vinoth chandar, wcarlson5, Will James, Xin Wang, zzccctv
> > >
> > > We welcome your help and feedback. For more information on how to
> > > report problems, and to get involved, visit the project website at
> > > https://kafka.apache.org/
> > >
> > > Thank you!
> > >
> > >
> > > Regards,
> > > David Arthur
> > >
> >
> >
>


-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


[jira] [Created] (KAFKA-9876) Implement Raft Protocol for Metadata Quorum

2020-04-15 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9876:
--

 Summary: Implement Raft Protocol for Metadata Quorum
 Key: KAFKA-9876
 URL: https://issues.apache.org/jira/browse/KAFKA-9876
 Project: Kafka
  Issue Type: New Feature
Reporter: Jason Gustafson
Assignee: Jason Gustafson


This tracks the completion of the Raft Protocol specified in KIP-595: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum#KIP-595:ARaftProtocolfortheMetadataQuorum-ToolingSupport.
 If/when the KIP is approved by the community, we will create smaller sub-tasks 
to track overall progress.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi all,

Thank you very much for all the insightful feedback!
How do you feel about the KIP?
Do the scope and the write-up look OK to you, and is it time to call a
vote?


Cheers,
Kowshik

On Wed, Apr 15, 2020 at 1:08 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thank you for the suggestion! I have updated the KIP, please find my
> response below.
>
> > 200. I guess you are saying only when the allowDowngrade field is set,
> the
> > finalized feature version can go backward. Otherwise, it can only go up.
> > That makes sense. It would be useful to make that clear when explaining
> > the usage of the allowDowngrade field. In the validation section, we
> have  "
> > /features' from {"max_version_level": X} to {"max_version_level": X’}",
> it
> > seems that we need to mention Y there.
>
> (Kowshik): Great point! Yes, that is correct. Done, I have updated the
> validations
> section explaining the above. Here is a link to this section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations
>
>
> Cheers,
> Kowshik
>
>
>
>
> On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:
>
>> Hi, Kowshik,
>>
>> 200. I guess you are saying only when the allowDowngrade field is set, the
>> finalized feature version can go backward. Otherwise, it can only go up.
>> That makes sense. It would be useful to make that clear when explaining
>> the usage of the allowDowngrade field. In the validation section, we
>> have  "
>> /features' from {"max_version_level": X} to {"max_version_level": X’}", it
>> seems that we need to mention Y there.
>>
>> Thanks,
>>
>> Jun
>>
>> On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam > >
>> wrote:
>>
>> > Hi Jun,
>> >
>> > Great question! Please find my response below.
>> >
>> > > 200. My understanding is that if the CLI tool passes the
>> > > '--allow-downgrade' flag when updating a specific feature, then a future
>> > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
>> > > I was wondering how the controller remembers this, since it can be
>> > > restarted over time?
>> >
>> > (Kowshik): The purpose of the flag was just to restrict the user intent
>> > for a specific request.
>> > It seems to me that, to avoid confusion, I could call the flag
>> > `--try-downgrade` instead.
>> > Then it is clear that the controller just has to consider the ask from
>> > the user as an explicit request to attempt a downgrade.
>> >
>> > The flag does not act as an override on the controller's decision making,
>> > which decides whether a feature is downgradable (these decisions on whether
>> > to allow a feature to be downgraded from a specific version level can be
>> > embedded in the controller code).
>> >
>> > Please let me know what you think.
>> > Sorry if I misunderstood the original question.
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> >
>> > On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
>> >
>> > > Hi, Kowshik,
>> > >
>> > > Thanks for the reply. Makes sense. Just one more question.
>> > >
>> > > 200. My understanding is that if the CLI tool passes the
>> > > '--allow-downgrade' flag when updating a specific feature, then a future
>> > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
>> > > I was wondering how the controller remembers this, since it can be
>> > > restarted over time?
>> > >
>> > > Jun
>> > >
>> > >
>> > > On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam <
>> kpraka...@confluent.io
>> > >
>> > > wrote:
>> > >
>> > > > Hi Jun,
>> > > >
>> > > > Thanks a lot for the feedback and the questions!
>> > > > Please find my response below.
>> > > >
>> > > > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field.
>> It
>> > > seems
>> > > > > that field needs to be persisted somewhere in ZK?
>> > > >
>> > > > (Kowshik): Great question! Below is my explanation. Please help me
>> > > > understand,
>> > > > if you feel there are cases where we would need to still persist it
>> in
>> > > ZK.
>> > > >
>> > > > Firstly, I have updated the KIP with my thoughts, under the
>> > > > 'guidelines' section:
>> > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
>> > > >
>> > > > The allowDowngrade boolean field is just to restrict the user intent,
>> > > > and to remind them to double-check their intent before proceeding. It
>> > > > should be set to true by the user in a request, only when the user
>> > > > intent is to forcefully "attempt" a downgrade of a specific feature's
>> > > > max version level, to the provided value in the request.
>> > > >
>> > > > We can extend this safeguard. The controller (on its end) can maintain
>> > > > rules in the code that, for safety reasons, would outright reject
>> 

Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Bill Bejeck
David,

Thanks for running the release!

-Bill

On Wed, Apr 15, 2020 at 4:45 PM Matthias J. Sax  wrote:

> Thanks for running the release David!
>
> -Matthias
>
> On 4/15/20 1:15 PM, David Arthur wrote:
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 2.5.0
> >
> > This release includes many new features, including:
> >
> > * TLS 1.3 support (1.2 is now the default)
> > * Co-groups for Kafka Streams
> > * Incremental rebalance for Kafka Consumer
> > * New metrics for better operational insight
> > * Upgrade Zookeeper to 3.5.7
> > * Deprecate support for Scala 2.11
> >
> > All of the changes in this release can be found in the release notes:
> > https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
> >
> >
> > You can download the source and binary release (Scala 2.12 and 2.13)
> from:
> > https://kafka.apache.org/downloads#2.5.0
> >
> >
> ---
> >
> >
> > Apache Kafka is a distributed streaming platform with four core APIs:
> >
> >
> > ** The Producer API allows an application to publish a stream of records to
> > one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an
> > output stream to one or more output topics, effectively transforming the
> > input streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might
> > capture every change to a table.
> >
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react
> > to the streams of data.
> >
> >
> > Apache Kafka is in use at large and small companies worldwide, including
> > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> >
> > A big thank you to the following 108 contributors to this release!
> >
> > A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev, Alex
> > Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna Povzner,
> Antony
> > Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob
> Barrett,
> > Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji,
> Chia-Ping
> > Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P. Mccabe,
> > Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David
> Arthur,
> > David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah, Edoardo
> Comar,
> > Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris, Gunnar
> > Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein
> Torabi,
> > huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson, jiameixie,
> John
> > Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar, Kevin
> Lu,
> > Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani Kokhreidze,
> > Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong, Matthias J.
> > Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> > mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> > nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc, Omkar
> > Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
> > Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee, Simon
> > Clark, Stanislav Kozlovski, Svend Vanderveken, Sönke Liebau, Ted Yu, Tom
> > Bentley, Tomislav, Tu Tran, Tu V. Tran, uttpal, Vikas Singh, Viktor
> > Somogyi, vinoth chandar, wcarlson5, Will James, Xin Wang, zzccctv
> >
> > We welcome your help and feedback. For more information on how to
> > report problems, and to get involved, visit the project website at
> > https://kafka.apache.org/
> >
> > Thank you!
> >
> >
> > Regards,
> > David Arthur
> >
>
>


Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Matthias J. Sax
Thanks for running the release David!

-Matthias

On 4/15/20 1:15 PM, David Arthur wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.5.0
> 
> This release includes many new features, including:
> 
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
> 
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
> 
> 
> You can download the source and binary release (Scala 2.12 and 2.13) from:
> https://kafka.apache.org/downloads#2.5.0
> 
> ---
> 
> 
> Apache Kafka is a distributed streaming platform with four core APIs:
> 
> 
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
> 
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
> 
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
> 
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
> 
> 
> With these APIs, Kafka can be used for two broad classes of application:
> 
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
> 
> ** Building real-time streaming applications that transform or react
> to the streams of data.
> 
> 
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
> 
> A big thank you to the following 108 contributors to this release!
> 
> A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev, Alex
> Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna Povzner, Antony
> Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob Barrett,
> Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji, Chia-Ping
> Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P. Mccabe,
> Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David Arthur,
> David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah, Edoardo Comar,
> Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris, Gunnar
> Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein Torabi,
> huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson, jiameixie, John
> Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar, Kevin Lu,
> Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani Kokhreidze,
> Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong, Matthias J.
> Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc, Omkar
> Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
> Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee, Simon
> Clark, Stanislav Kozlovski, Svend Vanderveken, Sönke Liebau, Ted Yu, Tom
> Bentley, Tomislav, Tu Tran, Tu V. Tran, uttpal, Vikas Singh, Viktor
> Somogyi, vinoth chandar, wcarlson5, Will James, Xin Wang, zzccctv
> 
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
> 
> Thank you!
> 
> 
> Regards,
> David Arthur
> 





Re: [ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread Boyang Chen
Thanks David for taking this initiative, great work!

On Wed, Apr 15, 2020 at 1:15 PM David Arthur  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.5.0
>
> This release includes many new features, including:
>
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html
>
>
> You can download the source and binary release (Scala 2.12 and 2.13) from:
> https://kafka.apache.org/downloads#2.5.0
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 108 contributors to this release!
>
> A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev, Alex
> Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna Povzner, Antony
> Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob Barrett,
> Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji, Chia-Ping
> Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P. Mccabe,
> Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David Arthur,
> David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah, Edoardo Comar,
> Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris, Gunnar
> Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein Torabi,
> huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson, jiameixie, John
> Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar, Kevin Lu,
> Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani Kokhreidze,
> Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong, Matthias J.
> Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
> mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
> nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc, Omkar
> Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
> Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee, Simon
> Clark, Stanislav Kozlovski, Svend Vanderveken, Sönke Liebau, Ted Yu, Tom
> Bentley, Tomislav, Tu Tran, Tu V. Tran, uttpal, Vikas Singh, Viktor
> Somogyi, vinoth chandar, wcarlson5, Will James, Xin Wang, zzccctv
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
> David Arthur
>


[ANNOUNCE] Apache Kafka 2.5.0

2020-04-15 Thread David Arthur
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.5.0

This release includes many new features, including:

* TLS 1.3 support (1.2 is now the default)
* Co-groups for Kafka Streams
* Incremental rebalance for Kafka Consumer
* New metrics for better operational insight
* Upgrade Zookeeper to 3.5.7
* Deprecate support for Scala 2.11

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/2.5.0/RELEASE_NOTES.html


You can download the source and binary release (Scala 2.12 and 2.13) from:
https://kafka.apache.org/downloads#2.5.0

---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 108 contributors to this release!

A. Sophie Blee-Goldman, Adam Bellemare, Alaa Zbair, Alex Kokachev, Alex
Leung, Alex Mironov, Alice, Andrew Olson, Andy Coates, Anna Povzner, Antony
Stubbs, Arvind Thirunarayanan, belugabehr, bill, Bill Bejeck, Bob Barrett,
Boyang Chen, Brian Bushree, Brian Byrne, Bruno Cadonna, Bryan Ji, Chia-Ping
Tsai, Chris Egerton, Chris Pettitt, Chris Stromberger, Colin P. Mccabe,
Colin Patrick McCabe, commandini, Cyrus Vafadari, Dae-Ho Kim, David Arthur,
David Jacot, David Kim, David Mao, dengziming, Dhruvil Shah, Edoardo Comar,
Eduardo Pinto, Fábio Silva, gkomissarov, Grant Henke, Greg Harris, Gunnar
Morling, Guozhang Wang, Harsha Laxman, high.lee, highluck, Hossein Torabi,
huxi, huxihx, Ismael Juma, Ivan Yurchenko, Jason Gustafson, jiameixie, John
Roesler, José Armando García Sancio, Jukka Karvanen, Karan Kumar, Kevin Lu,
Konstantine Karantasis, Lee Dongjin, Lev Zemlyanov, Levani Kokhreidze,
Lucas Bradstreet, Manikumar Reddy, Mathias Kub, Matthew Wong, Matthias J.
Sax, Michael Gyarmathy, Michael Viamari, Mickael Maison, Mitch,
mmanna-sapfgl, NanerLee, Narek Karapetian, Navinder Pal Singh Brar,
nicolasguyomar, Nigel Liang, NIkhil Bhatia, Nikolay, ning2008wisc, Omkar
Mestry, Rajini Sivaram, Randall Hauch, ravowlga123, Raymond Ng, Ron
Dagostino, sainath batthala, Sanjana Kaundinya, Scott, Seungha Lee, Simon
Clark, Stanislav Kozlovski, Svend Vanderveken, Sönke Liebau, Ted Yu, Tom
Bentley, Tomislav, Tu Tran, Tu V. Tran, uttpal, Vikas Singh, Viktor
Somogyi, vinoth chandar, wcarlson5, Will James, Xin Wang, zzccctv

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!


Regards,
David Arthur


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Thank you for the suggestion! I have updated the KIP, please find my
response below.

> 200. I guess you are saying only when the allowDowngrade field is set, the
> finalized feature version can go backward. Otherwise, it can only go up.
> That makes sense. It would be useful to make that clear when explaining
> the usage of the allowDowngrade field. In the validation section, we
have  "
> /features' from {"max_version_level": X} to {"max_version_level": X’}", it
> seems that we need to mention Y there.

(Kowshik): Great point! Yes, that is correct. Done, I have updated the
validations
section explaining the above. Here is a link to this section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Validations


Cheers,
Kowshik




On Wed, Apr 15, 2020 at 11:05 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> 200. I guess you are saying only when the allowDowngrade field is set, the
> finalized feature version can go backward. Otherwise, it can only go up.
> That makes sense. It would be useful to make that clear when explaining
> the usage of the allowDowngrade field. In the validation section, we have
> "
> /features' from {"max_version_level": X} to {"max_version_level": X’}", it
> seems that we need to mention Y there.
>
> Thanks,
>
> Jun
>
> On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Great question! Please find my response below.
> >
> > > 200. My understanding is that if the CLI tool passes the
> > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> > > I was wondering how the controller remembers this, since it can be
> > > restarted over time?
> >
> > (Kowshik): The purpose of the flag was just to restrict the user intent
> > for a specific request.
> > It seems to me that, to avoid confusion, I could call the flag
> > `--try-downgrade` instead.
> > Then it is clear that the controller just has to consider the ask from
> > the user as an explicit request to attempt a downgrade.
> >
> > The flag does not act as an override on the controller's decision making,
> > which decides whether a feature is downgradable (these decisions on whether
> > to allow a feature to be downgraded from a specific version level can be
> > embedded in the controller code).
> >
> > Please let me know what you think.
> > Sorry if I misunderstood the original question.
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> > On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
> >
> > > Hi, Kowshik,
> > >
> > > Thanks for the reply. Makes sense. Just one more question.
> > >
> > > 200. My understanding is that if the CLI tool passes the
> > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> > > I was wondering how the controller remembers this, since it can be
> > > restarted over time?
> > >
> > > Jun
> > >
> > >
> > > On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam <
> kpraka...@confluent.io
> > >
> > > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Thanks a lot for the feedback and the questions!
> > > > Please find my response below.
> > > >
> > > > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It
> > > seems
> > > > > that field needs to be persisted somewhere in ZK?
> > > >
> > > > (Kowshik): Great question! Below is my explanation. Please help me
> > > > understand,
> > > > if you feel there are cases where we would need to still persist it
> in
> > > ZK.
> > > >
> > > > Firstly, I have updated the KIP with my thoughts, under the
> > > > 'guidelines' section:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
> > > >
> > > > The allowDowngrade boolean field is just to restrict the user intent,
> > > > and to remind them to double-check their intent before proceeding. It
> > > > should be set to true by the user in a request, only when the user
> > > > intent is to forcefully "attempt" a downgrade of a specific feature's
> > > > max version level, to the provided value in the request.
> > > >
> > > > We can extend this safeguard. The controller (on its end) can maintain
> > > > rules in the code that, for safety reasons, would outright reject certain
> > > > downgrades from a specific max_version_level for a specific feature. Such
> > > > rejections may happen depending on the feature being downgraded, and from
> > > > what version level.
> > > >
> > > > The CLI tool only allows a downgrade attempt in conjunction with specific
> > > > flags and sub-commands. For example, in the CLI tool, if the user uses the
> > > > 'downgrade-all' 

Build failed in Jenkins: kafka-trunk-jdk8 #4436

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[gwen] MINOR: avoid autoboxing in FetchRequest.PartitionData.equals


--
[...truncated 6.05 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task 

Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-04-15 Thread Connor Penhale
Hi Chris,

I can ask the customer if they can disclose any additional information. I 
provided the information around "PCI-DSS" to give the community a flavor of the 
type of environment the customer was operating in. The current mode is /not/ 
insecure; I would agree with this. I would be willing to agree that my customer 
has particular security audit requirements that go above and beyond what most 
environments would consider reasonable. Are you comfortable with that language?

" enable.rest.response.stack.traces" works great for me!

I created a new class in the example PR because I wanted the highest chance of 
not gunking up the works by stepping on toes in an important class. I figured 
I'd be reducing risk by creating an alternative implementing class. In 
retrospect, and now that I'm getting a first-hand look at Kafka's community 
process, that is probably unnecessary. Additionally, I would agree with your 
statement that we should modify the existing ExceptionMapper to avoid behavior 
divergence in subsequent releases and ensure this feature's particular scope is 
easy to maintain.
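
For reference, here is a minimal sketch of what the flag-driven behavior could 
look like in a plain JAX-RS mapper. All names are illustrative and the config 
wiring is assumed; this is not Connect's actual mapper class:

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;

public class SuppressingExceptionMapper implements ExceptionMapper<Exception> {
    // Assumed to be wired from the proposed worker config,
    // e.g. enable.rest.response.stack.traces
    private final boolean includeStackTraces;

    public SuppressingExceptionMapper(boolean includeStackTraces) {
        this.includeStackTraces = includeStackTraces;
    }

    @Override
    public Response toResponse(Exception e) {
        // Keep the status code either way; include exception detail only
        // when the operator has opted in.
        String body = includeStackTraces
            ? e.getMessage()
            : "Request failed; detail suppressed by worker configuration";
        return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                       .entity(body)
                       .build();
    }
}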

Thanks!
Connor

On 4/15/20, 1:17 PM, "Colin McCabe"  wrote:

Hi Connor,

I still would like to hear more about whether this feature is required for 
PCI-DSS or any other security certification.  Nobody I talked to seemed to 
think that it was-- if there are certifications that would require this, it 
would be nice to know.  However, I don't object to implementing this as long as 
we don't imply that the current mode is insecure.

What do you think about using "enable.rest.response.stack.traces" as the 
config name?  It seems like that  makes it clearer that it's a boolean value.

It's not really necessary to describe the internal implementation in the 
KIP, but since you mentioned it, it's probably worth considering using the 
current ExceptionMapper class with a different configuration rather than 
creating a new one.

best,
Colin


On Mon, Apr 13, 2020, at 09:04, Connor Penhale wrote:
> Hi Chris!
>
> RE: SSL, indeed, the issue is not that the information is not
> encrypted, but that there is no authorization layer.
>
> I'll be sure to edit the KIP as we continue discussion!
>
> RE: the 200 response you highlighted, great catch! I'll work with my
> customer and get back to you on their audit team's intention! I'm
> fairly certain I know the answer, but I need to be sure before I speak
> for them.
>
> Thanks!
> Connor
>
> On 4/8/20, 11:27 PM, "Christopher Egerton"  wrote:
>
> Hi Connor,
>
> Just a few more remarks!
>
> I noticed that you said "Kafka Connect was passing these exceptions 
without
> authentication." For what it's worth, the Connect REST API can be 
secured
> with TLS out-of-the-box by configuring the worker with the various 
ssl.*
> properties, but that doesn't provide any kind of authorization layer 
to
> provide levels of security depending who the user is. Just pointing 
out in
> case this helps with your use case.
>
> As far as editing the KIP based on discussion goes--it's not only
> acceptable, it's expected :) Ideally, the KIP should be kept 
up-to-date to
> the point where, were it to be accepted at any moment, it would 
accurately
> reflect the changes that would then be made to Kafka. This can be 
relaxed
> if there's rapid iteration or items that are still up for discussion, 
but
> as soon as things settle down it should be updated.
>
> As far as item 4 goes, my question was about exceptions that aren't 
handled
> by the ExceptionMapper, but which are returned as part of the 
response body
> when querying the status of a connector or task that has failed by 
querying
> the /connectors/{name}/status or 
/connectors/{name}/tasks/{taskId}/status
> endpoints. Even if the request is successful and results in an HTTP 
200
> response, the body might contain a stack trace if the connector or 
any of
> its tasks have failed.
>
> For example, I ran an instance of the FileStreamSource connector named
> "file-source" locally and instructed it to consume from a file that it
> lacked permissions to read. When I queried the status of that 
connector by
> issuing a request to /connectors/file-source/status, I got back the
> following response:
>
> {
>   "name": "file-source",
>   "connector": {
> "state": "RUNNING",
> "worker_id": "192.168.86.21:8083"
>   },
>   "tasks": [
> {
>   "id": 0,
>   "state": "FAILED",
>   "worker_id": "192.168.86.21:8083",
>   "trace": "org.apache.kafka.connect.errors.ConnectException:
> java.nio.file.AccessDeniedException: 

Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-04-15 Thread Colin McCabe
Hi Connor,

I still would like to hear more about whether this feature is required for 
PCI-DSS or any other security certification.  Nobody I talked to seemed to 
think that it was-- if there are certifications that would require this, it 
would be nice to know.  However, I don't object to implementing this as long as 
we don't imply that the current mode is insecure.

What do you think about using "enable.rest.response.stack.traces" as the config 
name?  It seems like that  makes it clearer that it's a boolean value.

It's not really necessary to describe the internal implementation in the KIP, 
but since you mentioned it, it's probably worth considering using the current 
ExceptionMapper class with a different configuration rather than creating a new 
one.

best,
Colin


On Mon, Apr 13, 2020, at 09:04, Connor Penhale wrote:
> Hi Chris!
> 
> RE: SSL, indeed, the issue is not that the information is not 
> encrypted, but that there is no authorization layer.
> 
> I'll be sure to edit the KIP as we continue discussion!
> 
> RE: the 200 response you highlighted, great catch! I'll work with my 
> customer and get back to you on their audit team's intention! I'm 
> fairly certain I know the answer, but I need to be sure before I speak 
> for them.
> 
> Thanks!
> Connor
> 
> On 4/8/20, 11:27 PM, "Christopher Egerton"  wrote:
> 
> Hi Connor,
> 
> Just a few more remarks!
> 
> I noticed that you said "Kafka Connect was passing these exceptions 
> without
> authentication." For what it's worth, the Connect REST API can be secured
> with TLS out-of-the-box by configuring the worker with the various ssl.*
> properties, but that doesn't provide any kind of authorization layer to
> provide levels of security depending who the user is. Just pointing out in
> case this helps with your use case.
> 
> As far as editing the KIP based on discussion goes--it's not only
> acceptable, it's expected :) Ideally, the KIP should be kept up-to-date to
> the point where, were it to be accepted at any moment, it would accurately
> reflect the changes that would then be made to Kafka. This can be relaxed
> if there's rapid iteration or items that are still up for discussion, but
> as soon as things settle down it should be updated.
> 
> As far as item 4 goes, my question was about exceptions that aren't 
> handled
> by the ExceptionMapper, but which are returned as part of the response 
> body
> when querying the status of a connector or task that has failed by 
> querying
> the /connectors/{name}/status or /connectors/{name}/tasks/{taskId}/status
> endpoints. Even if the request is successful and results in an HTTP 200
> response, the body might contain a stack trace if the connector or any of
> its tasks have failed.
> 
> For example, I ran an instance of the FileStreamSource connector named
> "file-source" locally and instructed it to consume from a file that it
> lacked permissions to read. When I queried the status of that connector by
> issuing a request to /connectors/file-source/status, I got back the
> following response:
> 
> {
>   "name": "file-source",
>   "connector": {
> "state": "RUNNING",
> "worker_id": "192.168.86.21:8083"
>   },
>   "tasks": [
> {
>   "id": 0,
>   "state": "FAILED",
>   "worker_id": "192.168.86.21:8083",
>   "trace": "org.apache.kafka.connect.errors.ConnectException:
> java.nio.file.AccessDeniedException: test.txt\n\tat
> 
> org.apache.kafka.connect.file.FileStreamSourceTask.poll(FileStreamSourceTask.java:116)\n\tat
> 
> org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:265)\n\tat
> 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)\n\tat
> 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
> 
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
> 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
> 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
> 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
> java.lang.Thread.run(Thread.java:748)\nCaused by:
> java.nio.file.AccessDeniedException: test.txt\n\tat
> 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)\n\tat
> 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n\tat
> 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n\tat
> 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\n\tat
> java.nio.file.Files.newByteChannel(Files.java:361)\n\tat
> java.nio.file.Files.newByteChannel(Files.java:407)\n\tat
> 
> 

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Jun Rao
Hi, Kowshik,

200. I guess you are saying only when the allowDowngrade field is set, the
finalized feature version can go backward. Otherwise, it can only go up.
That makes sense. It would be useful to make that clear when explaining
the usage of the allowDowngrade field. In the validation section, we have  "
/features' from {"max_version_level": X} to {"max_version_level": X’}", it
seems that we need to mention Y there.
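
For reference, the invariant described above reduces to a small check on the 
controller side; a minimal sketch with hypothetical names, not the KIP's 
actual code:

class UpdateFeaturesValidation {
    // A finalized feature's max version level may only move backward when
    // the request explicitly sets allowDowngrade; otherwise it can only go up.
    static void validateVersionLevelChange(short currentLevel,
                                           short requestedLevel,
                                           boolean allowDowngrade) {
        if (requestedLevel < currentLevel && !allowDowngrade) {
            throw new IllegalArgumentException(
                "Downgrade from " + currentLevel + " to " + requestedLevel
                    + " requires allowDowngrade to be set");
        }
    }
}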

Thanks,

Jun

On Wed, Apr 15, 2020 at 10:44 AM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Great question! Please find my response below.
>
> > 200. My understanding is that if the CLI tool passes the
> > '--allow-downgrade' flag when updating a specific feature, then a future
> > downgrade is possible. Otherwise, the feature is not downgradable. If so,
> I
> > was wondering how the controller remembers this since it can be restarted
> > over time?
>
> (Kowshik): The purpose of the flag was to just restrict the user intent for
> a specific request.
> It seems to me that, to avoid confusion, I could call the flag
> `--try-downgrade` instead.
> Then this makes it clear that the controller just has to consider the ask
> from
> the user as an explicit request to attempt a downgrade.
>
> The flag does not act as an override on the controller's decision making that
> decides whether
> a feature is downgradable (these decisions on whether to allow a feature to be
> downgraded
> from a specific version level can be embedded in the controller code).
>
> Please let me know what you think.
> Sorry if I misunderstood the original question.
>
>
> Cheers,
> Kowshik
>
>
> On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > Thanks for the reply. Makes sense. Just one more question.
> >
> > > 200. My understanding is that if the CLI tool passes the
> > > '--allow-downgrade' flag when updating a specific feature, then a future
> > > downgrade is possible. Otherwise, the feature is not downgradable. If
> so, I
> > was wondering how the controller remembers this since it can be restarted
> > over time?
> >
> > Jun
> >
> >
> > On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks a lot for the feedback and the questions!
> > > Please find my response below.
> > >
> > > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It
> > seems
> > > > that field needs to be persisted somewhere in ZK?
> > >
> > > (Kowshik): Great question! Below is my explanation. Please help me
> > > understand,
> > > if you feel there are cases where we would need to still persist it in
> > ZK.
> > >
> > > Firstly, I have now updated the KIP with my thoughts, under the
> > 'guidelines'
> > > section:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
> > >
> > > The allowDowngrade boolean field is just to restrict the user intent,
> and
> > > to remind
> > > them to double check their intent before proceeding. It should be set
> to
> > > true
> > > by the user in a request, only when the user intent is to forcefully
> > > "attempt" a
> > > downgrade of a specific feature's max version level, to the provided
> > value
> > > in
> > > the request.
> > >
> > > We can extend this safeguard. The controller (on its end) can maintain
> > > rules in the code that, for safety reasons, would outright reject
> certain
> > > downgrades
> > > from a specific max_version_level for a specific feature. Such
> rejections
> > > may
> > > happen depending on the feature being downgraded, and from what version
> > > level.
> > >
> > > The CLI tool only allows a downgrade attempt in conjunction with
> specific
> > > flags and sub-commands. For example, in the CLI tool, if the user uses
> > the
> > > 'downgrade-all' command, or passes '--allow-downgrade' flag when
> > updating a
> > > specific feature, only then the tool will translate this ask to setting
> > > 'allowDowngrade' field in the request to the server.
> > >
> > > > 201. UpdateFeaturesResponse has the following top level fields.
> Should
> > > > those fields be per feature?
> > > >
> > > >   "fields": [
> > > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > > >   "about": "The error code, or 0 if there was no error." },
> > > > { "name": "ErrorMessage", "type": "string", "versions": "0+",
> > > >   "about": "The error message, or null if there was no error." }
> > > >   ]
> > >
> > > (Kowshik): Great question!
> > > As such, the API is transactional, as explained in the sections linked
> > > below.
> > > Either all provided FeatureUpdates were applied, or none.
> > > It's the reason I felt we can have just one error code + message.
> > > Happy to extend this if you feel otherwise. Please let me know.
> > >
> > > Link to sections:
> > >
> > >
> > >
> >
> 

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Kowshik Prakasam
Hi Jun,

Great question! Please find my response below.

> 200. My understanding is that if the CLI tool passes the
> '--allow-downgrade' flag when updating a specific feature, then a future
> downgrade is possible. Otherwise, the feature is not downgradable. If so,
I
> was wondering how the controller remembers this since it can be restarted
> over time?

(Kowshik): The purpose of the flag was to just restrict the user intent for
a specific request.
It seems to me that, to avoid confusion, I could call the flag
`--try-downgrade` instead.
Then this makes it clear that the controller just has to consider the ask
from
the user as an explicit request to attempt a downgrade.

The flag does not act as an override on the controller's decision making that
decides whether
a feature is downgradable (these decisions on whether to allow a feature to be
downgraded
from a specific version level can be embedded in the controller code).
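
A minimal sketch of what such controller-embedded rules could look like, with
hypothetical names and an arbitrary example entry (not the KIP's actual code):

import java.util.Collections;
import java.util.Map;
import java.util.Set;

class DowngradeRules {
    // Hypothetical rule table: feature name -> max version levels from which
    // a downgrade is refused outright, even when allowDowngrade is set.
    private static final Map<String, Set<Short>> FORBIDDEN =
        Collections.singletonMap("group_coordinator",
                                 Collections.singleton((short) 2));

    static boolean isDowngradeAllowed(String feature, short fromLevel,
                                      boolean allowDowngrade) {
        if (!allowDowngrade) {
            return false; // the request did not explicitly ask for a downgrade
        }
        Set<Short> blocked = FORBIDDEN.get(feature);
        return blocked == null || !blocked.contains(fromLevel);
    }
}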

Please let me know what you think.
Sorry if I misunderstood the original question.


Cheers,
Kowshik


On Wed, Apr 15, 2020 at 9:40 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the reply. Makes sense. Just one more question.
>
> 200. My understanding is that if the CLI tool passes the
> '--allow-downgrade' flag when updating a specific feature, then a future
> downgrade is possible. Otherwise, the feature is not downgradable. If so, I
> was wondering how the controller remembers this since it can be restarted
> over time?
>
> Jun
>
>
> On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thanks a lot for the feedback and the questions!
> > Please find my response below.
> >
> > > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It
> seems
> > > that field needs to be persisted somewhere in ZK?
> >
> > (Kowshik): Great question! Below is my explanation. Please help me
> > understand,
> > if you feel there are cases where we would need to still persist it in
> ZK.
> >
> > Firstly, I have now updated the KIP with my thoughts, under the
> 'guidelines'
> > section:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
> >
> > The allowDowngrade boolean field is just to restrict the user intent, and
> > to remind
> > them to double check their intent before proceeding. It should be set to
> > true
> > by the user in a request, only when the user intent is to forcefully
> > "attempt" a
> > downgrade of a specific feature's max version level, to the provided
> value
> > in
> > the request.
> >
> > We can extend this safeguard. The controller (on its end) can maintain
> > rules in the code that, for safety reasons, would outright reject certain
> > downgrades
> > from a specific max_version_level for a specific feature. Such rejections
> > may
> > happen depending on the feature being downgraded, and from what version
> > level.
> >
> > The CLI tool only allows a downgrade attempt in conjunction with specific
> > flags and sub-commands. For example, in the CLI tool, if the user uses
> the
> > 'downgrade-all' command, or passes '--allow-downgrade' flag when
> updating a
> > specific feature, only then the tool will translate this ask to setting
> > 'allowDowngrade' field in the request to the server.
> >
> > > 201. UpdateFeaturesResponse has the following top level fields. Should
> > > those fields be per feature?
> > >
> > >   "fields": [
> > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > >   "about": "The error code, or 0 if there was no error." },
> > > { "name": "ErrorMessage", "type": "string", "versions": "0+",
> > >   "about": "The error message, or null if there was no error." }
> > >   ]
> >
> > (Kowshik): Great question!
> > As such, the API is transactional, as explained in the sections linked
> > below.
> > Either all provided FeatureUpdates were applied, or none.
> > It's the reason I felt we can have just one error code + message.
> > Happy to extend this if you feel otherwise. Please let me know.
> >
> > Link to sections:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guarantees
> >
> > > 202. The /features path in ZK has a field min_version_level. Which API
> > and
> > > tool can change that value?
> >
> > (Kowshik): Great question! Currently this cannot be modified by using the
> > API or the tool.
> > Feature version deprecation (by raising min_version_level) can be done
> only
> > by the Controller directly. The rationale is explained in this section:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Featureversiondeprecation
> >
> >
> 

[jira] [Created] (KAFKA-9875) Flaky Test SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown[exactly_once]

2020-04-15 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9875:
--

 Summary: Flaky Test 
SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown[exactly_once]
 Key: KAFKA-9875
 URL: https://issues.apache.org/jira/browse/KAFKA-9875
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.0
Reporter: Sophie Blee-Goldman


h3. Stacktrace

java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
org.apache.kafka.streams.integration.utils.KafkaEmbedded.deleteTopic(KafkaEmbedded.java:211)
 at 
org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteAllTopicsAndWait(EmbeddedKafkaCluster.java:300)
 at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.cleanStateAfterTest(IntegrationTestUtils.java:148)
 at 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown(SuppressionDurabilityIntegrationTest.java:246)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7484) Fix test SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown()

2020-04-15 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-7484.

Resolution: Fixed

> Fix test 
> SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown()
> 
>
> Key: KAFKA-7484
> URL: https://issues.apache.org/jira/browse/KAFKA-7484
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Dong Lin
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.1.0
>
>
> The test 
> SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown() fails 
> in the 2.1.0 branch Jenkins job. See 
> [https://builds.apache.org/job/kafka-2.1-jdk8/1/testReport/junit/org.apache.kafka.streams.integration/SuppressionDurabilityIntegrationTest/shouldRecoverBufferAfterShutdown_1__eosEnabled_true_/.|https://builds.apache.org/job/kafka-2.1-jdk8/1/testReport/junit/org.apache.kafka.streams.integration/SuppressionDurabilityIntegrationTest/shouldRecoverBufferAfterShutdown_1__eosEnabled_true_/]
> Here is the stack trace: 
> java.lang.AssertionError: Condition not met within timeout 3. Did not 
> receive all 3 records from topic output-raw-shouldRecoverBufferAfterShutdown 
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:278) at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinRecordsReceived(IntegrationTestUtils.java:462)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinRecordsReceived(IntegrationTestUtils.java:343)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.verifyKeyValueTimestamps(IntegrationTestUtils.java:543)
>  at 
> org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.verifyOutput(SuppressionDurabilityIntegrationTest.java:239)
>  at 
> org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown(SuppressionDurabilityIntegrationTest.java:206)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-7484) Fix test SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown()

2020-04-15 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman reopened KAFKA-7484:


failed on a PR with
h3. Stacktrace

java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
org.apache.kafka.streams.integration.utils.KafkaEmbedded.deleteTopic(KafkaEmbedded.java:211)
 at 
org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteAllTopicsAndWait(EmbeddedKafkaCluster.java:300)
 at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.cleanStateAfterTest(IntegrationTestUtils.java:148)

> Fix test 
> SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown()
> 
>
> Key: KAFKA-7484
> URL: https://issues.apache.org/jira/browse/KAFKA-7484
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Dong Lin
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.1.0
>
>
> The test 
> SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown() fails 
> in the 2.1.0 branch Jenkins job. See 
> [https://builds.apache.org/job/kafka-2.1-jdk8/1/testReport/junit/org.apache.kafka.streams.integration/SuppressionDurabilityIntegrationTest/shouldRecoverBufferAfterShutdown_1__eosEnabled_true_/.|https://builds.apache.org/job/kafka-2.1-jdk8/1/testReport/junit/org.apache.kafka.streams.integration/SuppressionDurabilityIntegrationTest/shouldRecoverBufferAfterShutdown_1__eosEnabled_true_/]
> Here is the stack trace: 
> java.lang.AssertionError: Condition not met within timeout 3. Did not 
> receive all 3 records from topic output-raw-shouldRecoverBufferAfterShutdown 
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:278) at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinRecordsReceived(IntegrationTestUtils.java:462)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinRecordsReceived(IntegrationTestUtils.java:343)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.verifyKeyValueTimestamps(IntegrationTestUtils.java:543)
>  at 
> org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.verifyOutput(SuppressionDurabilityIntegrationTest.java:239)
>  at 
> org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown(SuppressionDurabilityIntegrationTest.java:206)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-15 Thread Jun Rao
Hi, Kowshik,

Thanks for the reply. Makes sense. Just one more question.

200. My understanding is that if the CLI tool passes the
'--allow-downgrade' flag when updating a specific feature, then a future
downgrade is possible. Otherwise, the feature is not downgradable. If so, I
was wondering how the controller remembers this since it can be restarted
over time?

Jun


On Tue, Apr 14, 2020 at 6:49 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thanks a lot for the feedback and the questions!
> Please find my response below.
>
> > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
> > that field needs to be persisted somewhere in ZK?
>
> (Kowshik): Great question! Below is my explanation. Please help me
> understand,
> if you feel there are cases where we would need to still persist it in ZK.
>
> Firstly, I have now updated the KIP with my thoughts, under the 'guidelines'
> section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows
>
> The allowDowngrade boolean field is just to restrict the user intent, and
> to remind
> them to double check their intent before proceeding. It should be set to
> true
> by the user in a request, only when the user intent is to forcefully
> "attempt" a
> downgrade of a specific feature's max version level, to the provided value
> in
> the request.
>
> We can extend this safeguard. The controller (on its end) can maintain
> rules in the code that, for safety reasons, would outright reject certain
> downgrades
> from a specific max_version_level for a specific feature. Such rejections
> may
> happen depending on the feature being downgraded, and from what version
> level.
>
> The CLI tool only allows a downgrade attempt in conjunction with specific
> flags and sub-commands. For example, in the CLI tool, if the user uses the
> 'downgrade-all' command, or passes '--allow-downgrade' flag when updating a
> specific feature, only then the tool will translate this ask to setting
> 'allowDowngrade' field in the request to the server.
>
> > 201. UpdateFeaturesResponse has the following top level fields. Should
> > those fields be per feature?
> >
> >   "fields": [
> > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> >   "about": "The error code, or 0 if there was no error." },
> > { "name": "ErrorMessage", "type": "string", "versions": "0+",
> >   "about": "The error message, or null if there was no error." }
> >   ]
>
> (Kowshik): Great question!
> As such, the API is transactional, as explained in the sections linked
> below.
> Either all provided FeatureUpdates were applied, or none.
> It's the reason I felt we can have just one error code + message.
> Happy to extend this if you feel otherwise. Please let me know.
>
> Link to sections:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guarantees
>
> > 202. The /features path in ZK has a field min_version_level. Which API
> and
> > tool can change that value?
>
> (Kowshik): Great question! Currently this cannot be modified by using the
> API or the tool.
> Feature version deprecation (by raising min_version_level) can be done only
> by the Controller directly. The rationale is explained in this section:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Featureversiondeprecation
>
>
> Cheers,
> Kowshik
>
> On Tue, Apr 14, 2020 at 5:33 PM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > Thanks for addressing those comments. Just a few more minor comments.
> >
> > 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
> > that field needs to be persisted somewhere in ZK?
> >
> > 201. UpdateFeaturesResponse has the following top level fields. Should
> > those fields be per feature?
> >
> >   "fields": [
> > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> >   "about": "The error code, or 0 if there was no error." },
> > { "name": "ErrorMessage", "type": "string", "versions": "0+",
> >   "about": "The error message, or null if there was no error." }
> >   ]
> >
> > 202. The /features path in ZK has a field min_version_level. Which API
> and
> > tool can change that value?
> >
> > Jun
> >
> > On Mon, Apr 13, 2020 at 5:12 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the feedback! I have updated the KIP-584 addressing your
> > > comments.
> > > Please find my response below.
> > >
> > > > 100.6 You can look for the sentence "This operation requires ALTER on
> > > > CLUSTER." in KIP-455. Also, you can check its usage in
> > > > KafkaApis.authorize().
> > >
> > > (Kowshik): Done. Great 

Jenkins build is back to normal : kafka-trunk-jdk8 #4435

2020-04-15 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-8870) Prevent dirty reads of Streams state store from Interactive queries

2020-04-15 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-8870.

Resolution: Duplicate

> Prevent dirty reads of Streams state store from Interactive queries
> ---
>
> Key: KAFKA-8870
> URL: https://issues.apache.org/jira/browse/KAFKA-8870
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Vinoth Chandar
>Assignee: Boyang Chen
>Priority: Major
>
> Today, Interactive Queries (IQ) against a Streams state store could see 
> uncommitted data, even with EOS processing guarantees (these are actually 
> orthogonal, but clarifying since EOS may give the impression that everything 
> is dandy). This is caused primarily because state updates in RocksDB are 
> visible even before the Kafka transaction is committed. Thus, if the instance 
> fails, the failed-over instance will redo the uncommitted old transaction, 
> and the following could be possible during recovery:
> Value for key K can go from *V0 → V1 → V2* on active instance A, IQ reads V1, 
> instance A fails, and any failure/rebalancing will leave the standby instance B 
> rewinding offsets and reprocessing, during which time IQ can again see V0 or 
> V1 or any number of previous values for the same key.
> In this issue, we plan to work towards providing consistency for IQ for a 
> single row in a single state store, i.e. once a query sees V1, it can only see 
> either V1 or V2.
>  
>  
>  
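
A purely illustrative sketch of the per-row guarantee described above, with
hypothetical names (this is not the design proposed in the ticket): track the
newest version already served per key and flag any read that would go backward.

import java.util.concurrent.ConcurrentHashMap;

class MonotonicReadGuard<K> {
    private final ConcurrentHashMap<K, Long> lastServed = new ConcurrentHashMap<>();

    // Returns true if serving this version keeps reads monotonic for the key,
    // i.e. once V1 has been observed, only V1 or newer may be served.
    boolean mayServe(K key, long version) {
        long newest = lastServed.merge(key, version, Math::max);
        return version >= newest;
    }
}

Here, mayServe(k, 5) after mayServe(k, 7) would return false, flagging the
stale read that the failover scenario above can produce.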



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9764) Deprecate Stream Simple benchmark

2020-04-15 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9764.

Resolution: Fixed

> Deprecate Stream Simple benchmark
> -
>
> Key: KAFKA-9764
> URL: https://issues.apache.org/jira/browse/KAFKA-9764
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Over the years, this simple benchmark suite has become less valuable. It is 
> built on Jenkins infra, which cannot guarantee consistent results from run to 
> run, and most of the time does not yield any insights. To avoid wasting 
> developers' time testing performance against this poor setup, we will remove 
> the test suite, as it is no longer valuable to the community.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9793) Stream HandleAssignment should guarantee task close

2020-04-15 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9793.

Resolution: Fixed

> Stream HandleAssignment should guarantee task close
> ---
>
> Key: KAFKA-9793
> URL: https://issues.apache.org/jira/browse/KAFKA-9793
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> When triggering the `handleAssignment` call, if a task's preCommit throws, the 
> doomed-to-fail task will not be closed, thus causing a RocksDB metrics 
> recorder re-addition, which is fatal:
>  
>  
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) [2020-03-31 
> 23:50:42,668] INFO 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] 
> stream-thread 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] Handle 
> new assignment with:
>         New active tasks: [1_0, 0_1, 2_0]
>         New standby tasks: []
>         Existing active tasks: [0_1, 1_0, 2_0, 3_1]
>         Existing standby tasks: [] 
> (org.apache.kafka.streams.processor.internals.TaskManager)
>  
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) [2020-03-31 
> 23:50:42,671] INFO 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] 
> stream-thread 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] task 
> [3_1] Prepared clean close 
> (org.apache.kafka.streams.processor.internals.StreamTask)
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) [2020-03-31 
> 23:50:42,671] INFO 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] 
> stream-thread 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] task 
> [0_1] Prepared task for committing 
> (org.apache.kafka.streams.processor.internals.StreamTask)
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) [2020-03-31 
> 23:50:42,682] ERROR 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] 
> stream-thread 
> [stream-soak-test-714fba71-3f5c-4418-8613-22d7b085949c-StreamThread-3] task 
> [1_0] Failed to flush state store logData10MinuteFinalCount-store:  
> (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) 
> org.apache.kafka.streams.errors.TaskMigratedException: Error encountered 
> sending record to topic windowed-node-counts for task 1_0 due to:
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) 
> org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an 
> operation with an old epoch. Either there is a newer producer with the same 
> transactionalId, or the producer's transaction has been expired by the broker.
> [2020-03-31T16:50:43-07:00] 
> (streams-soak-trunk-eos_soak_i-022f109d75764a250_streamslog) Written offsets 
> would not be recorded and no more records would be sent since the producer is 
> fenced, indicating the task may be migrated out; it means all tasks belonging 
> to this thread should be migrated.
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:202)
>         at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$0(RecordCollectorImpl.java:185)
>         at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1352)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
>         at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.abort(ProducerBatch.java:159)
>         at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:768)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.maybeAbortBatches(Sender.java:485)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:304)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
>         at java.lang.Thread.run(Thread.java:748)
>  
> The correct solution is to wrap the whole code block in a try-catch to avoid 
> an unexpected close failure.
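
A sketch of the shape of that fix, with hypothetical types standing in for the
Streams internals (not the actual TaskManager code):

import java.util.List;

interface Task {
    void prepareCommit();  // may throw, e.g. when the producer is fenced
    void closeClean();
    void closeDirty();
}

class AssignmentCloser {
    // Every task is closed even if its pre-commit step throws, so the
    // RocksDB metrics recorder is always removed before re-registration.
    static void closeAll(List<Task> tasks) {
        for (Task task : tasks) {
            try {
                task.prepareCommit();
                task.closeClean();
            } catch (RuntimeException e) {
                // Do not let one failure skip the close: fall back to a
                // dirty close so the task is still torn down.
                task.closeDirty();
            }
        }
    }
}

The key point is that the catch clause still closes the task, so the metrics
recorder cannot be registered twice on the next assignment.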



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9873) Service handler threads hang when DNS fails

2020-04-15 Thread zhang chao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhang chao resolved KAFKA-9873.
---
Resolution: Duplicate

> Service handler threads hang when DNS fails
> -
>
> Key: KAFKA-9873
> URL: https://issues.apache.org/jira/browse/KAFKA-9873
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.4.1
>Reporter: zhang chao
>Priority: Major
>  Labels: DNS, acl, dns
> Attachments: kast.log
>
>
> As shown in the attachment, after security authentication is enabled, ACL 
> authorization fails, causing all handler threads to stop working



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1356

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9797; Fix


--
[...truncated 3.04 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED


[jira] [Created] (KAFKA-9874) Broker cannot work when DNS is faulty

2020-04-15 Thread para (Jira)
para created KAFKA-9874:
---

 Summary: Broker cannot work when DNS is faulty
 Key: KAFKA-9874
 URL: https://issues.apache.org/jira/browse/KAFKA-9874
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.1
Reporter: para
 Attachments: kast.log

In 2.3.1, SASL authentication blocks when the DNS service is faulty, caused 
by the native Java function getHostByAddr.

But the hostname is never used, so the default address name could be used 
instead.
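
An illustrative sketch of the suggested mitigation (a hypothetical helper, not
the actual broker code): skip the blocking reverse-DNS lookup and fall back to
the literal address when a hostname is not actually needed.

import java.net.InetAddress;

class AddressName {
    static String nameFor(InetAddress addr, boolean hostnameRequired) {
        // getCanonicalHostName() triggers a reverse lookup (the native
        // getHostByAddr) and can block when DNS is down;
        // getHostAddress() never touches DNS.
        return hostnameRequired ? addr.getCanonicalHostName()
                                : addr.getHostAddress();
    }
}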

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9797) Fix flaky system test TestSecurityRollingUpgrade.test_enable_separate_interbroker_listener

2020-04-15 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-9797.
---
  Reviewer: Manikumar
Resolution: Fixed

> Fix flaky system test 
> TestSecurityRollingUpgrade.test_enable_separate_interbroker_listener
> --
>
> Key: KAFKA-9797
> URL: https://issues.apache.org/jira/browse/KAFKA-9797
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.6.0
>
>
> TestSecurityRollingUpgrade.test_enable_separate_interbroker_listener is 
> supposed to test a non-disruptive upgrade of the inter-broker listener using 
> a rolling bounce. But the test updates the inter-broker listener without 
> enabling the new listener/security_protocol across all brokers, making this a 
> disruptive upgrade where brokers are in an inconsistent state until all brokers 
> have been upgraded. The test must first enable the new listener across all 
> brokers and then update the inter-broker listener to the new listener to 
> ensure that the cluster is functioning during the upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9873) Service handler threads hang when DNS fails

2020-04-15 Thread zhang chao (Jira)
zhang chao created KAFKA-9873:
-

 Summary: Service handler threads hang when DNS fails
 Key: KAFKA-9873
 URL: https://issues.apache.org/jira/browse/KAFKA-9873
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.1
Reporter: zhang chao
 Attachments: kast.log

As shown in the attachment, after security authentication is enabled, ACL 
authorization fails, causing all handler threads to stop working



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1355

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[gwen] MINOR: cleaner resume log message is misleading

[github] [KAFKA-9826] Handle an unaligned first dirty offset during log 
cleaning.


--
[...truncated 3.04 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4434

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[gwen] MINOR: cleaner resume log message is misleading

[github] [KAFKA-9826] Handle an unaligned first dirty offset during log 
cleaning.


--
[...truncated 6.05 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> 

Build failed in Jenkins: kafka-trunk-jdk11 #1354

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8611: Refactor KStreamRepartitionIntegrationTest (#8470)


--
[...truncated 6.08 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Resolved] (KAFKA-9664) Flaky Test KafkaStreamsTest#testStateThreadClose

2020-04-15 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-9664.
--
Resolution: Cannot Reproduce

> Flaky Test KafkaStreamsTest#testStateThreadClose
> 
>
> Key: KAFKA-9664
> URL: https://issues.apache.org/jira/browse/KAFKA-9664
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: flaky-test
>
> org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose FAILED
> 14:23:21  java.lang.AssertionError: Condition not met within timeout 1.
> Thread never stopped.
> 14:23:21    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:278)
> 14:23:21    at org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:204)
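
For context, the error above comes from a poll-until-true test helper. A 
minimal sketch of that pattern (illustrative Java; the names and polling 
interval are assumptions, not Kafka's exact TestUtils implementation):

    import java.util.function.BooleanSupplier;

    public final class WaitUtil {
        // Polls the condition until it returns true or the timeout elapses,
        // then fails with the same shape of AssertionError seen in the trace.
        public static void waitForCondition(BooleanSupplier condition,
                                            long timeoutMs,
                                            String message) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                if (condition.getAsBoolean()) {
                    return;
                }
                Thread.sleep(100); // poll interval (assumed)
            }
            throw new AssertionError(
                "Condition not met within timeout " + timeoutMs + ". " + message);
        }
    }

A flaky failure of this kind means the condition (here, the streams state 
thread stopping) never became true before the deadline on that run.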





[jira] [Created] (KAFKA-9872) After Kafka is restarted abnormally, KafkaConsumer cannot read messages

2020-04-15 Thread Lee chen (Jira)
Lee chen created KAFKA-9872:
---

 Summary: After Kafka is restarted abnormally, KafkaConsumer cannot read messages
 Key: KAFKA-9872
 URL: https://issues.apache.org/jira/browse/KAFKA-9872
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.0
Reporter: Lee chen


Recently I ran into a problem where a broker was restarted after the server 
shut down directly. After I restarted the broker, I found my consumer could 
not read data.
1. My consumer group is named "0", and __consumer_offsets has 50 partitions.
2. By Kafka's calculation, group "0" maps to partition __consumer_offsets-48.
3. KafkaConsumers in group "0" cannot read data, but when I switch to 
another group that does not map to partition-48, all consumers work 
normally. It seems the log dir of __consumer_offsets-48 is damaged.
4. When I check the log dir of __consumer_offsets-48, I find a very large 
number of .snap files.
5. The number recorded in the latest log file of __consumer_offsets-48 is 
smaller than the numbers recorded in recovery-point-offset-checkpoint and 
replication-offset-checkpoint.
6. Describing __consumer_offsets with kafka-topics.sh looks normal; all 
replicas are contained in the ISR.

As I understand it, if a log dir were damaged, the broker might not be able 
to restart at all.
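
The mapping in step 2 can be checked directly: the group coordinator 
assigns a group to a __consumer_offsets partition by taking a non-negative 
hash of the group id modulo the partition count, and for group "0" with 50 
partitions that is indeed partition 48. A minimal sketch mirroring that 
calculation (illustrative Java, not the broker's code):

    public class GroupPartitionCheck {
        // Non-negative hash of the group id, modulo the number of
        // __consumer_offsets partitions (50 here, as in the report).
        static int partitionFor(String groupId, int numOffsetsPartitions) {
            return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
        }

        public static void main(String[] args) {
            System.out.println(partitionFor("0", 50)); // prints 48
        }
    }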







Build failed in Jenkins: kafka-2.4-jdk8 #187

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 7.05 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest 

Jenkins build is back to normal : kafka-trunk-jdk8 #4433

2020-04-15 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.5-jdk8 #93

2020-04-15 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 5.88 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false]