[jira] [Created] (KAFKA-9521) Flaky Test DeleteConsumerGroupsTest#testDeleteCmdEmptyGroup

2020-02-06 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9521:
--

 Summary: Flaky Test 
DeleteConsumerGroupsTest#testDeleteCmdEmptyGroup
 Key: KAFKA-9521
 URL: https://issues.apache.org/jira/browse/KAFKA-9521
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Sophie Blee-Goldman


Failed for me locally. Lost the actual stack trace/output but the error was 
"java.lang.AssertionError: The group did not become empty as expected"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8737) TaskMigrated Exception while rebalancing kafka streams

2020-02-06 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8737.
--
  Assignee: Guozhang Wang  (was: Bill Bejeck)
Resolution: Duplicate

> TaskMigrated Exception while rebalancing kafka streams
> --
>
> Key: KAFKA-8737
> URL: https://issues.apache.org/jira/browse/KAFKA-8737
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0, 1.0.1
> Environment: 20 partitions, 1 topic, 8 streamer service instances.
> Consumer-group lag snapshot (TOPIC / PARTITION / CURRENT-OFFSET / LOG-END-OFFSET / LAG):
> topic-region-1   9   7841726   8236017   394291
> topic-region-1  15   7421710   7467666    45956
> topic-region-1  19   7737360   8120611   383251
> (consumer id for all three partitions: 
> streams-subscriberstopic-region-1-29d615ed-4243-4b9d-90b7-9c517aa0f2e3-StreamThread-1-consumer-0276e83d-40b5-4b44-b764-7d29e0dab663)
>Reporter: KUMAR
>Assignee: Guozhang Wang
>Priority: Major
>
> Kafka Streams throws the following exception during restart of a stream client 
> service: 
> o.a.k.s.p.internals.StreamThread.? - stream-thread 
> [streams-subscriberstopic-region-1-32d968e3-f892-4772-a7a4-6f684d7e43c9-StreamThread-1]
>  Detected a task that got migrated to another thread. This implies that this 
> thread missed a rebalance and dropped out of the consumer group. Trying to 
> rejoin the consumer group now.
> org.apache.kafka.streams.errors.TaskMigratedException: Log end offset of 
> topic-region-1-12 should not change while restoring: old end offset 6286727, 
> current offset 6380997
>  
> Kafka version is 1.0.0 and we have back-merged the fix for KAFKA-6269:
> [https://github.com/apache/kafka/pull/4300/files]
> However, we observe that there seems to be an issue in rebalance when 
> "auto.offset.reset" is configured as "latest". Based on log analysis we see the 
> following behavior - 
>  # StreamThread starts a restore consumer 
>  # While Fetching it gets offset out of range                               
> o.a.k.c.consumer.internals.Fetcher.? - [Consumer 
> clientId=streams-subscriberstopic-region-1-11b2d7fb-11ce-4b0b-a40a-388d3c7b6bc9-StreamThread-1-restore-
>  consumer, groupId=] Fetch READ_UNCOMMITTED at offset 246431 for partition 
> topic-region-1-12 returned fetch data (error=OFFSET_OUT_OF_RANGE, 
> highWaterMark=-1, lastStableOffset = -1, logStartOffset = -1,
>  abortedTransactions = null, recordsSizeInBytes=0) 
>  # Fetcher tries to reset the offset 
>  # While resetting the offset, it appears to change the consumer's position, 
> causing the TaskMigrated exception
> The same test repeated with "auto.offset.reset" configured as "earliest" does 
> not throw any TaskMigrated exception, since in the earliest case we are not 
> resetting the restore consumer position.
>  
> Please let us know whether this analysis is correct and whether a fix is 
> needed in the offset-reset path when set to latest.
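For illustration only, the invariant that fails here can be sketched in isolation (a simplified stand-in for the restore-time check, not Kafka's actual StoreChangelogReader code; names and values are taken from the report above):

```java
// Simplified sketch of the restore-time invariant described in this report:
// if the changelog's end offset moves while a task is restoring, Kafka
// Streams concludes the task has been migrated to another thread.
public class RestoreCheck {

    static void checkEndOffset(long endOffsetAtStart, long currentEndOffset) {
        if (currentEndOffset != endOffsetAtStart) {
            // In Kafka Streams this surfaces as a TaskMigratedException.
            throw new IllegalStateException(
                "Log end offset should not change while restoring: old end offset "
                + endOffsetAtStart + ", current offset " + currentEndOffset);
        }
    }

    public static void main(String[] args) {
        checkEndOffset(6286727L, 6286727L); // unchanged end offset: restore proceeds
        try {
            // The offsets from the reported exception: the reset with
            // auto.offset.reset=latest moved the position, so the check fails.
            checkEndOffset(6286727L, 6380997L);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```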





Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-02-06 Thread Richard Yu
Hi all,

I've had just a few thoughts regarding the forwarding of old and new values. As Matthias already mentioned, there are two
separate priorities by which we can judge this KIP:

1. An optimization perspective: in this case, the user would prefer the
impact of this KIP to be as minimal as possible. By that logic, performing
stateless operations twice could be unacceptable, since those operations
can be expensive.

2. A semantic-correctness perspective: unlike the optimization approach, we
are more concerned with all KTable operations obeying the same emission
policy, i.e. emit-on-change. In this case, a discrepancy would not be
tolerated, even though an extra performance cost would be incurred.
Therefore, we would follow Matthias's approach and perform the operation
once on the old value and once on the new.

The choice here, I think, is more black and white than in between. The
second option favors users with inexpensive stateless operations, while the
first suits those with more expensive ones. So the simplest solution is
probably to let the user choose between the two behaviors via a config that
switches between them.

It's the simplest compromise I can come up with at the moment, but if you
think you have a better plan that balances the tradeoffs more effectively,
please let us know. :)

Best,
Richard
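To make the tradeoff concrete, here is a rough, hypothetical sketch of the "belt-and-suspenders" no-op check discussed in this thread (the MyPair class and the serialization shape are made up for illustration; this is not Kafka's actual code):

```java
import java.util.Arrays;
import java.util.Objects;

// Hypothetical sketch of the no-op detection discussed in this thread:
// drop an update if equals() says the values match, OR if their serialized
// bytes happen to be available anyway and are identical.
public class EmitOnChangeSketch {

    // A value class that does NOT override equals(), like John's MyPair.
    static final class MyPair {
        final String key; final int num;
        MyPair(String key, int num) { this.key = key; this.num = num; }
        byte[] serialize() { return (key + ":" + num).getBytes(); }
    }

    static boolean isNoOpUpdate(Object oldVal, Object newVal,
                                byte[] oldBytes, byte[] newBytes) {
        if (Objects.equals(oldVal, newVal)) {
            return true; // only reliable if the class overrides equals()
        }
        // Fall back to the serialized form when it is already at hand
        // (stateful operations, repartition topics).
        return oldBytes != null && newBytes != null
                && Arrays.equals(oldBytes, newBytes);
    }

    public static void main(String[] args) {
        MyPair a1 = new MyPair("A", 1);
        MyPair a2 = new MyPair("A", 1);
        // The default Object.equals() misses the semantic no-op:
        System.out.println(a1.equals(a2)); // false
        // ...but the byte comparison catches it:
        System.out.println(isNoOpUpdate(a1, a2, a1.serialize(), a2.serialize())); // true
    }
}
```

The byte comparison is only an opportunistic second line of defense: it costs nothing when the bytes already exist, and never falsely drops a real change.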

On Wed, Feb 5, 2020 at 5:12 PM John Roesler  wrote:

> Hi all,
>
> Thanks for the thoughtful comments!
>
> I need more time to reflect on your thoughts, but just wanted to offer
> a quick clarification about equals().
>
> I only meant that we can't be sure if a class's equals() implementation
> returns true for two semantically identical instances. I.e., if a class
> doesn't
> override the default equals() implementation, then we would see behavior
> like:
>
> new MyPair("A", 1).equals(new MyPair("A", 1)) returns false
>
> In that case, I would still like to catch no-op updates by comparing the
> serialized form of the records when we happen to have it serialized anyway
> (such as when the operation is stateful, or when we're sending to a
> repartition topic and we have both the "new" and "old" value from
> upstream).
>
> I didn't mean to suggest we'd try to use reflection to detect whether
> equals
> is implemented, although that is a neat trick. I was thinking more of a
> belt-and-suspenders algorithm where we do the check for no-ops based on
> equals() and then _also_ check the serialized bytes for equality.
>
> Thanks,
> -John
>
> On Wed, Feb 5, 2020, at 15:31, Ted Yu wrote:
> > Thanks for the comments, Matthias.
> >
> > w.r.t. requirement of an `equals()` implementation, each template type
> > would have an equals() method. We can use the following code to know
> > whether it is provided by JVM or provided by user.
> >
> > boolean customEquals = false;
> > try {
> >     Class<?> cls = value.getClass()
> >         .getMethod("equals", Object.class)
> >         .getDeclaringClass();
> >     if (!Object.class.equals(cls)) {
> >         // equals() is declared below Object, i.e. user-provided
> >         customEquals = true;
> >     }
> > } catch (NoSuchMethodException nsme) {
> >     // equals(Object) is always defined on Object, so this cannot happen
> > }
> >
> > The next question is: what if the user doesn't provide an equals() method?
> > Would we automatically fall back to emit-on-update?
> >
> > Cheers
> >
> > On Tue, Feb 4, 2020 at 1:37 PM Matthias J. Sax  wrote:
> >
> > > -BEGIN PGP SIGNED MESSAGE-
> > > Hash: SHA512
> > >
> > > First a high level comment:
> > >
> > > Overall, I would like to take one step back, and make sure we are
> > > discussing on the same level. Originally, I understood this KIP as a
> > > proposed change of _semantics_; however, given the latest discussion
> > > it seems it's actually not -- it's more an _optimization_ proposal.
> > > Hence, we only need to make sure that this optimization does not break
> > > existing semantics. Is this the right way to think about it?
> > >
> > > If yes, then it might actually be ok to have different behavior
> > > depending on whether there is a materialized KTable or not. So far, we never
> > > defined a public contract about our emit strategy and it seems this
> > > KIP does not define one either.
> > >
> > > Hence, I don't have as strong of an opinion about sending oldValues
> > > for example any longer. I guess the question is really, what can we
> > > implement in a reasonable way.
> > >
> > >
> > >
> > > Other comments:
> > >
> > >
> > > @Richard:
> > >
> > > Can you please add the KIP to the KIP overview table: it's missing
> > > (https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals).
> > >
> > >
> > > @Bruno:
> > >
> > > You mentioned caching. I think it's irrelevant (orthogonal) and we can
> > > discuss this KIP without considering it.
> > >
> > >
> > > @John:
> > >
> > > > Even in the source table, we forward the updated record with the
> > > > higher of the two timestamps. So the example is more like:
> > >
> > > That is not correct
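As an aside, the reflection-based check Ted proposes earlier in this thread can be exercised with plain JDK classes. A self-contained sketch (the NoEquals class is made up for illustration):

```java
// Self-contained illustration of the reflection check from this thread:
// getMethod("equals", Object.class).getDeclaringClass() reveals whether a
// class (or a superclass other than Object) overrides equals().
public class CustomEqualsCheck {

    // Hypothetical value class that does NOT override equals().
    static final class NoEquals {
        final int x;
        NoEquals(int x) { this.x = x; }
    }

    static boolean hasCustomEquals(Object value) {
        try {
            Class<?> declaring = value.getClass()
                    .getMethod("equals", Object.class)
                    .getDeclaringClass();
            return !Object.class.equals(declaring);
        } catch (NoSuchMethodException nsme) {
            // equals(Object) always exists on Object, so this is unreachable.
            throw new AssertionError(nsme);
        }
    }

    public static void main(String[] args) {
        System.out.println(hasCustomEquals("some string"));   // true: String overrides equals()
        System.out.println(hasCustomEquals(new NoEquals(1))); // false: inherits Object.equals()
    }
}
```

This only detects whether equals() was overridden somewhere; it cannot tell whether the override is semantically meaningful, which is why the serialized-bytes comparison remains useful as a fallback.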

[jira] [Resolved] (KAFKA-8940) Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance

2020-02-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8940.

Resolution: Fixed

Nevermind. The test is fine. The PR broke the test. Can reproduce it locally. 
Hence, investigating the PR now.

> Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance
> -
>
> Key: KAFKA-8940
> URL: https://issues.apache.org/jira/browse/KAFKA-8940
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.5.0
>
>
> I lost the screen shot unfortunately... it reports the set of expected 
> records does not match the received records.





[jira] [Created] (KAFKA-9520) Deprecate ZooKeeper access for kafka-reassign-partitions.sh

2020-02-06 Thread Sanjana Kaundinya (Jira)
Sanjana Kaundinya created KAFKA-9520:


 Summary: Deprecate ZooKeeper access for 
kafka-reassign-partitions.sh
 Key: KAFKA-9520
 URL: https://issues.apache.org/jira/browse/KAFKA-9520
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 2.5.0
Reporter: Sanjana Kaundinya
 Fix For: 2.5.0


As part of KIP-555, ZooKeeper access must be deprecated for 
kafka-reassign-partitions.sh.





[jira] [Created] (KAFKA-9519) Deprecate ZooKeeper access for kafka-configs.sh

2020-02-06 Thread Sanjana Kaundinya (Jira)
Sanjana Kaundinya created KAFKA-9519:


 Summary: Deprecate ZooKeeper access for kafka-configs.sh   
 Key: KAFKA-9519
 URL: https://issues.apache.org/jira/browse/KAFKA-9519
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 2.5.0
Reporter: Sanjana Kaundinya
Assignee: Sanjana Kaundinya
 Fix For: 2.5.0


As part of KIP-555, ZooKeeper access must be deprecated for 
kafka-configs.sh.





Re: [VOTE] KIP-518: Allow listing consumer groups per state

2020-02-06 Thread Colin McCabe
Hi Mickael,

Thanks for the KIP.  I left a comment on the DISCUSS thread as well.

best,
Colin


On Thu, Feb 6, 2020, at 08:58, Mickael Maison wrote:
> Hi Manikumar,
> 
> I believe I've answered David's comments in the DISCUSS thread.
> Thanks
> 
> On Wed, Jan 15, 2020 at 10:15 AM Manikumar  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for the KIP.  Can you respond to the comments from David on discuss
> > thread?
> >
> > Thanks,
>


Re: [DISCUSS] KIP-518: Allow listing consumer groups per state

2020-02-06 Thread Colin McCabe
Hi Mickael,

Can you please specify what the result is when a newer client tries to use this 
on an older broker?  Does that result in an UnsupportedVersionException?

I would prefer an Optional in the Java API so that “show all groups” can be 
EMPTY.

best,
Colin


On Mon, Jan 27, 2020, at 07:53, Mickael Maison wrote:
> Hi David,
> 
> Did that answer your questions? or do you have any further feedback?
> 
> Thanks
> 
> On Thu, Jan 16, 2020 at 4:11 PM Mickael Maison  
> wrote:
> >
> > Hi David,
> >
> > Thanks for taking a look.
> > 1) Yes, updated
> >
> > 2) I had not considered that but indeed this would be useful if the
> > request contained multiple states and would avoid doing another call.
> > The ListGroups response already includes the group ProtocolType, so I
> > guess we could add the State as well. The response will still be
> > significantly smaller than DescribeGroups. With this change, one thing
> > to note is that having Describe on the Cluster resource will allow
> > retrieving the state of all groups. Currently retrieving the state of
> > a group requires Describe on the Group.
> >
> > 3) Yes if ListGroups response includes the state, it makes sense to
> > expose it via the command line tool and the AdminClient. With
> > ConsumerGroupCommand, to avoid compatibility issues we can only print
> > states when the --states flag is specified.
> >
> > I've updated the KIP accordingly.
> >
> > On Mon, Jan 13, 2020 at 12:20 PM David Jacot  wrote:
> > >
> > > Hi Michael,
> > >
> > > Please, excuse me for my late feedback. I've got a few questions/comments
> > > while reviewing the KIP.
> > >
> > > 1. I would suggest to clearly state in the documentation of the state 
> > > field
> > > that omitting it or providing an empty list means "all".
> > >
> > > 2. Have you considered including the state in the response? The API allows
> > > to search for multiple states so it could be
> > > convenient to have the state in the response to let the user differentiate
> > > the groups.
> > >
> > > 3. If 2. makes sense, I would suggest to also include it in the 
> > > information
> > > printed out by the ConsumerGroupCommand tool. Putting
> > > myself in the shoes of an operator, I would like to see the state of each
> > > group if I select specific states. Perhaps we could
> > > use a table instead of the simple list used today. What do you think?
> > >
> > > Thanks for the KIP!
> > >
> > > Best,
> > > David
> > >
> > > On Mon, Nov 11, 2019 at 12:40 PM Mickael Maison 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > If there's no more feedback, I'll open a vote in the next few days.
> > > >
> > > > Thanks
> > > >
> > > >
> > > > On Fri, Nov 1, 2019 at 4:27 PM Mickael Maison 
> > > > wrote:
> > > > >
> > > > > Hi Tom,
> > > > >
> > > > > Thanks for taking a look at the KIP!
> > > > > You are right, even if we serialize the field as a String, we should
> > > > > use ConsumerGroupState in the API.
> > > > > As suggested, I've also updated the API so a list of states is 
> > > > > specified.
> > > > >
> > > > > Regards,
> > > > >
> > > > >
> > > > > On Tue, Oct 22, 2019 at 10:03 AM Tom Bentley 
> > > > wrote:
> > > > > >
> > > > > > Hi Mickael,
> > > > > >
> > > > > > Thanks for the KIP.
> > > > > >
> > > > > > The use of String to represent the desired state in the API seems 
> > > > > > less
> > > > > > typesafe than would be ideal. Is there a reason not to use the 
> > > > > > existing
> > > > > > ConsumerGroupState enum (even if the state is serialized as a 
> > > > > > String)?
> > > > > >
> > > > > > While you say that the list-of-names result from listConsumerGroups 
> > > > > > is
> > > > a
> > > > > > reason not to support supplying a set of desired states I don't find
> > > > that
> > > > > > argument entirely convincing. Sure, if the results are going to be
> > > > shown to
> > > > > > a user then it would be ambiguous and multiple queries would be
> > > > needed. But
> > > > > > it seems quite possible that the returned list of groups will
> > > > immediately
> > > > > > be used in a describeConsumerGroups query (for example, to show a 
> > > > > > user
> > > > > > additional information about the groups of interest). 
> > > > > > In
> > > > that
> > > > > > case the grouping by state could be done on the descriptions, and 
> > > > > > some
> > > > RPCs
> > > > > > could be avoided. It would also avoid the race inherent in making
> > > > multiple
> > > > > > listConsumerGroups requests. So supporting a set of states isn't
> > > > entirely
> > > > > > worthless and it wouldn't really add very much complexity.
> > > > > >
> > > > > > Kind regards,
> > > > > >
> > > > > > Tom
> > > > > >
> > > > > > On Mon, Oct 21, 2019 at 5:54 PM Mickael Maison <
> > > > mickael.mai...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Bump
> > > > > > > Now that the rush for 2.4.0 is ending I hope to get some feedback
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > On Mon, Sep 9, 2019 at 

Build failed in Jenkins: kafka-2.3-jdk8 #170

2020-02-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update authorization primitives in security.html (#7508)


--
[...truncated 2.95 MB...]

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > testTruncateHead STARTED

kafka.log.ProducerStateManagerTest > testTruncateHead PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[20] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[20] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[21] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[21] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[22] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[22] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[23] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[23] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[24] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[24] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[25] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideC

[jira] [Created] (KAFKA-9518) NullPointerException on out-of-order topologies

2020-02-06 Thread Murilo Tavares (Jira)
Murilo Tavares created KAFKA-9518:
-

 Summary: NullPointerException on out-of-order topologies
 Key: KAFKA-9518
 URL: https://issues.apache.org/jira/browse/KAFKA-9518
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.1, 2.4.0
Reporter: Murilo Tavares
 Attachments: kafka-streams-testing.zip

I have a Kafka Streams application that dynamically builds a topology based on a 
Map of input-to-output topics. Since the map was not sorted, iteration order was 
unpredictable, and different instances could build the topology in different 
orders. When this happens, Kafka Streams throws an exception during REBALANCE.

 

I was able to reproduce this using the attached Java project. The project is a 
pretty simple Maven project with one class. It starts 2 instances in parallel 
with the same input-to-output topics, but one instance takes the topics in 
reversed order.
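The ordering problem can be reproduced with plain JDK maps (the topic names below are illustrative); building the topology from a sorted view of the map would make every instance agree on the node order:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrates the root cause: two instances insert the same input-to-output
// topic pairs in different orders. An insertion-ordered map then yields
// different iteration orders (and hence differently-built topologies),
// while a sorted map yields the same order on every instance.
public class TopicOrdering {

    static List<String> iterationOrder(Map<String, String> topics) {
        return new ArrayList<>(topics.keySet());
    }

    public static void main(String[] args) {
        Map<String, String> instanceA = new LinkedHashMap<>();
        instanceA.put("input-1", "output-1");
        instanceA.put("input-2", "output-2");

        Map<String, String> instanceB = new LinkedHashMap<>();
        instanceB.put("input-2", "output-2"); // reversed insertion order
        instanceB.put("input-1", "output-1");

        System.out.println(iterationOrder(instanceA)); // [input-1, input-2]
        System.out.println(iterationOrder(instanceB)); // [input-2, input-1]

        // Possible workaround: iterate a sorted view so all instances agree.
        System.out.println(iterationOrder(new TreeMap<>(instanceB))); // [input-1, input-2]
    }
}
```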

 

The exception is this:
Exception in thread 
"MY-APP-81e9dc0b-1459-4499-93d6-b5c03da60e18-StreamThread-1" 
org.apache.kafka.streams.errors.StreamsException: stream-thread 
[MY-APP-81e9dc0b-1459-4499-93d6-b5c03da60e18-StreamThread-1] Failed to 
rebalance. at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:852)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:743)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:698)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:671)
 Caused by: java.lang.NullPointerException at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:234)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:176)
 at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:355)
 at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:313)
 at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:298)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.addNewActiveTasks(TaskManager.java:160)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:120)
 at 
org.apache.kafka.streams.processor.internals.StreamsRebalanceListener.onPartitionsAssigned(StreamsRebalanceListener.java:77)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:272)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:400)
 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:421)
 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:340)
 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:471)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231) 
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211) 
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:843)
 ... 3 more





[jira] [Created] (KAFKA-9517) KTable Joins Without Materialized Argument Yield Results That Further Joins NPE On

2020-02-06 Thread Paul Snively (Jira)
Paul Snively created KAFKA-9517:
---

 Summary: KTable Joins Without Materialized Argument Yield Results 
That Further Joins NPE On
 Key: KAFKA-9517
 URL: https://issues.apache.org/jira/browse/KAFKA-9517
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.0
Reporter: Paul Snively


The `KTable` API implemented 
[here|https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KTableImpl.java#L842-L844]
 calls `doJoinOnForeignKey` with an argument of `Materialized.with(null, 
null)`, as apparently do several other APIs. As the comment spanning [these 
lines|https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KTableImpl.java#L1098-L1099]
 makes clear, the result is a `KTable` whose `valueSerde` (as a `KTableImpl`) 
is `null`. Therefore, attempts to `join` etc. on the resulting `KTable` fail 
with a `NullPointerException`.

While there is an obvious workaround—explicitly construct the required 
`Materialized` and use the APIs that take it as an argument—I have to admit I 
find the existence of public APIs with this sort of bug, particularly when the 
bug is literally documented as a comment in the source code, astonishing to the 
point of incredulity. It calls the quality and trustworthiness of Kafka Streams 
into serious question, and if a resolution is not forthcoming within a week, we 
will be left with no other option but to consider technical alternatives.





Re: [VOTE] KIP-545 support automated consumer offset sync across clusters in MM 2.0

2020-02-06 Thread Andrew Schofield
+1 (non-binding)
Thanks for the KIP.

On 06/02/2020, 17:06, "Edoardo Comar"  wrote:

+1 (non-binding)
thanks for the KIP !

On Tue, 14 Jan 2020 at 13:57, Navinder Brar 

wrote:

> +1 (non-binding)
> Navinder
> On Tuesday, 14 January, 2020, 07:24:02 pm IST, Ryanne Dolan <
> ryannedo...@gmail.com> wrote:
>
>  Bump. We've got 4 non-binding and one binding vote.
>
> Ryanne
>
> On Fri, Dec 13, 2019, 1:44 AM Tom Bentley  wrote:
>
> > +1 (non-binding)
> >
> > On Thu, Dec 12, 2019 at 6:33 PM Andrew Schofield <
> > andrew_schofi...@live.com>
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > On 12/12/2019, 14:20, "Mickael Maison" 
> > wrote:
> > >
> > >+1 (binding)
> > >Thanks for the KIP!
> > >
> > >On Thu, Dec 5, 2019 at 12:56 AM Ryanne Dolan  >
> > > wrote:
> > >>
> > >> Bump. We've got 2 non-binding votes so far.
> > >>
> > >> On Wed, Nov 13, 2019 at 3:32 PM Ning Zhang <
> ning2008w...@gmail.com
> > >
> > > wrote:
> > >>
> > >> > My current plan is to implement this in "MirrorCheckpointTask"
> > >> >
> > >> > On 2019/11/02 03:30:11, Xu Jianhai 
> wrote:
> > >> > > I think this kip will implement a task in sinkTask ? right?
> > >> > >
> > >> > > On Sat, Nov 2, 2019 at 1:06 AM Ryanne Dolan <
> > > ryannedo...@gmail.com>
> > >> > wrote:
> > >> > >
> > >> > > > Hey y'all, Ning Zhang and I would like to start the vote 
for
> > > the
> > >> > following
> > >> > > > small KIP:
> > >> > > >
> > >> > > >
> > >> > > >
> > >> >
> > >
> >
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
> > >> > > >
> > >> > > > This is an elegant way to automatically write consumer 
group
> > > offsets to
> > >> > > > downstream clusters without breaking existing use cases.
> > > Currently, we
> > >> > rely
> > >> > > > on external tooling based on RemoteClusterUtils and
> > >> > kafka-consumer-groups
> > >> > > > command to write offsets. This KIP bakes this functionality
> > > into MM2
> > >> > > > itself, reducing the effort required to failover/failback
> > > workloads
> > >> > between
> > >> > > > clusters.
> > >> > > >
> > >> > > > Thanks for the votes!
> > >> > > >
> > >> > > > Ryanne
> > >> > > >
> > >> > >
> > >> >
> > >
> > >
> > >
> >




[jira] [Created] (KAFKA-9516) Flaky Test PlaintextProducerSendTest#testNonBlockingProducer

2020-02-06 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9516:
--

 Summary: Flaky Test 
PlaintextProducerSendTest#testNonBlockingProducer
 Key: KAFKA-9516
 URL: https://issues.apache.org/jira/browse/KAFKA-9516
 Project: Kafka
  Issue Type: Bug
  Components: core, producer , tools, unit tests
Reporter: Matthias J. Sax


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4521/testReport/junit/kafka.api/PlaintextProducerSendTest/testNonBlockingProducer/]
{quote}java.util.concurrent.TimeoutException: Timeout after waiting for 1 
ms. at 
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:78)
 at 
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
 at 
kafka.api.PlaintextProducerSendTest.verifySendSuccess$1(PlaintextProducerSendTest.scala:148)
 at 
kafka.api.PlaintextProducerSendTest.testNonBlockingProducer(PlaintextProducerSendTest.scala:172){quote}
{quote}
h3. Standard Output
[2020-02-06 03:35:27,912] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:35:50,812] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:35:51,015] ERROR [ReplicaManager broker=0] Error processing append operation on partition topic-0 (kafka.server.ReplicaManager:76) org.apache.kafka.common.errors.InvalidTimestampException: One or more records have been rejected due to invalid timestamp
[2020-02-06 03:35:51,027] ERROR [ReplicaManager broker=0] Error processing append operation on partition topic-0 (kafka.server.ReplicaManager:76) org.apache.kafka.common.errors.InvalidTimestampException: One or more records have been rejected due to invalid timestamp
[2020-02-06 03:35:53,127] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:35:58,617] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:01,843] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:05,111] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:08,383] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:08,383] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:12,582] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:12,582] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:15,902] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-02-06 03:36:19,111] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-pa

[jira] [Reopened] (KAFKA-8940) Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance

2020-02-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-8940:


Reopening. Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4521/testReport/junit/org.apache.kafka.streams.integration/SmokeTestDriverIntegrationTest/shouldWorkWithRebalance/]
{quote}java.lang.AssertionError: verifying tagg fail: key=1000 
tagg=[ConsumerRecord(topic = tagg, partition = 0, leaderEpoch = 0, offset = 57, 
CreateTime = 1580964384634, serialized key size = 4, serialized value size = 8, 
headers = RecordHeaders(headers = [], isReadOnly = false), key = 1000, value = 
9)] expected=10 taggEvents: [ConsumerRecord(topic = tagg, partition = 0, 
leaderEpoch = 0, offset = 57, CreateTime = 1580964384634, serialized key size = 
4, serialized value size = 8, headers = RecordHeaders(headers = [], isReadOnly 
= false), key = 1000, value = 9)] verifying suppressed min-suppressed verifying 
min-suppressed with 10 keys verifying suppressed sws-suppressed verifying min 
with 10 keys verifying max with 10 keys verifying dif with 10 keys verifying 
sum with 10 keys sum fail: key=9-1008 actual=507814 expected=508500{quote}

> Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance
> -
>
> Key: KAFKA-8940
> URL: https://issues.apache.org/jira/browse/KAFKA-8940
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.5.0
>
>
> I lost the screen shot unfortunately... it reports the set of expected 
> records does not match the received records.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-545 support automated consumer offset sync across clusters in MM 2.0

2020-02-06 Thread Edoardo Comar
+1 (non-binding)
Thanks for the KIP!

On Tue, 14 Jan 2020 at 13:57, Navinder Brar 
wrote:

> +1 (non-binding)
> Navinder
> On Tuesday, 14 January, 2020, 07:24:02 pm IST, Ryanne Dolan <
> ryannedo...@gmail.com> wrote:
>
>  Bump. We've got 4 non-binding and one binding vote.
>
> Ryanne
>
> On Fri, Dec 13, 2019, 1:44 AM Tom Bentley  wrote:
>
> > +1 (non-binding)
> >
> > On Thu, Dec 12, 2019 at 6:33 PM Andrew Schofield <
> > andrew_schofi...@live.com>
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > On 12/12/2019, 14:20, "Mickael Maison" 
> > wrote:
> > >
> > >+1 (binding)
> > >Thanks for the KIP!
> > >
> > >On Thu, Dec 5, 2019 at 12:56 AM Ryanne Dolan
> > > wrote:
> > >>
> > >> Bump. We've got 2 non-binding votes so far.
> > >>
> > >> On Wed, Nov 13, 2019 at 3:32 PM Ning Zhang <
> ning2008w...@gmail.com
> > >
> > > wrote:
> > >>
> > >> > My current plan is to implement this in "MirrorCheckpointTask"
> > >> >
> > >> > On 2019/11/02 03:30:11, Xu Jianhai 
> wrote:
> > >> > > I think this KIP will implement a task in SinkTask, right?
> > >> > >
> > >> > > On Sat, Nov 2, 2019 at 1:06 AM Ryanne Dolan <
> > > ryannedo...@gmail.com>
> > >> > wrote:
> > >> > >
> > >> > > > Hey y'all, Ning Zhang and I would like to start the vote for
> > > the
> > >> > following
> > >> > > > small KIP:
> > >> > > >
> > >> > > >
> > >> > > >
> > >> >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
> > >> > > >
> > >> > > > This is an elegant way to automatically write consumer group
> > > offsets to
> > >> > > > downstream clusters without breaking existing use cases.
> > > Currently, we
> > >> > rely
> > >> > > > on external tooling based on RemoteClusterUtils and
> > >> > kafka-consumer-groups
> > >> > > > command to write offsets. This KIP bakes this functionality
> > > into MM2
> > >> > > > itself, reducing the effort required to failover/failback
> > > workloads
> > >> > between
> > >> > > > clusters.
> > >> > > >
> > >> > > > Thanks for the votes!
> > >> > > >
> > >> > > > Ryanne
> > >> > > >
> > >> > >
> > >> >
> > >
> > >
> > >
> >
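
For context on the tooling the KIP replaces: today offsets are translated out-of-band from MM2 checkpoints and committed by hand. The translation idea can be sketched in plain Java as below. This is a hedged illustration of the checkpoint concept only; the class and method names are hypothetical and are not MM2's RemoteClusterUtils API.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch (not MM2's real API) of checkpoint-based offset translation:
// MM2 periodically emits checkpoints pairing an upstream offset with the matching
// downstream offset; to fail a group over, take the checkpoint at or below the
// group's committed upstream offset and use its downstream offset.
public class OffsetTranslationSketch {
    public static long translate(TreeMap<Long, Long> checkpoints, long committedUpstream) {
        Map.Entry<Long, Long> e = checkpoints.floorEntry(committedUpstream);
        if (e == null) return 0L; // no checkpoint yet: start from the beginning downstream
        return e.getValue();
    }

    public static void main(String[] args) {
        TreeMap<Long, Long> cp = new TreeMap<>();
        cp.put(100L, 40L);  // upstream offset 100 corresponds to downstream offset 40
        cp.put(200L, 140L);
        System.out.println(translate(cp, 150L)); // prints 40
    }
}
```

KIP-545 proposes doing this bookkeeping inside MM2 itself, so the committed group offsets appear downstream without any such external step.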


Re: [VOTE] KIP-518: Allow listing consumer groups per state

2020-02-06 Thread Mickael Maison
Hi Manikumar,

I believe I've answered David's comments in the DISCUSS thread.
Thanks

On Wed, Jan 15, 2020 at 10:15 AM Manikumar  wrote:
>
> Hi Mickael,
>
> Thanks for the KIP.  Can you respond to the comments from David on discuss
> thread?
>
> Thanks,


Re: [VOTE] KIP-409: Allow creating under-replicated topics and partitions

2020-02-06 Thread Mickael Maison
I have not seen any new feedback nor votes.
Bumping this thread again

On Mon, Jan 27, 2020 at 3:55 PM Mickael Maison  wrote:
>
> Hi,
>
> We are now at 4 non-binding votes but still no binding votes.
> I have not seen any outstanding questions in the DISCUSS thread. If
> you have any feedback, please let me know.
>
> Thanks
>
>
> On Thu, Jan 16, 2020 at 2:03 PM M. Manna  wrote:
> >
> > MIckael,
> >
> >
> >
> > On Thu, 16 Jan 2020 at 14:01, Mickael Maison 
> > wrote:
> >
> > > Hi Manna,
> > >
> > > In your example, the topic 'dummy' is not under-replicated. It just
> > > has 1 replica. An under-replicated topic is one with fewer ISRs than
> > > replicas.
> > >
> > > Having under-replicated topics is relatively common in a Kafka
> > > cluster; it happens every time a broker is down. However, Kafka does
> > > not permit it at topic creation. Currently, creation requires at
> > > least as many brokers as the replication factor. This KIP addresses
> > > that limitation.
> > >
> > > Regarding your 2nd point: when rack awareness is enabled, Kafka tries
> > > to distribute partitions across racks. When all brokers in a rack are
> > > down (ie: a zone is offline), you can end up with partitions that are
> > > not well distributed even with rack awareness. There is currently no
> > > easy way to track such partitions, so I decided not to address this
> > > issue in this KIP.
> > >
> > > I hope that answers your questions.
> > >
> >
> >  It does and I appreciate you taking time and explaining this.
> >
> >  +1 (binding) if I haven't already.
> >
> > >
> > >
> > >
> > > On Wed, Jan 15, 2020 at 4:10 PM Kamal Chandraprakash
> > >  wrote:
> > > >
> > > > +1 (non-binding). Thanks for the KIP!
> > > >
> > > > On Mon, Jan 13, 2020 at 1:58 PM M. Manna  wrote:
> > > >
> > > > > Hi Mikael,
> > > > >
> > > > > Apologies for last minute question, as I just caught up with it.
> > > Thanks for
> > > > > your work on the KIP.
> > > > >
> > > > > Just trying to get your thoughts on one thing (I might have
> > > misunderstood
> > > > > it) - currently it's possible (even though I am strongly against it) 
> > > > > to
> > > > > create Kafka topics which are under-replicated; despite all brokers
> > > being
> > > > > online. This is the output of an intentionally under-replicated topic
> > > > > "dummy" with p=6 and RF=1 (with a 3 node cluster)
> > > > >
> > > > >
> > > > > virtualadmin@kafka-broker-machine-1:/opt/kafka/bin$ ./kafka-topics.sh
> > > > > --create --topic dummy --partitions 6 --replication-factor 1
> > > > > --bootstrap-server localhost:9092
> > > > > virtualadmin@kafka-broker-machine-1:/opt/kafka/bin$ ./kafka-topics.sh
> > > > > --describe --topic dummy  --bootstrap-server localhost:9092
> > > > > Topic:dummy PartitionCount:6ReplicationFactor:1
> > > > >
> > > > >
> > > Configs:compression.type=gzip,min.insync.replicas=2,cleanup.policy=delete,segment.bytes=10485760,max.message.bytes=10642642,retention.bytes=20971520
> > > > > Topic: dummyPartition: 0Leader: 3   Replicas: 3
> > > > > Isr: 3
> > > > > Topic: dummyPartition: 1Leader: 1   Replicas: 1
> > > > > Isr: 1
> > > > > Topic: dummyPartition: 2Leader: 2   Replicas: 2
> > > > > Isr: 2
> > > > > Topic: dummyPartition: 3Leader: 3   Replicas: 3
> > > > > Isr: 3
> > > > > Topic: dummyPartition: 4Leader: 1   Replicas: 1
> > > > > Isr: 1
> > > > > Topic: dummyPartition: 5Leader: 2   Replicas: 2
> > > > > Isr: 2
> > > > >
> > > > >  This is with respect to the following statement on your KIP (i.e.
> > > > > under-replicated topic creation is also permitted when none is
> > > offline):
> > > > >
> > > > > *but note that this may already happen (without this KIP) when
> > > > > > topics/partitions are created while all brokers in a rack are 
> > > > > > offline
> > > > > (ie:
> > > > > > an availability zone is offline). Tracking topics/partitions not
> > > > > optimally
> > > > > > spread across all racks can be tackled in a follow up KIP.  *
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Did you mean to say that such under-replicated topics (including
> > > > > human-created ones) will be handled in a separate KIP?
> > > > >
> > > > > Regards,
> > > > >
> > > > >
> > > > > On Mon, 13 Jan 2020 at 10:15, Mickael Maison
> > > > > wrote:
> > > > >
> > > > > > Hi all.
> > > > > >
> > > > > > With 2.5.0 approaching, bumping this thread once more as feedback or
> > > > > > votes would be nice.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > On Wed, Dec 18, 2019 at 1:59 PM Tom Bentley 
> > > wrote:
> > > > > > >
> > > > > > > +1 non-binding. Thanks!
> > > > > > >
> > > > > > > On Wed, Dec 18, 2019 at 1:05 PM Sönke Liebau
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Hi Mickael,
> > > > > > > >
> > > > > > > > thanks for your response! That all makes perfect sense and I
> > > cannot
> > > > > > > > gi
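
The thread's definition of under-replication ("fewer ISRs than replicas") is easy to check mechanically. Below is a minimal sketch assuming a simplified partition model; the Partition type is a stand-in for illustration, not Kafka's Admin API classes.

```java
import java.util.List;
import java.util.Set;

// Sketch of the under-replication check discussed above: a partition is
// under-replicated when its in-sync replica set is smaller than its replica set.
public class UnderReplicationSketch {
    record Partition(int id, List<Integer> replicas, Set<Integer> isr) {
        boolean underReplicated() {
            return isr.size() < replicas.size();
        }
    }

    public static void main(String[] args) {
        // RF=3 but only two replicas in sync -> under-replicated
        Partition p0 = new Partition(0, List.of(1, 2, 3), Set.of(1, 2));
        // RF=1 with its single replica in sync -> NOT under-replicated,
        // matching Mickael's point about the "dummy" topic in the thread
        Partition p1 = new Partition(1, List.of(3), Set.of(3));
        System.out.println(p0.underReplicated()); // true
        System.out.println(p1.underReplicated()); // false
    }
}
```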

[jira] [Created] (KAFKA-9515) Upgrade ZooKeeper to 3.5.7

2020-02-06 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-9515:
--

 Summary: Upgrade ZooKeeper to 3.5.7
 Key: KAFKA-9515
 URL: https://issues.apache.org/jira/browse/KAFKA-9515
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
 Fix For: 2.5.0, 2.4.1


There are some critical fixes in ZK 3.5.7 and the first RC has been posted.





Re: Mistake in official documentation ?

2020-02-06 Thread Fares Oueslati
Thanks for your quick feedback.

Here is a link to the hosted screenshot https://we.tl/t-TDiRc7jOmV

Regards,
Fares



On Thu, Feb 6, 2020 at 5:16 PM Adam Bellemare 
wrote:

> Screenshot didn't arrive for me - you may need to host it in an image site
> (ie: imgur), not sure if the mailing list will allow it to be attached.
>
> It could very well be a mistake. Highlight it and send it to
> dev@kafka.apache.org and if it is, we'll make a ticket and address it.
>
> On Thu, Feb 6, 2020 at 10:53 AM Fares Oueslati 
> wrote:
>
> > Hello,
> >
> > While going through the official docs
> > https://kafka.apache.org/documentation/#messageformat
> >
> > If I'm not wrong, I believe there is a mismatch between description of a
> > segment and the diagram illustrating the concept.
> >
> > I pointed out the issue in the attached screenshot.
> >
> > Didn't really know where to ask about this, so sorry if it's not the
> > appropriate place.
> >
> > Regards,
> > Fares
> >
>


Re: Mistake in official documentation ?

2020-02-06 Thread Adam Bellemare
Screenshot didn't arrive for me - you may need to host it in an image site
(ie: imgur), not sure if the mailing list will allow it to be attached.

It could very well be a mistake. Highlight it and send it to
dev@kafka.apache.org and if it is, we'll make a ticket and address it.

On Thu, Feb 6, 2020 at 10:53 AM Fares Oueslati 
wrote:

> Hello,
>
> While going through the official docs
> https://kafka.apache.org/documentation/#messageformat
>
> If I'm not wrong, I believe there is a mismatch between description of a
> segment and the diagram illustrating the concept.
>
> I pointed out the issue in the attached screenshot.
>
> Didn't really know where to ask about this, so sorry if it's not the
> appropriate place.
>
> Regards,
> Fares
>


Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-06 Thread Gwen Shapira
INFO is the default log level, and while it looks less "alarming" than WARN, 
users will still see it and, in my experience, will worry that something is 
wrong anyway. Or, if INFO isn't the default, users won't see it, so it is no 
different from DEBUG and we are left with no way of warning users that they 
misconfigured something.

The point is that "known configs" exist in Kafka as a validation step. It is 
there to protect users. So anything that makes the concerns about unknown 
configs invisible to users makes the validation step useless, and we may as 
well remove it. I'm against that - I think users should be made aware of 
misconfigs as much as possible - especially since if you misspell "retention", 
you will lose data.

If we look away from the symptom and go back to the actual cause:

I think Kafka had a way (and maybe it still does) for 3rd party developers who 
create client plugins (mostly interceptors) to make their configs "known". 3rd 
party developers should be responsible for the good experience of their users.  
Now it is possible that you'll pick a 3rd party library that didn't do it and 
have a worse user experience, but I am not sure it is the job of Apache Kafka 
to protect users from their choice of libraries (and as long as those libraries 
are OSS, users can fix them). Especially not at the expense of someone who 
doesn't use 3rd party libs.

Gwen

Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog

On Thu, Feb 06, 2020 at 2:06 AM, Artur Burtsev < artj...@gmail.com > wrote:

> 
> 
> 
> Hi John,
> 
> 
> 
> In our case it won't help, since we are running an instance per partition and
> even with a summary only we get 32 warnings per rollout.
> 
> 
> 
> Hi Gwen,
> 
> 
> 
> Thanks for your reply. I understand and share your concern; I also
> mentioned it earlier in the thread. Do you think it will work if we change
> DEBUG to INFO?
> 
> 
> 
> Thanks,
> Artur
> 
> 
> 
> On Thu, Feb 6, 2020 at 4:21 AM Gwen Shapira < g...@confluent.io > wrote:
> 
> 
>> 
>> 
>> Sorry for the late response. The reason that unused configs are logged at
>> WARN is that if you misspell a config, it will not apply. In some cases
>> (default retention) you won't know until it's too late. We wanted to warn
>> admins about possible misconfigurations.
>> 
>> 
>> 
>> In the context of a company supporting Kafka - customers run logs at INFO
>> level normally, so if we suspect a misconfiguration, we don't want to ask
>> the customer to change level to DEBUG and bounce the broker. It is time
>> consuming and can be risky.
>> 
>> 
>> 
>> 
>> 
>> 
>> On Mon, Jan 06, 2020 at 4:21 AM, Stanislav Kozlovski < stanis...@confluent.io > wrote:
>> 
>> 
>>> 
>>> 
>>> Hey Artur,
>>> 
>>> 
>>> 
>>> Perhaps changing the log level to DEBUG is the simplest approach.
>>> 
>>> 
>>> 
>>> I wonder if other people know what the motivation behind the WARN log was?
>>> I'm struggling to think of a scenario where I'd like to see unused
>>> values printed in anything above DEBUG.
>>> 
>>> 
>>> 
>>> Best,
>>> Stanislav
>>> 
>>> 
>>> 
>>> On Mon, Dec 30, 2019 at 12:52 PM Artur Burtsev < artj...@gmail.com > wrote:
>>> 
>>> 
 
 
 Hi,
 
 
 
 Indeed changing the log level for the whole AbstractConfig is not an
 option, because logAll is extremely useful.
 
 
 
 Grouping warnings into 1 (with the count of unused only) will not be a
 good option for us either. It will still be pretty noisy. Imagine we have
 32 partitions and scaled up the application to 32 instances then we still
 have 32 warnings per application (instead of 96 now) while we would like
 to have 0 warnings because we are perfectly aware of using
 schema.registry.url and it's totally fine, and we don't have to be warned
 every time we start the application. Now imagine we use more than one
 consumer per application, then it will add another multiplication factor
 to these grouped warnings and we still have a lot of those. So I would say
 grouping doesn't help much.
 
 
 
 I think adding extra logger like
 "org.apache.kafka.clients.producer.ProducerConfig.unused" could be another
 good option. That would leave the existing interface untouched and give
 everyone an option to mute irrelevant warnings.
 
 
 
 To summarize, I still can see 3 options with its pros and cons discussed
 in the thread:
 1) extra config w
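
The used/unused bookkeeping this thread keeps circling back to can be modeled in a few lines. The class below is a simplified stand-in, not Kafka's actual AbstractConfig: every key that is never read through get() ends up in unused(), which is what gets logged at WARN today.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in (not Kafka's real AbstractConfig) showing where the
// "unused config" warnings come from: any key never read via get() is unused.
public class UnusedConfigSketch {
    private final Map<String, String> originals;
    private final Set<String> used = new HashSet<>();

    UnusedConfigSketch(Map<String, String> originals) {
        this.originals = new HashMap<>(originals);
    }

    String get(String key) {
        used.add(key);
        return originals.get(key);
    }

    Set<String> unused() {
        Set<String> u = new HashSet<>(originals.keySet());
        u.removeAll(used);
        return u;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");
        // consumed by a third-party serde, never by the client itself,
        // so the client reports it as unused and warns about it
        props.put("schema.registry.url", "http://localhost:8081");
        UnusedConfigSketch config = new UnusedConfigSketch(props);
        config.get("bootstrap.servers");
        System.out.println(config.unused()); // prints [schema.registry.url]
    }
}
```

This is why keys destined for plugins (interceptors, serdes) look "unused" to the client config itself, which is the false-positive the thread's three options try to silence.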

Mistake in official documentation ?

2020-02-06 Thread Fares Oueslati
Hello,

While going through the official docs
https://kafka.apache.org/documentation/#messageformat

If I'm not wrong, I believe there is a mismatch between description of a
segment and the diagram illustrating the concept.

I pointed out the issue in the attached screenshot.

Didn't really know where to ask about this, so sorry if it's not the
appropriate place.

Regards,
Fares


Build failed in Jenkins: kafka-trunk-jdk8 #4212

2020-02-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8164: Add support for retrying failed (#8019)

[github] KAFKA-9447: Add new customized EOS model example (#8031)


--
[...truncated 2.38 MB...]
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldAddAvgAndMinAndMaxMetricsToSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldAddAvgAndMinAndMaxMetricsToSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldReturnMetricsVersionCurrent STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldReturnMetricsVersionCurrent PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldReturnMetricsVersionFrom100To23 STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldReturnMetricsVersionFrom100To23 PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldMeasureLatency STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldMeasureLatency PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldNotMeasureLatencyDueToRecordingLevel STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldNotMeasureLatencyDueToRecordingLevel PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldNotMeasureLatencyBecauseSensorHasNoMetrics STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldNotMeasureLatencyBecauseSensorHasNoMetrics PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingTaskLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingTaskLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetNewStoreLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetNewStoreLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingStoreLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingStoreLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetNewNodeLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetNewNodeLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingNodeLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingNodeLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetNewCacheLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetNewCacheLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingCacheLevelSensor STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldGetExistingCacheLevelSensor PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldAddClientLevelImmutableMetric STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldAddClientLevelImmutableMetric PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldAddClientLevelMutableMetric STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldAddClientLevelMutableMetric PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldProvideCorrectStrings STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldProvideCorrectStrings PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldRemoveClientLevelMetrics STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldRemoveClientLevelMetrics PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldRemoveThreadLevelSensors STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
shouldRemoveThreadLevelSensors PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testNullMetrics STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testNullMetrics PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testRemoveNullSensor STARTED

org.apache.kafka.streams

Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-02-06 Thread Navinder Brar
Hi,

While implementing KIP-562, we have decided to rename StoreQueryParams -> 
StoreQueryParameters. I have updated the PR and the Confluence page. Please 
share any feedback on it.

Thanks & Regards,
Navinder Pal Singh Brar 

On Friday, 24 January, 2020, 08:45:15 am IST, Navinder Brar 
 wrote:  
 
 Hi John,

Thanks for the responses. I will make the below changes as I had suggested 
earlier, and then close the vote in a few hours.

includeStaleStores -> staleStores
withIncludeStaleStores() > enableStaleStores()
includeStaleStores() -> staleStoresEnabled()

Thanks,
Navinder

Sent from Yahoo Mail for iPhone


On Friday, January 24, 2020, 5:36 AM, John Roesler  wrote:

Hi Bruno,

Thanks for your question; it's a very reasonable response to 
what I said before.

I didn't mean "field" as in an instance variable, just as in a specific
property or attribute. It's hard to talk about because all the words
for this abstract concept are also words that people commonly use
for instance variables.

The principle is that these classes should be simple data containers.
It's not so much that the methods match the field name, or that the
field name matches the methods, but that all three bear a simple
and direct relationship to each other. Or maybe I should say that
the getters, setters, and fields are all directly named after a property.

The point is that we should make sure we're not "playing games" in
these classes but simply setting a property and offering a transparent
way to get exactly what you just set.

I actually do think that the instance variable itself should have the
same name as the "field" or "property" that the getters and setters
are named for. This is not a violation of encapsulation because those 
instance variables are required to be private. 

I guess you can think of this rule as more of a style guide than a 
grammar, but whatever. As a maintainer, I think we should discourage 
these particular classes from having instance variables named 
differently from their methods. Otherwise, it's just silly. Either 
"includeStaleStores" or "staleStoresEnabled" is a fine name, but not 
both. There's no reason at all to name all the accessors one of them 
and the instance variable they access the other.

Thanks,
-John


On Thu, Jan 23, 2020, at 17:27, Bruno Cadonna wrote:
> Hi John,
> 
> One question: Why must the field name be involved in the naming? It
> somehow contradicts encapsulation. Field name `includeStaleStores` (or
> `staleStoresEnabled`) sounds perfectly fine to me. IMO, we should
> decouple the parameter name from the actual field name.
> 
> Bruno
> 
> On Thu, Jan 23, 2020 at 3:02 PM John Roesler  wrote:
> >
> > Hi all,
> >
> > Thanks for the discussion!
> >
> > The basic idea I used in the original draft of the grammar was to avoid
> > "fancy code" and just write "normal java". That's why I favored java bean
> > spec over Kafka code traditions.
> >
> > According to the bean spec, setters always start with "set" and getters
> > always start with "get" (or "is" for booleans). This often yields absurd
> > or awkward readability. On the other hand, the "kafka" idioms
> > seems to descend from Scala idiom of naming getters and setters
> > exactly the same as the field they get and set. This plays to a language
> > feature of Scala (getter referential transparency) that is not present
> > in Java. My feeling is that we probably keep this idiom around
> > precisely to avoid the absurd phrasing that the bean spec leads to.
> >
> > On the other hand, adopting the Kafka/Scala idiom brings in an
> > additional burden I was trying to avoid: you have to pick a good
> > name. Basically I was trying to avoid exactly this conversation ;)
> > i.e., "X sounds weird, how about Y", "well, Y also sounds weird,
> > what about Z", "Z sounds good, but then the setter sounds weird",
> > etc.
> >
> > Maybe avoiding discussion was too ambitious, and I can't deny that
> > bean spec names probably result in no one being happy, so I'm on
> > board with the current proposal:
> >
> > setters:
> > set{FieldName}(value)
> > {enable/disable}{FieldName}()
> >
> > getters:
> > {fieldName}()
> > {fieldName}{Enabled/Disabled}()
> >
> > Probably, we'll find cases that are silly under that formula too,
> > but we'll cross that bridge when we get to it.
> >
> > I'll update the grammar when I get the chance.
> >
> > Thanks!
> > -John
> >
> > On Thu, Jan 23, 2020, at 12:37, Navinder Brar wrote:
> > > Thanks Bruno, for the comments.
> > > 1) Fixed.
> > >
> > > 2) I would be okay to call the variable staleStores. Since anyways we
> > > are not using constructor, so the only way the variable is exposed
> > > outside is the getter and the optional builder method. With this
> > > variable name, we can name the builder method as "enableStaleStores"
> > > and I feel staleStoresEnabled() is more readable for getter function.
> > > So, we can also change the grammar for getters for boolean variables to
> > > {FlagName}enabled / {FlagName}disabled. WD
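
The naming grammar the thread converged on (set{FieldName} / enable{FieldName} for setters, {fieldName} / {fieldName}Enabled for getters) can be illustrated with the staleStores flag discussed above. This is a sketch of the convention only, not the real org.apache.kafka.streams StoreQueryParameters class.

```java
// Sketch of the getter/setter grammar from this thread, applied to the
// staleStores flag: fluent boolean setter enable{FieldName}(), boolean
// getter {fieldName}Enabled(), instance variable named after the property.
public class StoreQueryParamsSketch {
    private boolean staleStores = false; // instance variable named after the property

    // boolean setter: enable{FieldName}(), fluent for chaining
    public StoreQueryParamsSketch enableStaleStores() {
        this.staleStores = true;
        return this;
    }

    // boolean getter: {fieldName}Enabled()
    public boolean staleStoresEnabled() {
        return staleStores;
    }

    public static void main(String[] args) {
        StoreQueryParamsSketch params = new StoreQueryParamsSketch();
        System.out.println(params.staleStoresEnabled()); // false
        params.enableStaleStores();
        System.out.println(params.staleStoresEnabled()); // true
    }
}
```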

Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-06 Thread Artur Burtsev
Hi John,

In our case it won't help, since we are running an instance per partition
and even with a summary only we get 32 warnings per rollout.

Hi Gwen,

Thanks for your reply. I understand and share your concern; I also
mentioned it earlier in the thread. Do you think it will work if we
change DEBUG to INFO?

Thanks,
Artur

On Thu, Feb 6, 2020 at 4:21 AM Gwen Shapira  wrote:
>
> Sorry for the late response. The reason that unused configs are logged at 
> WARN is that if you misspell a config, it will not apply. In some cases 
> (default retention) you won't know until it's too late. We wanted to warn 
> admins about possible misconfigurations.
>
> In the context of a company supporting Kafka - customers run logs at INFO 
> level normally, so if we suspect a misconfiguration, we don't want to ask the 
> customer to change level to DEBUG and bounce the broker. It is time consuming 
> and can be risky.
>
>
> On Mon, Jan 06, 2020 at 4:21 AM, Stanislav Kozlovski < stanis...@confluent.io 
> > wrote:
>
> >
> >
> >
> > Hey Artur,
> >
> >
> >
> > Perhaps changing the log level to DEBUG is the simplest approach.
> >
> >
> >
> > I wonder if other people know what the motivation behind the WARN log was?
> > I'm struggling to think of a scenario where I'd like to see unused
> > values printed in anything above DEBUG.
> >
> >
> >
> > Best,
> > Stanislav
> >
> >
> >
> > On Mon, Dec 30, 2019 at 12:52 PM Artur Burtsev < artj...@gmail.com > wrote:
> >
> >
> >>
> >>
> >> Hi,
> >>
> >>
> >>
> >> Indeed changing the log level for the whole AbstractConfig is not an
> >> option, because logAll is extremely useful.
> >>
> >>
> >>
> >> Grouping warnings into one (with only the count of unused configs) will not
> >> be a good option for us either; it will still be pretty noisy. Imagine we
> >> have 32 partitions and scale the application up to 32 instances: then we
> >> still have 32 warnings per application (instead of 96 now), while we would
> >> like to have 0 warnings, because we are perfectly aware of using
> >> schema.registry.url and it's totally fine, and we don't need to be warned
> >> every time we start the application. Now imagine we use more than one
> >> consumer per application; that adds another multiplication factor to these
> >> grouped warnings, and we still have a lot of them. So I would say grouping
> >> doesn't help much.
> >>
> >>
> >>
> >> I think adding an extra logger like
> >> "org.apache.kafka.clients.producer.ProducerConfig.unused" could be another
> >> good option. That would leave the existing interface untouched and give
> >> everyone an option to mute irrelevant warnings.
> >>
> >>
> >>
> >> To summarize, I can still see 3 options, with their pros and cons discussed
> >> in the thread:
> >> 1) an extra config with an interface to handle unused configs
> >> 2) change the unused warning from WARN to DEBUG
> >> 3) add an extra logger for unused configs
> >>
> >>
> >>
> >> Please let me know what you think.
> >>
> >>
> >>
> >> Thanks,
> >> Artur
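
Option (3) above relies on the fact that logger names form a dot-separated hierarchy, so a dedicated `...ProducerConfig.unused` child logger could be silenced independently of the main config logger. A minimal sketch of that idea follows; it uses `java.util.logging` purely to illustrate the hierarchy (Kafka itself uses SLF4J), and the `.unused` suffix is the hypothetical name proposed in this thread, not an existing Kafka logger:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class UnusedLoggerSketch {
    public static void main(String[] args) {
        // The parent config logger stays at INFO, so the (useful) logAll() dump is kept.
        Logger config = Logger.getLogger("org.apache.kafka.clients.producer.ProducerConfig");
        config.setLevel(Level.INFO);

        // Hypothetical child logger for unused-config warnings; operators who
        // knowingly pass extra keys (e.g. schema.registry.url) can turn it off
        // without touching the parent logger's level.
        Logger unused = Logger.getLogger("org.apache.kafka.clients.producer.ProducerConfig.unused");
        unused.setLevel(Level.OFF);

        System.out.println(config.isLoggable(Level.INFO));
        System.out.println(unused.isLoggable(Level.WARNING));
    }
}
```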
> >>
> >>
> >>
> >> On Mon, Dec 30, 2019 at 11:07 AM Stanislav Kozlovski
> >> < stanislav@ confluent. io ( stanis...@confluent.io ) > wrote:
> >>
> >>
> >>>
> >>>
> >>> Hi all,
> >>>
> >>>
> >>>
> >>> Would printing all the unused configurations in one line, versus N lines,
> >>> be more helpful? I know it would greatly reduce the verbosity in log
> >>> visualization tools like Kibana while still allowing us to see all the
> >>> relevant information without the need for an explicit action (e.g. changing
> >>> the log level).
> >>>
> >>>
> >>>
> >>> Best,
> >>> Stanislav
> >>>
> >>>
> >>>
> >>> On Sat, Dec 28, 2019 at 3:13 PM John Roesler < vvcephei@ apache. org (
> >>> vvcep...@apache.org ) > wrote:
> >>>
> 
> 
>  Hi Artur,
> 
> 
> 
>  That’s a good point.
> 
> 
> 
>  One thing you can do is log a summary at WARN level, like “27
>  configurations were ignored. Ignored configurations are logged at DEBUG
>  level.”
> 
> 
> 
>  I looked into the code a little, and these log messages are generated in
>  AbstractConfig (logAll and logUnused). They both use the logger associated
>  with the relevant config class (StreamsConfig, ProducerConfig, etc.). The
>  list of all configs is logged at INFO level, and the list of unused configs
>  is logged at WARN level. This means that it's not possible to silence

[jira] [Created] (KAFKA-9514) The protocol generator generated useless condition when a field is made nullable and flexible version is used

2020-02-06 Thread David Jacot (Jira)
David Jacot created KAFKA-9514:
--

 Summary: The protocol generator generated useless condition when a 
field is made nullable and flexible version is used
 Key: KAFKA-9514
 URL: https://issues.apache.org/jira/browse/KAFKA-9514
 Project: Kafka
  Issue Type: Bug
Reporter: David Jacot
Assignee: David Jacot


The protocol generator generates useless conditions when a field of type string 
is made nullable after the request has been converted to using optional fields.

As an example, we have made the field `ProtocolName` nullable in the 
`JoinGroupResponse`. The `JoinGroupResponse` has supported optional fields since 
version 6, and the field has been nullable since version 7. Under these conditions, 
the generator generates the following code:

{code:java}
if (protocolName == null) {
    if (_version >= 7) {
        if (_version >= 6) {
            _writable.writeUnsignedVarint(0);
        } else {
            _writable.writeShort((short) -1);
        }
    } else {
        throw new NullPointerException();
    }
}
{code}

spotbugs raises a `UC_USELESS_CONDITION` warning because `_version >= 6` is always 
true when `_version >= 7` holds.

We could optimise the generator to handle this.
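
A sketch of the collapsed branch the generator could emit once it notices that the nullable-since version (7) already implies the flexible-since version (6). The helper `encodeNullMarker` and its string return values are illustrative stand-ins for the real `Writable` calls, not the generator's actual output:

```java
public class CollapsedConditionSketch {
    // Since version >= 7 implies version >= 6, the flexible (varint) path can
    // be taken unconditionally and the short(-1) branch dropped as unreachable.
    static String encodeNullMarker(short version) {
        if (version >= 7) {
            // Flexible encoding: a null string is written as varint length 0.
            return "varint:0";
        } else {
            throw new NullPointerException("protocolName must not be null before version 7");
        }
    }

    public static void main(String[] args) {
        System.out.println(encodeNullMarker((short) 7));
        try {
            encodeNullMarker((short) 5);
        } catch (NullPointerException e) {
            System.out.println("NPE for version 5");
        }
    }
}
```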
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)