Re: [VOTE] 2.3.0 RC2

2019-06-16 Thread Vahid Hashemian
+1 (non-binding)

I also verified signatures, built from source, and tested the Quickstart
successfully on the built binary.

BTW, I don't see a link to documentation for 2.3. Is there a reason?

Thanks,
--Vahid

On Sat, Jun 15, 2019 at 6:38 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Verified signatures, built from sources, ran quickstart on binary and
> checked out the passing jenkins build on the branch.
>
> Gwen
>
>
> On Thu, Jun 13, 2019 at 11:58 AM Colin McCabe  wrote:
> >
> > Hi all,
> >
> > Good news: I have run a junit test build for RC2, and it passed.  Check
> out https://builds.apache.org/job/kafka-2.3-jdk8/51/
> >
> > Also, the vote will go until Saturday, June 15th (sorry for the typo
> earlier in the vote end time).
> >
> > best,
> > Colin
> >
> >
> > On Wed, Jun 12, 2019, at 15:55, Colin McCabe wrote:
> > > Hi all,
> > >
> > > We discovered some problems with the first release candidate (RC1) of
> > > 2.3.0.  Specifically, KAFKA-8484 and KAFKA-8500.  I have created a new
> > > release candidate that includes fixes for these issues.
> > >
> > > Check out the release notes for the 2.3.0 release here:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc2/RELEASE_NOTES.html
> > >
> > > The vote will go until Friday, June 7th, or until we create another RC.
> > >
> > > * Kafka's KEYS file containing PGP keys we use to sign the release can
> > > be found here:
> > > https://kafka.apache.org/KEYS
> > >
> > > * The release artifacts to be voted upon (source and binary) are here:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc2/javadoc/
> > >
> > > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> > > https://github.com/apache/kafka/releases/tag/2.3.0-rc2
> > >
> > > best,
> > > Colin
> > >
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


-- 

Thanks!
--Vahid


Re: Can't do integration testing

2019-06-16 Thread Dulvin Witharane
Oh shoot. Thanks for pointing it out. Will check that way.

Thanks and best regards,
Dulvin

On Mon, Jun 17, 2019 at 3:13 AM Matthias J. Sax 
wrote:

> Did you run
>
> ./gradle integrationTest
>
> or
>
> ./gradlew integrationTest
>
> ?
>
> The latter would be correct. If you don't have the `gradlew` wrapper,
> just run
>
> $ gradle
>
> without any parameters and it will create it. See the README on GitHub
> for more details: https://github.com/apache/kafka
>
>
> -Matthias
>
>
> On 6/16/19 9:34 AM, Dulvin Witharane wrote:
> > Hi,
> >
> > When I try ./gradle integrationTest on a fresh trunk build, it fails. I
> tried this because I was trying to create a pull request, and that was also
> facing build issues.
> >
> > Is there anything I should look at before opening the pull request? I'm
> working on macOS.
> >
> > Regards,
> > Dulvin
> >
>
> --
Witharane, DRH
R & D Engineer
Synopsys Lanka (Pvt) Ltd.
Borella, Sri Lanka
0776746781

Sent from my iPhone


[jira] [Resolved] (KAFKA-8547) 2 __consumer_offsets partitions grow very big

2019-06-16 Thread Kamal Chandraprakash (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash resolved KAFKA-8547.
-
Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-8335

PR [https://github.com/apache/kafka/pull/6715]

> 2 __consumer_offsets partitions grow very big
> -
>
> Key: KAFKA-8547
> URL: https://issues.apache.org/jira/browse/KAFKA-8547
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.1.1
> Environment: Ubuntu 18.04, Kafka 2.1.12-2.1.1, running as systemd 
> service
>Reporter: Lerh Chuan Low
>Priority: Major
>
> It seems like the log cleaner doesn't clean old data of {{__consumer_offsets}} 
> under the default compact policy on that topic. It may eventually cause the 
> disk to fill up or the servers to run out of memory.
> We observed a few out-of-memory errors on our Kafka servers, and our theory 
> was that they were due to 2 overly large partitions in {{__consumer_offsets}}. 
> On further digging, it looks like these 2 large partitions have segments 
> dating back up to 3 months. Also, these old files collectively consumed most 
> of the data in those partitions (about 10G of the partition's 12G). 
> When we tried dumping those old segments, we see:
>  
> {code:java}
> 11:40 $ ./kafka-run-class.sh kafka.tools.DumpLogSegments --files 
> 161728257775.log --offsets-decoder --print-data-log --deep-iteration
>  Dumping 161728257775.log
>  Starting offset: 161728257775
>  offset: 161728257904 position: 61 CreateTime: 1553457816168 isvalid: true 
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 367038 
> producerEpoch: 3 sequence: -1 isTransactional: true headerKeys: [] 
> endTxnMarker: COMMIT coordinatorEpoch: 746
>  offset: 161728258098 position: 200 CreateTime: 1553457816230 isvalid: true 
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 366036 
> producerEpoch: 3 sequence: -1 isTransactional: true headerKeys: [] 
> endTxnMarker: COMMIT coordinatorEpoch: 761
>  ...{code}
> It looks like those old segments all contain transactional information. 
> (As a side note, we did take a while to figure out that for a segment with 
> the control bit set, the key really is {{endTxnMarker}} and the value is 
> {{coordinatorEpoch}}...otherwise, in a non-control batch dump, it would have 
> value and payload. We were wondering if seeing what those 2 partitions 
> contained in their keys might give us any clues.) Our current workaround is 
> based on this post: 
> https://issues.apache.org/jira/browse/KAFKA-3917?focusedCommentId=16816874&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16816874.
>  We set the cleanup policy to both compact,delete, and very quickly the 
> partition was down to below 2G. Not sure if this is something the log cleaner 
> should be able to handle normally? Interestingly, other partitions also 
> contain transactional information, so it's quite curious that these 2 specific 
> partitions could not be cleaned. 
> There's a related issue here: 
> https://issues.apache.org/jira/browse/KAFKA-3917; I just thought it was a 
> little bit outdated/dead, so I opened a new one. Please feel free to merge!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8547) 2 __consumer_offsets partitions grow very big

2019-06-16 Thread Lerh Chuan Low (JIRA)
Lerh Chuan Low created KAFKA-8547:
-

 Summary: 2 __consumer_offsets partitions grow very big
 Key: KAFKA-8547
 URL: https://issues.apache.org/jira/browse/KAFKA-8547
 Project: Kafka
  Issue Type: Bug
  Components: log cleaner
Affects Versions: 2.1.1
 Environment: Ubuntu 18.04, Kafka 2.1.12-2.1.1, running as systemd 
service
Reporter: Lerh Chuan Low


There's a related issue here: https://issues.apache.org/jira/browse/KAFKA-3917, 
but I thought it was a little bit outdated/dead. 

We observed a few out-of-memory errors on our Kafka servers, and our theory 
was that they were due to 2 overly large partitions in `__consumer_offsets`. 
On further digging, it looks like these 2 large partitions have segments 
dating back up to 3 months. Also, these old files collectively consumed most 
of the data in those partitions (about 10G of the partition's 12G). 

When we tried dumping those old segments, we see:

```
11:40 $ ./kafka-run-class.sh kafka.tools.DumpLogSegments --files 
161728257775.log --offsets-decoder --print-data-log --deep-iteration
Dumping 161728257775.log
Starting offset: 161728257775
offset: 161728257904 position: 61 CreateTime: 1553457816168 isvalid: true 
keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 367038 
producerEpoch: 3 sequence: -1 isTransactional: true headerKeys: [] 
endTxnMarker: COMMIT coordinatorEpoch: 746
offset: 161728258098 position: 200 CreateTime: 1553457816230 isvalid: true 
keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 366036 
producerEpoch: 3 sequence: -1 isTransactional: true headerKeys: [] 
endTxnMarker: COMMIT coordinatorEpoch: 761
...
```

It looks like those old segments all contain transactional information, and 
of the 2 partitions, one is for the control message COMMIT and the other for 
the control message ABORT. (As a side note, we did take a while to figure out 
that for a segment with the control bit set, the key really is `endTxnMarker` 
and the value is `coordinatorEpoch`...otherwise, in a non-control batch dump, 
it would have value and payload. We were wondering if seeing what those 2 
partitions contained in their keys might give us any clues.) Our current 
workaround is based on this post: 
https://issues.apache.org/jira/browse/KAFKA-3917?focusedCommentId=16816874&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16816874.
We set the cleanup policy to both compact,delete, and very quickly the 
partition was down to below 2G. Not sure if this is something the log cleaner 
should be able to handle normally?
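
For anyone wanting to apply the same workaround programmatically, here is a 
minimal sketch (mine, not from the original report) using the Java 
AdminClient's alterConfigs, which requires brokers on 0.11+; the bootstrap 
address is a placeholder:

```
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetCleanupPolicy {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            // "compact,delete" keeps compaction but also lets retention
            // delete old segments, which is what shrank the partition.
            Config config = new Config(Collections.singleton(
                new ConfigEntry("cleanup.policy", "compact,delete")));
            admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
        }
    }
}
```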



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8546) Call System#runFinalization to avoid memory leak caused by JDK-6293787

2019-06-16 Thread Badai Aqrandista (JIRA)
Badai Aqrandista created KAFKA-8546:
---

 Summary: Call System#runFinalization to avoid memory leak caused 
by JDK-6293787
 Key: KAFKA-8546
 URL: https://issues.apache.org/jira/browse/KAFKA-8546
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.1
Reporter: Badai Aqrandista


When a heavily used broker uses gzip compression on all topics, you can 
sometimes hit GC pauses greater than the zookeeper.session.timeout.ms of 
6000ms. This is caused by a memory leak due to JDK-6293787 
([https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6293787]), which in 
turn is caused by JDK-4797189 
([https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4797189]).

 

In summary, this is what happens:
 * The Inflater class contains a finalizer method.
 * Whenever a class with a finalizer method is instantiated, a Finalizer object 
is created. 
 * The GC finalizer thread is responsible for processing all Finalizer objects.
 * If the rate of Finalizer object creation exceeds the rate at which the GC 
finalizer thread can process them, the number of Finalizer objects grows 
continuously and eventually triggers a full GC (because they are stored in the 
Old Gen).
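
As an illustration of the pattern (a sketch of mine, not from this report): 
every Inflater that is not explicitly end()ed leaves a Finalizer entry behind, 
and the mitigation in the issue title is to trigger finalization explicitly:

{code:java}
import java.util.zip.Inflater;

public class InflaterFinalizerDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            Inflater inflater = new Inflater(true);
            // Without an explicit end(), native memory is reclaimed only when
            // the finalizer runs; under load, Finalizer objects can pile up
            // faster than the single finalizer thread drains them.
            inflater.end();
        }
        // The mitigation named in the issue title: run pending finalizers
        // now instead of waiting for a full GC.
        System.runFinalization();
    }
}
{code}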

 

The following stack trace shows what happens when a process is frozen doing a full GC:

 
{code:java}
kafka-request-handler-13  Runnable Thread ID: 79
  java.util.zip.Inflater.inflateBytes(long, byte[], int, int) Inflater.java
  java.util.zip.Inflater.inflate(byte[], int, int) Inflater.java:259
  java.util.zip.InflaterInputStream.read(byte[], int, int) 
InflaterInputStream.java:152
  java.util.zip.GZIPInputStream.read(byte[], int, int) GZIPInputStream.java:117
  java.io.BufferedInputStream.fill() BufferedInputStream.java:246
  java.io.BufferedInputStream.read() BufferedInputStream.java:265
  java.io.DataInputStream.readByte() DataInputStream.java:265
  org.apache.kafka.common.utils.ByteUtils.readVarint(DataInput) 
ByteUtils.java:168
  org.apache.kafka.common.record.DefaultRecord.readFrom(DataInput, long, long, 
int, Long) DefaultRecord.java:292
  org.apache.kafka.common.record.DefaultRecordBatch$1.readNext(long, long, int, 
Long) DefaultRecordBatch.java:264
  org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next() 
DefaultRecordBatch.java:563
  org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next() 
DefaultRecordBatch.java:532
  org.apache.kafka.common.record.DefaultRecordBatch.iterator() 
DefaultRecordBatch.java:327
  scala.collection.convert.Wrappers$JIterableWrapper.iterator() 
Wrappers.scala:54
  scala.collection.IterableLike$class.foreach(IterableLike, Function1) 
IterableLike.scala:72
 scala.collection.AbstractIterable.foreach(Function1) Iterable.scala:54
  
kafka.log.LogValidator$$anonfun$validateMessagesAndAssignOffsetsCompressed$1.apply(MutableRecordBatch)
 LogValidator.scala:267
  
kafka.log.LogValidator$$anonfun$validateMessagesAndAssignOffsetsCompressed$1.apply(Object)
 LogValidator.scala:259
  scala.collection.Iterator$class.foreach(Iterator, Function1) 
Iterator.scala:891
  scala.collection.AbstractIterator.foreach(Function1) Iterator.scala:1334
  scala.collection.IterableLike$class.foreach(IterableLike, Function1) 
IterableLike.scala:72
  scala.collection.AbstractIterable.foreach(Function1) Iterable.scala:54
  
kafka.log.LogValidator$.validateMessagesAndAssignOffsetsCompressed(MemoryRecords,
 LongRef, Time, long, CompressionCodec, CompressionCodec, boolean, byte, 
TimestampType, long, int, boolean) LogValidator.scala:259
  kafka.log.LogValidator$.validateMessagesAndAssignOffsets(MemoryRecords, 
LongRef, Time, long, CompressionCodec, CompressionCodec, boolean, byte, 
TimestampType, long, int, boolean) LogValidator.scala:70
  kafka.log.Log$$anonfun$append$2.liftedTree1$1(LogAppendInfo, ObjectRef, 
LongRef, long) Log.scala:771
  kafka.log.Log$$anonfun$append$2.apply() Log.scala:770
  kafka.log.Log$$anonfun$append$2.apply() Log.scala:752
  kafka.log.Log.maybeHandleIOException(Function0, Function0) Log.scala:1842
  kafka.log.Log.append(MemoryRecords, boolean, boolean, int) Log.scala:752
  kafka.log.Log.appendAsLeader(MemoryRecords, int, boolean) Log.scala:722
  kafka.cluster.Partition$$anonfun$13.apply() Partition.scala:660
  kafka.cluster.Partition$$anonfun$13.apply() Partition.scala:648
  kafka.utils.CoreUtils$.inLock(Lock, Function0) CoreUtils.scala:251
  kafka.utils.CoreUtils$.inReadLock(ReadWriteLock, Function0) 
CoreUtils.scala:257
  kafka.cluster.Partition.appendRecordsToLeader(MemoryRecords, boolean, int) 
Partition.scala:647
  kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(Tuple2) 
ReplicaManager.scala:745
  kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(Object) 
ReplicaManager.scala:733
  scala.collection.TraversableLike$$anonfun$map$1.apply(Object) 
TraversableLike.scala:234
  scala.collection.TraversableLike$$anonfun$map$1.apply(Object) 
TraversableLike.scala:234
  

Re: Request to be given permission in the WIKI to create KIPs

2019-06-16 Thread Cem ÖZEN
Thanks!

Regards,
Cem Ozen


On Sun, Jun 16, 2019 at 10:38 PM Matthias J. Sax 
wrote:

> Done.
>
> On 6/16/19 2:24 PM, Cem ÖZEN wrote:
> > WIKI ID is cemozen.
> >
> > Regards,
> > Cem Ozen
> >
> >
> > On Sun, Jun 16, 2019 at 10:17 PM Cem ÖZEN  wrote:
> >
> >> Hello,
> >>
> >> I am new to Kafka development and I am currently working on KAFKA-8455.
> >> Since it is an external-facing change, it needs a KIP, AFAIK.
> >>
> >> Can someone give me the necessary permission in the WIKI to create KIPs,
> >> please?
> >>
> >> Thank you in advance.
> >>
> >> Regards,
> >> Cem Ozen
> >>
> >
>
>


Re: Can't do integration testing

2019-06-16 Thread Matthias J. Sax
Did you run

./gradle integrationTest

or

./gradlew integrationTest

?

The latter would be correct. If you don't have the `gradlew` wrapper,
just run

$ gradle

without any parameters and it will create it. See the README on GitHub
for more details: https://github.com/apache/kafka


-Matthias


On 6/16/19 9:34 AM, Dulvin Witharane wrote:
> Hi,
> 
> When I try ./gradle integrationTest on a fresh trunk build, it fails. I tried 
> this because I was trying to create a pull request, and that was also facing 
> build issues.
> 
> Is there anything I should look at before opening the pull request? I'm 
> working on macOS.
> 
> Regards,
> Dulvin
> 



signature.asc
Description: OpenPGP digital signature


Re: Request to be given permission in the WIKI to create KIPs

2019-06-16 Thread Matthias J. Sax
Done.

On 6/16/19 2:24 PM, Cem ÖZEN wrote:
> WIKI ID is cemozen.
> 
> Regards,
> Cem Ozen
> 
> 
> On Sun, Jun 16, 2019 at 10:17 PM Cem ÖZEN  wrote:
> 
>> Hello,
>>
>> I am new to Kafka development and I am currently working on KAFKA-8455.
>> Since it is an external-facing change, it needs a KIP, AFAIK.
>>
>> Can someone give me the necessary permission in the WIKI to create KIPs,
>> please?
>>
>> Thank you in advance.
>>
>> Regards,
>> Cem Ozen
>>
> 



signature.asc
Description: OpenPGP digital signature


Re: Request to be given permission in the WIKI to create KIPs

2019-06-16 Thread Cem ÖZEN
WIKI ID is cemozen.

Regards,
Cem Ozen


On Sun, Jun 16, 2019 at 10:17 PM Cem ÖZEN  wrote:

> Hello,
>
> I am new to Kafka development and I am currently working on KAFKA-8455.
> Since it is an external-facing change, it needs a KIP, AFAIK.
>
> Can someone give me the necessary permission in the WIKI to create KIPs,
> please?
>
> Thank you in advance.
>
> Regards,
> Cem Ozen
>


Request to be given permission in the WIKI to create KIPs

2019-06-16 Thread Cem ÖZEN
Hello,

I am new to Kafka development and I am currently working on KAFKA-8455. Since 
it is an external-facing change, it needs a KIP, AFAIK.

Can someone give me the necessary permission in the WIKI to create KIPs, please?

Thank you in advance.

Regards,
Cem Ozen


Re: Posted a new article about Kafka Streams

2019-06-16 Thread Paul Whalen
I've only skimmed it so far, but great job!  The community is in serious
need of more examples of the Processor API; there really isn't that much
out there.

One thing I did notice: the iterator you get from kvStore.all() ought to be
closed to release resources when you're done with it.  This matters when
the underlying store is RocksDB, which, as I understand it, allocates
additional memory off heap to iterate.  I see this bug everywhere, after
writing it many times myself over the course of many months :).  It's too
bad the API can't be clearer, but I guess there's not a ton you can do
in Java.  People think about this kind of thing for DB calls, but when
you're using something that's basically a HashMap you really don't think of
it at all.
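
For what it's worth, here's a minimal sketch of the pattern (mine, not from
the article; the store name and types are made up). KeyValueIterator
implements Closeable, so try-with-resources takes care of it:

```
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class StoreScan {
    static long countEntries(KeyValueStore<String, Long> kvStore) {
        long count = 0;
        // try-with-resources closes the iterator even if the loop throws,
        // releasing the off-heap memory a RocksDB iterator holds.
        try (KeyValueIterator<String, Long> iter = kvStore.all()) {
            while (iter.hasNext()) {
                KeyValue<String, Long> entry = iter.next();
                count++;
            }
        }
        return count;
    }
}
```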

Side plug for KIP-401 since you're using the Processor API, it would be
interesting to hear on that discussion thread if you find it useful (
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756).
It seems like there's soft interest, but maybe not yet enough to push it
over the finish line.

Again, great job!

Paul

On Fri, Jun 14, 2019 at 10:33 AM Development  wrote:

> Bad link:
>
> https://medium.com/@daniyaryeralin/utilizing-kafka-streams-processor-api-and-implementing-custom-aggregator-6cb23d00eaa7
>
> > On Jun 14, 2019, at 11:07 AM, Development  wrote:
> >
> > Hello Kafka Dev community,
> >
> > I wrote an article on implementing a custom transformer using Processor
> API for Kafka Streams!
> >
> https://medium.com/@daniyaryeralin/utilizing-kafka-streams-processor-api-and-implementing-custom-aggregation-f6a4a6c376be
> <
> https://medium.com/@daniyaryeralin/utilizing-kafka-streams-processor-api-and-implementing-custom-aggregation-f6a4a6c376be
> >
> > Feel free to leave feedback and/or corrections if I wrote something
> silly :)
> >
> > Thank you!
> >
> > Best,
> > Daniyar Yeralin
>
>


Re: message.max.bytes

2019-06-16 Thread M. Manna
Can you see anything in the server logs about whether this data has been
purged? As far as I can remember, you are not supposed to just make it
smaller without risking issues later.

Let us know here what you can find in the logs. Also, you should think about
configuring request-level and server-level max sizes to avoid such issues.
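
For example (a sketch of mine, not from this thread; note the AdminClient
configs API needs brokers on 0.11+, newer than the 0.10.0.1 below), you can
check a topic's effective max.message.bytes like this:

```
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowMaxMessageBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                .all().get().get(topic);
            // Topic-level max.message.bytes overrides the broker-wide
            // message.max.bytes for this topic.
            System.out.println(config.get("max.message.bytes"));
        }
    }
}
```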

Thanks,

On Sun, 16 Jun 2019 at 11:39, 王双双  wrote:

> Hello everyone,
> I have recently encountered a problem with the parameter
> 'message.max.bytes'. Here are the details: there are 1000 messages that I
> had not consumed, each message is 7KB, and a log under the topic is 106MB.
> I altered the parameter 'message.max.bytes' from 4MB to 1MB, then I
> restarted the Kafka server. I checked the messages in each partition;
> unfortunately, the messages that had not been consumed had disappeared.
> Note: my Kafka server version is 0.10.0.1, and I created a topic with 3
> replicas and 6 partitions.
> Does anyone know where the disappeared messages went? Can I recover them?
> I would appreciate it if anyone replies to me soon.
>
>
> Best regards.
> Wang


message.max.bytes

2019-06-16 Thread 王双双
Hello everyone,
I have recently encountered a problem with the parameter 'message.max.bytes'. 
Here are the details: there are 1000 messages that I had not consumed, each 
message is 7KB, and a log under the topic is 106MB. I altered the parameter 
'message.max.bytes' from 4MB to 1MB, then I restarted the Kafka server. I 
checked the messages in each partition; unfortunately, the messages that had 
not been consumed had disappeared.
Note: my Kafka server version is 0.10.0.1, and I created a topic with 3 
replicas and 6 partitions. 
Does anyone know where the disappeared messages went? Can I recover them?
I would appreciate it if anyone replies to me soon.


Best regards.
Wang

Can't do integration testing

2019-06-16 Thread Dulvin Witharane
Hi,

When I try ./gradle integrationTest on a fresh trunk build, it fails. I tried 
this because I was trying to create a pull request, and that was also facing 
build issues.

Is there anything I should look at before opening the pull request? I'm 
working on macOS.

Regards,
Dulvin

Build failed in Jenkins: kafka-trunk-jdk8 #3728

2019-06-16 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR: Fix expected output in Streams quickstart

--
[...truncated 2.51 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Created] (KAFKA-8545) Remove legacy ZkUtils

2019-06-16 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-8545:
--

 Summary: Remove legacy ZkUtils
 Key: KAFKA-8545
 URL: https://issues.apache.org/jira/browse/KAFKA-8545
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 2.4.0


ZkUtils is not used by the broker, has been deprecated since 2.0.0, and was 
never intended as a public API. We should remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #3727

2019-06-16 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8544) Remove legacy kafka.admin.AdminClient

2019-06-16 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-8544:
--

 Summary: Remove legacy kafka.admin.AdminClient
 Key: KAFKA-8544
 URL: https://issues.apache.org/jira/browse/KAFKA-8544
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 2.4.0


It has been deprecated since 0.11.0; it was never meant as a publicly
supported API, and people should use
`org.apache.kafka.clients.admin.AdminClient` instead. Its presence
causes confusion, and people still use it accidentally at times.

`BrokerApiVersionsCommand` uses one method that is not available
in `org.apache.kafka.clients.admin.AdminClient`, we inline it for now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8543) Optimize Fetch thread allocation strategy to ensure full utilization of Fetch thread resources

2019-06-16 Thread zhaobo (JIRA)
zhaobo created KAFKA-8543:
-

 Summary: Optimize Fetch thread allocation strategy to ensure full 
utilization of Fetch thread resources
 Key: KAFKA-8543
 URL: https://issues.apache.org/jira/browse/KAFKA-8543
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.2.0
Reporter: zhaobo


A follower TP (topic partition) on a broker needs to fetch data from the 
leader broker. The original fetch thread allocation strategy has two areas 
that can be optimized:
(1) The configuration file has a parameter "num.consumer.fetchers" which 
limits the number of threads fetching data from a source broker. In fact, 
fetch resources are not fully used, and with high probability only some of 
the threads will be used. When the traffic of the leader TP increases, the 
fetch capability may become a bottleneck.
(2) The number of TPs each fetch thread is responsible for may be uneven.
Improvement: we propose a round-robin allocation method to ensure that fetch 
thread resources can be fully utilized and that the number of TPs each fetch 
thread is responsible for is balanced.
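
As a rough illustration of the proposed round-robin assignment (a sketch, not 
the actual Kafka implementation; all names here are made up):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoundRobinFetcherAssignment {
    // Assign each topic partition to fetcher threads in turn, so every
    // fetcher ends up with a near-equal share of partitions.
    static Map<Integer, List<String>> assign(List<String> partitions, int numFetchers) {
        Map<Integer, List<String>> assignment = new HashMap<>();
        for (int i = 0; i < numFetchers; i++) {
            assignment.put(i, new ArrayList<>());
        }
        int next = 0;
        for (String tp : partitions) {
            assignment.get(next).add(tp);
            next = (next + 1) % numFetchers;
        }
        return assignment;
    }
}
{code}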



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)