Re: [jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2016-03-21 Thread Gerard Klijs
I had good experiences using the Vagrant setup as-is on a Mac, though I did
have to change some things. We are using Docker now. I'm not sure about the
general preference, but I would prefer a Docker Compose setup over the
Vagrant one. I don't know whether you really want it in Kafka itself, and to
support it there.

On Mon, Mar 21, 2016 at 9:43 PM Ewen Cheslack-Postava (JIRA) <
j...@apache.org> wrote:

>
> [
> https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205083#comment-15205083
> ]
>
> Ewen Cheslack-Postava commented on KAFKA-1173:
> --
>
> [~gwenshap] Maybe? I had been thinking of our Vagrantfile as a tool for
> Kafka developers. Technically I guess it gets shipped with the source
> version. It doesn't get shipped with the binary versions afaik, which may
> be confusing.
>
> I guess it's also a question of whether we want to treat it as
> "supported"...
>
> > Using Vagrant to get up and running with Apache Kafka
> > -
> >
> > Key: KAFKA-1173
> > URL: https://issues.apache.org/jira/browse/KAFKA-1173
> > Project: Kafka
> >  Issue Type: Improvement
> >Reporter: Joe Stein
> >Assignee: Ewen Cheslack-Postava
> > Fix For: 0.9.0.0
> >
> > Attachments: KAFKA-1173-JMX.patch, KAFKA-1173.patch,
> KAFKA-1173_2013-12-07_12:07:55.patch, KAFKA-1173_2014-11-11_13:50:55.patch,
> KAFKA-1173_2014-11-12_11:32:09.patch, KAFKA-1173_2014-11-18_16:01:33.patch
> >
> >
> > Vagrant has been getting a lot of pickup in the tech communities.  I
> have found it very useful for development and testing, and I am now working
> with a few clients using it to help virtualize their environments in
> repeatable ways.
> > Using Vagrant to get up and running.
> > For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> > 1) Install Vagrant: http://www.vagrantup.com/
> > 2) Install VirtualBox: https://www.virtualbox.org/
> > In the main kafka folder
> > 1) ./sbt update
> > 2) ./sbt package
> > 3) ./sbt assembly-package-dependency
> > 4) vagrant up
> > Once this is done:
> > * Zookeeper will be running on 192.168.50.5
> > * Broker 1 on 192.168.50.10
> > * Broker 2 on 192.168.50.20
> > * Broker 3 on 192.168.50.30
> > When you are all up and running you will be back at a command prompt.
> > If you want you can log in to the machines using vagrant ssh,
>  but you don't need to.
> > You can access the brokers and zookeeper by their IP
> > e.g.
> > bin/kafka-console-producer.sh --broker-list 192.168.50.10:9092,
> 192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> > bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic
> sandbox --from-beginning
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>


Jenkins build is back to normal : kafka-trunk-jdk7 #1135

2016-03-21 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #467

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3412: multiple asynchronous commits causes send failures

--
[...truncated 1585 lines...]

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.messag

[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205789#comment-15205789
 ] 

Jiangjie Qin commented on KAFKA-3442:
-

[~dana.powers] Thanks for reporting the issue. I just submitted the patch. Can 
you give it a try?

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3442:

Status: Patch Available  (was: Open)

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205785#comment-15205785
 ] 

ASF GitHub Bot commented on KAFKA-3442:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1112

KAFKA-3442: Fix FileMessageSet iterator.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3442

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1112.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1112


commit 3ca0e390aabd31dd178991da77373c44e05a4044
Author: Jiangjie Qin 
Date:   2016-03-22T04:44:45Z

KAFKA-3442: Fix FileMessageSet iterator.




> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3442: Fix FileMessageSet iterator.

2016-03-21 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1112

KAFKA-3442: Fix FileMessageSet iterator.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3442

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1112.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1112


commit 3ca0e390aabd31dd178991da77373c44e05a4044
Author: Jiangjie Qin 
Date:   2016-03-22T04:44:45Z

KAFKA-3442: Fix FileMessageSet iterator.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3428) Remove metadata sync bottleneck from mirrormaker's producer

2016-03-21 Thread Maysam Yabandeh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maysam Yabandeh updated KAFKA-3428:
---
 Reviewer: Ismael Juma
Fix Version/s: 0.10.0.0
Affects Version/s: 0.9.0.1
   Status: Patch Available  (was: Open)

> Remove metadata sync bottleneck from mirrormaker's producer
> ---
>
> Key: KAFKA-3428
> URL: https://issues.apache.org/jira/browse/KAFKA-3428
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Maysam Yabandeh
> Fix For: 0.10.0.0
>
>
> Due to sync on the single producer, MM in a setup with 32 consumer threads 
> could not send more than 358k msg/sec, hence not being able to saturate the 
> NIC. Profiling showed that producer.send takes 0.080 ms on average, which 
> explains the bottleneck of 358k msg/sec. The following explains the 
> bottleneck in producer.send and suggests how to improve it.
> The current impl of MM relies on a single producer. For EACH message, 
> producer.send() calls waitOnMetadata, which runs the following synchronized 
> code
> {code}
> // add topic to metadata topic list if it is not there already.
> if (!this.metadata.containsTopic(topic))
> this.metadata.add(topic);
> {code}
> Although the code is mostly a no-op, containsTopic is synchronized, so it 
> becomes the bottleneck in MM.
> Profiling highlights this bottleneck:
> {code}
> 100.0% - 65,539 ms kafka.tools.MirrorMaker$MirrorMakerThread.run
>   18.9% - 12,403 ms org.apache.kafka.clients.producer.KafkaProducer.send
>   13.8% - 9,056 ms 
> org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata
>   12.1% - 7,933 ms org.apache.kafka.clients.Metadata.containsTopic
>   1.7% - 1,088 ms org.apache.kafka.clients.Metadata.fetch
>   2.6% - 1,729 ms org.apache.kafka.clients.Metadata.fetch
>   2.2% - 1,442 ms 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append
> {code}
> After replacing this bottleneck with a kind of noop, another run of the 
> profiler shows that fetch is the next bottleneck:
> {code}
> org.xerial.snappy.SnappyNative.arrayCopy   132 s (54 %)   n/a n/a
>   java.lang.Thread.run 50,776 ms (21 %)   n/a n/a
>   org.apache.kafka.clients.Metadata.fetch  20,881 ms (8 %)n/a 
> n/a
>   6.8% - 16,546 ms 
> org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata
>   6.8% - 16,546 ms org.apache.kafka.clients.producer.KafkaProducer.send
>   6.8% - 16,546 ms kafka.tools.MirrorMaker$MirrorMakerProducer.send
> {code}
> However, the fetch method does not need to be synchronized:
> {code}
> public synchronized Cluster fetch() {
> return this.cluster;
> }
> {code}
> Removing sync from the fetch method shows that the bottleneck disappears:
> {code}
> org.xerial.snappy.SnappyNative.arrayCopy   249 s (78 %)   n/a n/a
>   org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel  
>  24,489 ms (7 %)n/a n/a
>   org.xerial.snappy.SnappyNative.rawUncompress 17,024 ms (5 %)
> n/a n/a
>   org.apache.kafka.clients.producer.internals.RecordAccumulator.append
>  13,817 ms (4 %)n/a n/a
>   4.3% - 13,817 ms org.apache.kafka.clients.producer.KafkaProducer.send
> {code}
> Internally we have applied a patch to remove this bottleneck. The patch does 
> the following:
> 1. replace HashSet with a concurrent hash set
> 2. remove sync from containsTopic and fetch
> 3. pass a replica of topics to getClusterForCurrentTopics, since this 
> synchronized method accesses topics at two locations, and topics being changed in 
> the middle might mess with the semantics.
> Any interest in applying this patch? Any alternative suggestions?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
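The three-step fix described in the message above can be sketched in isolation. The class below is a hypothetical, simplified stand-in for the producer metadata class (names follow the discussion, not the actual Kafka source): a concurrent set replaces the synchronized HashSet, and a volatile field lets the getter drop its lock.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the producer Metadata class, only to illustrate
// the locking change described in the patch. Field and method names mirror
// the JIRA discussion, not the actual Kafka source.
class MetadataSketch {
    // A concurrent set lets containsTopic/add run without a monitor lock.
    private final Set<String> topics = ConcurrentHashMap.newKeySet();
    // volatile publishes the reference safely, so fetch() needs no sync.
    private volatile String cluster = "initial-cluster";

    boolean containsTopic(String topic) { return topics.contains(topic); }

    void add(String topic) { topics.add(topic); }

    String fetch() { return cluster; }             // formerly synchronized

    void update(String newCluster) { cluster = newCluster; }
}

public class Kafka3428Sketch {
    public static void main(String[] args) {
        MetadataSketch m = new MetadataSketch();
        if (!m.containsTopic("foobar"))   // the hot path from waitOnMetadata
            m.add("foobar");
        m.update("cluster-v2");
        System.out.println(m.containsTopic("foobar") + " " + m.fetch());
        // prints: true cluster-v2
    }
}
```

Because fetch() only returns a reference, volatile gives the same visibility guarantee the monitor did, without contending with every producer.send() call.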


[jira] [Commented] (KAFKA-3428) Remove metadata sync bottleneck from mirrormaker's producer

2016-03-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205730#comment-15205730
 ] 

ASF GitHub Bot commented on KAFKA-3428:
---

GitHub user maysamyabandeh opened a pull request:

https://github.com/apache/kafka/pull/

KAFKA-3428 Remove metadata sync bottleneck from mirrormaker's producer

Replace topics with a concurrent hash set so it does not need to be
modified inside a synchronized method.
Make cluster a volatile variable so fetch, which just returns the
pointer, does not have to be synchronized.

@ijuma @becketqin

The contribution is my original work and I license the work to the 
project under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maysamyabandeh/kafka KAFKA-3428

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #


commit 795b758977915b5a9098c7722f61b8ca5d318577
Author: Maysam Yabandeh 
Date:   2016-03-22T03:31:23Z

Replace topics with a concurrent hash set so it does not need to be
modified inside a synchronized method.
Make cluster a volatile variable so fetch, which just returns the
pointer, does not have to be synchronized.




> Remove metadata sync bottleneck from mirrormaker's producer
> ---
>
> Key: KAFKA-3428
> URL: https://issues.apache.org/jira/browse/KAFKA-3428
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Maysam Yabandeh
>
> Due to sync on the single producer, MM in a setup with 32 consumer threads 
> could not send more than 358k msg/sec, hence not being able to saturate the 
> NIC. Profiling showed that producer.send takes 0.080 ms on average, which 
> explains the bottleneck of 358k msg/sec. The following explains the 
> bottleneck in producer.send and suggests how to improve it.
> The current impl of MM relies on a single producer. For EACH message, 
> producer.send() calls waitOnMetadata, which runs the following synchronized 
> code
> {code}
> // add topic to metadata topic list if it is not there already.
> if (!this.metadata.containsTopic(topic))
> this.metadata.add(topic);
> {code}
> Although the code is mostly a no-op, containsTopic is synchronized, so it 
> becomes the bottleneck in MM.
> Profiling highlights this bottleneck:
> {code}
> 100.0% - 65,539 ms kafka.tools.MirrorMaker$MirrorMakerThread.run
>   18.9% - 12,403 ms org.apache.kafka.clients.producer.KafkaProducer.send
>   13.8% - 9,056 ms 
> org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata
>   12.1% - 7,933 ms org.apache.kafka.clients.Metadata.containsTopic
>   1.7% - 1,088 ms org.apache.kafka.clients.Metadata.fetch
>   2.6% - 1,729 ms org.apache.kafka.clients.Metadata.fetch
>   2.2% - 1,442 ms 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append
> {code}
> After replacing this bottleneck with a kind of noop, another run of the 
> profiler shows that fetch is the next bottleneck:
> {code}
> org.xerial.snappy.SnappyNative.arrayCopy   132 s (54 %)   n/a n/a
>   java.lang.Thread.run 50,776 ms (21 %)   n/a n/a
>   org.apache.kafka.clients.Metadata.fetch  20,881 ms (8 %)n/a 
> n/a
>   6.8% - 16,546 ms 
> org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata
>   6.8% - 16,546 ms org.apache.kafka.clients.producer.KafkaProducer.send
>   6.8% - 16,546 ms kafka.tools.MirrorMaker$MirrorMakerProducer.send
> {code}
> However, the fetch method does not need to be synchronized:
> {code}
> public synchronized Cluster fetch() {
> return this.cluster;
> }
> {code}
> Removing sync from the fetch method shows that the bottleneck disappears:
> {code}
> org.xerial.snappy.SnappyNative.arrayCopy   249 s (78 %)   n/a n/a
>   org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel  
>  24,489 ms (7 %)n/a n/a
>   org.xerial.snappy.SnappyNative.rawUncompress 17,024 ms (5 %)
> n/a n/a
>   org.apache.kafka.clients.producer.internals.RecordAccumulator.append
>  13,817 ms (4 %)n/a n/a
>   4.3% - 13,817 ms org.apache.kafka.clients.producer.KafkaProducer.send
> {code}
> Internally we have applied a patch to remove this bottleneck. The patch does 
> the following:
> 1. replace HashSet with a concurrent hash set
> 2. remove sync from containsTopic and fetch
> 3. pass a replica of topics to getClusterForCurrentTopics, since this 
> synchronized method accesses topics at two locations, and topics being changed in 
> the mi

[GitHub] kafka pull request: KAFKA-3428 Remove metadata sync bottleneck fro...

2016-03-21 Thread maysamyabandeh
GitHub user maysamyabandeh opened a pull request:

https://github.com/apache/kafka/pull/

KAFKA-3428 Remove metadata sync bottleneck from mirrormaker's producer

Replace topics with a concurrent hash set so it does not need to be
modified inside a synchronized method.
Make cluster a volatile variable so fetch, which just returns the
pointer, does not have to be synchronized.

@ijuma @becketqin

The contribution is my original work and I license the work to the 
project under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maysamyabandeh/kafka KAFKA-3428

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #


commit 795b758977915b5a9098c7722f61b8ca5d318577
Author: Maysam Yabandeh 
Date:   2016-03-22T03:31:23Z

Replace topics with a concurrent hash set so it does not need to be
modified inside a synchronized method.
Make cluster a volatile variable so fetch, which just returns the
pointer, does not have to be synchronized.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3412) Multiple commitAsync() calls causes SendFailedException in commit callback

2016-03-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205727#comment-15205727
 ] 

ASF GitHub Bot commented on KAFKA-3412:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1108


> Multiple commitAsync() calls causes SendFailedException in commit callback
> --
>
> Key: KAFKA-3412
> URL: https://issues.apache.org/jira/browse/KAFKA-3412
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> If the user calls commitAsync() multiple times between poll() calls, some of 
> them will succeed, but many will be rejected with SendFailedException. This 
> is basically the result of NetworkClient only accepting one request to be 
> sent at a time and the higher level ConsumerNetworkClient not retrying after 
> send failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
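The failure mode described in the issue (NetworkClient accepting only one in-flight request, with no retry above it) can be modeled with a toy client. This is purely illustrative, not the real NetworkClient API: it shows why a second commitAsync() issued before the next poll() is rejected rather than queued.

```java
// Toy model of the behavior described above: the underlying client accepts
// only one in-flight request at a time, and the layer above does not retry,
// so back-to-back async commits between polls see failures.
class OneInFlightClient {
    private boolean inFlight = false;

    /** Returns true if the request was accepted, false if rejected. */
    boolean send(String request) {
        if (inFlight) return false;   // would surface as SendFailedException
        inFlight = true;
        return true;
    }

    /** poll() completes the pending request, freeing the slot. */
    void poll() { inFlight = false; }
}

public class Kafka3412Sketch {
    public static void main(String[] args) {
        OneInFlightClient client = new OneInFlightClient();
        System.out.println(client.send("commit-1")); // true: accepted
        System.out.println(client.send("commit-2")); // false: no poll() in between
        client.poll();
        System.out.println(client.send("commit-3")); // true: accepted after poll()
    }
}
```

In this model the fix corresponds to either retrying the rejected send after the next poll() or queuing commits so only one is ever in flight.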


[GitHub] kafka pull request: KAFKA-3412: multiple asynchronous commits caus...

2016-03-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1108


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3412) Multiple commitAsync() calls causes SendFailedException in commit callback

2016-03-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3412.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1108
[https://github.com/apache/kafka/pull/1108]

> Multiple commitAsync() calls causes SendFailedException in commit callback
> --
>
> Key: KAFKA-3412
> URL: https://issues.apache.org/jira/browse/KAFKA-3412
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> If the user calls commitAsync() multiple times between poll() calls, some of 
> them will succeed, but many will be rejected with SendFailedException. This 
> is basically the result of NetworkClient only accepting one request to be 
> sent at a time and the higher level ConsumerNetworkClient not retrying after 
> send failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Metadata and ACLs wire protocol review (KIP-4)

2016-03-21 Thread Grant Henke
I have 2 patches available for KIP-4 that implement some of the new message
types. Given that these affect the wire protocol, a vote will be required
to make these changes. Before that occurs I would like to get some initial
feedback and discussion. If you could review and discuss the protocol
changes, and even the patches if you like, it would be greatly appreciated.
Once the feedback seems to settle for each message type, I can hold a vote.
Metadata:

   - Protocol WIKI:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-MetadataSchema
   - PR: https://github.com/apache/kafka/pull/1095

ACLs

   - Protocol WIKI:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-ACLAdminSchema
   - PR: https://github.com/apache/kafka/pull/1005

Shortly I will update with the config protocol and patch as well.  I am
also looking for feedback on the blocking/async implementation. That thread
is here:
http://search-hadoop.com/m/uyzND1T0kBTCy4yU1&subj=KIP+4+Wiki+Update

Thank you,
Grant

-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Created] (KAFKA-3444) Figure out when to bump the version on release-candidate artifacts

2016-03-21 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3444:
---

 Summary: Figure out when to bump the version on release-candidate 
artifacts
 Key: KAFKA-3444
 URL: https://issues.apache.org/jira/browse/KAFKA-3444
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Priority: Blocker
 Fix For: 0.10.1.0


Currently we remove the "-SNAPSHOT" marker immediately upon branching, which 
means that our release-candidate artifacts are all released to Maven repos with 
the same version. This is apparently challenging for projects that depend on 
Kafka and test with the release artifacts.

We need to revisit, discuss and maybe improve the process for the next release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205655#comment-15205655
 ] 

Dana Powers commented on KAFKA-3442:


In all prior broker releases clients check the response for a "partial" message 
-- i.e. the MessageSetSize is less than the MessageSize. This follows this 
statement in the protocol wiki:

"As an optimization the server is allowed to return a partial message at the 
end of the message set. Clients should handle this case."

So in this case the client is checking for a partial message, not an error code.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
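The partial-message check Dana describes can be sketched against a simplified wire layout, assumed here as [8-byte offset][4-byte size][payload] per entry (the real Kafka message format has more fields; this only shows the size check clients rely on):

```java
import java.nio.ByteBuffer;

// Sketch of a client-side partial-message check: a trailing entry whose
// declared size exceeds the bytes remaining in the message set is the
// "partial message at the end of the message set" the protocol wiki allows.
public class PartialMessageSketch {
    /** Returns how many complete messages fit in the buffer. */
    static int completeMessages(ByteBuffer set) {
        int count = 0;
        while (set.remaining() >= 12) {           // 8-byte offset + 4-byte size
            set.getLong();                        // offset (ignored here)
            int size = set.getInt();
            if (set.remaining() < size) break;    // trailing partial message
            set.position(set.position() + size);  // skip the payload
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(40);
        buf.putLong(0).putInt(8).put(new byte[8]); // one complete message
        buf.putLong(1).putInt(100);                // size exceeds remaining bytes
        buf.flip();
        System.out.println(completeMessages(buf)); // prints 1
    }
}
```

A client that stops at the first incomplete entry consumes what it can and re-fetches from the next offset, which is why returning a full message larger than max.partition.fetch.bytes breaks this expectation.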


[jira] [Comment Edited] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205640#comment-15205640
 ] 

Jiangjie Qin edited comment on KAFKA-3442 at 3/22/16 2:19 AM:
--

[~junrao] Yes, that is the problem. I will submit a patch shortly.


was (Author: becket_qin):
@junrao Yes, that is the problem. I will submit a patch shortly.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205640#comment-15205640
 ] 

Jiangjie Qin commented on KAFKA-3442:
-

@junrao Yes, that is the problem. I will submit a patch shortly.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Assigned] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-3442:
---

Assignee: Jiangjie Qin

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[GitHub] kafka pull request: MINOR: update new version in additional places

2016-03-21 Thread gwenshap
Github user gwenshap closed the pull request at:

https://github.com/apache/kafka/pull/1110


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205631#comment-15205631
 ] 

Jun Rao commented on KAFKA-3442:


[~becket_qin], yes, the 0.8.2 consumer is able to consume the large message. It 
seems that the issue is the following. For fetch requests of v0/v1, we need to 
call FileMessageSet.toMessageFormat() to do the down conversion. In that 
method, we call FileMessageSet.iterator() to iterate over all messages. However, 
the iterator seems to have a bug: it only checks "end" at the beginning of each 
message iteration, but not when it reads the size and the actual payload of 
the message. Because of this, the iterator can return a message that extends 
past "end". That message will then be returned to the consumer after down conversion.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205618#comment-15205618
 ] 

Jiangjie Qin commented on KAFKA-3442:
-

[~junrao] Are you able to consume the large message with 0.8.2? I think it 
should neither consume anything nor throw an exception, behaving as if there is 
no message. This is due to the message down-conversion on the broker side.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205586#comment-15205586
 ] 

Jun Rao commented on KAFKA-3442:


[~becket_qin]: It does seem that I can reproduce the issue.

1. Start 0.10.0 broker.

2. Create a topic that allows the max message size to be 1.3MB.
  bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test3 
--partition 1 --replication-factor 1 --config max.message.bytes=1300000

3. Publish a single message of 1.2MB.
  bin/kafka-producer-perf-test.sh --topic test3 --num-records 1 --record-size 
1200000 --throughput 1000 --producer-props bootstrap.servers=localhost:9092 
max.request.size=2000000

4. If I run the following from 0.8.2, the consumer seems to be able to consume 
the oversized message w/o any error.
  bin/kafka-console-consumer.sh --from-beginning --topic test3 --zookeeper 
localhost:2181

   If I run the same command from 0.10.0, the consumer errors out.
  bin/kafka-console-consumer.sh --from-beginning --topic test3 --zookeeper 
localhost:2181
[2016-03-21 17:22:16,506] ERROR Error processing message, terminating consumer 
process:  (kafka.tools.ConsoleConsumer$)
kafka.common.MessageSizeTooLargeException: Found a message larger than the 
maximum fetch size of this consumer on topic test3 partition 0 at fetch offset 
0. Increase the fetch size, or decrease the maximum message size the broker 
will allow.
at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:90)
at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:33)
at 
kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
at kafka.consumer.OldConsumer.receive(BaseConsumer.scala:102)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:113)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:73)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:51)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)


> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Updated] (KAFKA-3248) AdminClient Blocks Forever in send Method

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3248:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.1

> AdminClient Blocks Forever in send Method
> -
>
> Key: KAFKA-3248
> URL: https://issues.apache.org/jira/browse/KAFKA-3248
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.0
>Reporter: John Tylwalk
>Assignee: Warren Green
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> AdminClient will block forever when performing operations involving the 
> {{send()}} method, due to usage of 
> {{ConsumerNetworkClient.poll(RequestFuture)}} - which blocks indefinitely.
> Suggested fix is to use {{ConsumerNetworkClient.poll(RequestFuture, long 
> timeout)}} in {{AdminClient.send()}}
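The shape of the suggested fix, blocking with a bounded timeout instead of indefinitely, can be sketched as follows. This is a hedged Python illustration: a plain concurrent.futures.Future stands in for Kafka's RequestFuture, and `result_with_timeout` is a hypothetical helper, not the real AdminClient API.

```python
import concurrent.futures

def result_with_timeout(future, timeout_s):
    """Wait on a request future for at most `timeout_s` seconds.

    Mirrors the suggested change from poll(RequestFuture) to
    poll(RequestFuture, long timeout): the caller gets an error back
    instead of blocking forever on a request that never completes.
    """
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        raise TimeoutError(
            "request did not complete within %.2fs" % timeout_s)
```

A never-completed future now surfaces as a TimeoutError the caller can handle, rather than hanging the thread.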





[jira] [Updated] (KAFKA-3173) Error while moving some partitions to OnlinePartition state

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3173:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Error while moving some partitions to OnlinePartition state 
> 
>
> Key: KAFKA-3173
> URL: https://issues.apache.org/jira/browse/KAFKA-3173
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-3173-race-repro.patch
>
>
> We observed another instance of the problem reported in KAFKA-2300, but this 
> time the error appeared in the partition state machine. In KAFKA-2300, we 
> haven't cleaned up the state in {{PartitionStateMachine}} and 
> {{ReplicaStateMachine}} as we do in {{KafkaController}}.
> Here is the stack trace:
> {noformat}
> 2016-01-29 15:26:51,393] ERROR [Partition state machine on Controller 0]: 
> Error while moving some partitions to OnlinePartition state 
> (kafka.controller.PartitionStateMachine)java.lang.IllegalStateException: 
> Controller to broker state change requests batch is not empty while creating 
> a new one. 
> Some LeaderAndIsr state changes Map(0 -> Map(foo-0 -> (LeaderAndIsrInfo:
> (Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:1),ReplicationFactor:1),AllReplicas:0)))
>  might be lost at 
> kafka.controller.ControllerBrokerRequestBatch.newBatch(ControllerChannelManager.scala:254)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:144)
> at 
> kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:517)
> at 
> kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:504)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(PartitionStateMachine.scala:437)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at 
> kafka.controller.PartitionStateMachine$TopicChangeListener$$anonfun$handleChildChange$1.apply(PartitionStateMachine.scala:419)
> at 
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262) at 
> kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:418)
> at 
> org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842) at 
> org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {noformat}





[jira] [Updated] (KAFKA-3123) Follower Broker cannot start if offsets are already out of range

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3123:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.1

> Follower Broker cannot start if offsets are already out of range
> 
>
> Key: KAFKA-3123
> URL: https://issues.apache.org/jira/browse/KAFKA-3123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, replication
>Affects Versions: 0.9.0.0
>Reporter: Soumyajit Sahu
>Assignee: Neha Narkhede
>Priority: Critical
>  Labels: patch
> Fix For: 0.10.0.1
>
> Attachments: 
> 0001-Fix-Follower-crashes-when-offset-out-of-range-during.patch
>
>
> I was trying to upgrade our test Windows cluster from 0.8.1.1 to 0.9.0 one 
> machine at a time. Our logs have just 2 hours of retention. I had re-imaged 
> the test machine under consideration, and got the following error in loop 
> after starting afresh with 0.9.0 broker:
> [2016-01-19 13:57:28,809] WARN [ReplicaFetcherThread-1-169595708], Replica 
> 15588 for partition [EventLogs4,1] reset its fetch offset from 0 to 
> current leader 169595708's start offset 334086 
> (kafka.server.ReplicaFetcherThread)
> [2016-01-19 13:57:28,809] ERROR [ReplicaFetcherThread-1-169595708], Error 
> getting offset for partition [EventLogs4,1] to broker 169595708 
> (kafka.server.ReplicaFetcherThread)
> java.lang.IllegalStateException: Compaction for partition [EXO_EventLogs4,1] 
> cannot be aborted and paused since it is in LogCleaningPaused state.
>   at 
> kafka.log.LogCleanerManager$$anonfun$abortAndPauseCleaning$1.apply$mcV$sp(LogCleanerManager.scala:149)
>   at 
> kafka.log.LogCleanerManager$$anonfun$abortAndPauseCleaning$1.apply(LogCleanerManager.scala:140)
>   at 
> kafka.log.LogCleanerManager$$anonfun$abortAndPauseCleaning$1.apply(LogCleanerManager.scala:140)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.log.LogCleanerManager.abortAndPauseCleaning(LogCleanerManager.scala:140)
>   at kafka.log.LogCleaner.abortAndPauseCleaning(LogCleaner.scala:141)
>   at kafka.log.LogManager.truncateFullyAndStartAt(LogManager.scala:304)
>   at 
> kafka.server.ReplicaFetcherThread.handleOffsetOutOfRange(ReplicaFetcherThread.scala:185)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:152)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:122)
>   at scala.Option.foreach(Option.scala:236)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:122)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:120)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:120)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:120)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:120)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> I could unblock myself with a code change. I deleted the action for "case s 
> =>" in LogCleanerManager.scala's abortAndPauseCleaning(). I think we 
> should not throw an exception if the state is already LogCleaningAborted or 
> LogCleaningPaused in this function, but instead just let it roll.
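The tolerant state transition suggested above can be sketched like this. It is an illustrative Python model under stated assumptions: `states` is a plain dict standing in for LogCleanerManager's internal per-partition state, not the real Scala code.

```python
def abort_and_pause_cleaning(states, partition):
    """Pause cleaning for `partition`, treating an already aborted or
    paused partition as a no-op instead of raising IllegalStateException.
    """
    current = states.get(partition)
    if current in ("LogCleaningAborted", "LogCleaningPaused"):
        return  # already where we need to be: just let it roll
    # Normal path: mark the partition paused so the cleaner skips it.
    states[partition] = "LogCleaningPaused"
```

With this change, a follower that retries truncation after a crash no longer dies on a partition left in LogCleaningPaused state.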





[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205536#comment-15205536
 ] 

Ismael Juma commented on KAFKA-3409:


[~singhashish], are you planning to submit a PR?

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, 
> so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
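The recovery suggested in the report, catch the failed commit and rejoin instead of letting the thread die, can be sketched as follows. This is a hedged Python illustration: CommitFailedError and the `commit`/`rejoin` callables are hypothetical stand-ins, not the real MirrorMaker or consumer API.

```python
class CommitFailedError(Exception):
    """Stand-in for the client's CommitFailedException (assumption)."""

def commit_with_rejoin(commit, rejoin, max_attempts=3):
    """Retry an offset commit, rejoining the group after each failure.

    Instead of a fatal thread exit, a commit rejected because of a
    rebalance triggers a rejoin and another attempt.
    """
    for _ in range(max_attempts):
        try:
            commit()
            return True
        except CommitFailedError:
            rejoin()  # a rebalance kicked us out; rejoin before retrying
    return False
```

The caller only gives up after `max_attempts` consecutive failures, rather than hanging or crashing on the first one.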





[jira] [Updated] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3409:
---
Priority: Blocker  (was: Critical)

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, 
> so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)





[jira] [Updated] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3409:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I 
> believe this is because CommitFailedException is not caught by mirror maker, 
> so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here 
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)





Jenkins build is back to normal : kafka-trunk-jdk8 #466

2016-03-21 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205501#comment-15205501
 ] 

Jiangjie Qin commented on KAFKA-3442:
-

[~dana.powers] Are you expecting a RecordTooLargeException? One change 
after KIP-31/32 is that if down-conversion occurred on the broker side, the 
traditional consumer-side way of detecting an oversized record no longer 
works.

In your case, if the message format on the broker is 0.10.0 and your 
FetchRequest version is < v2, the consumer will not get any bytes back. 
However, if you fetch using FetchRequest v2, you will be able to get ~1024 
bytes.
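The "traditional way of detecting" an oversized record works roughly as sketched below. This is an illustrative Python model, not real client code: the framing is an assumed 4-byte big-endian size prefix, standing in for Kafka's actual message-set layout.

```python
import struct

def complete_messages(chunk: bytes):
    """Split a fetched chunk into complete messages, dropping a trailing
    partial one (illustrative 4-byte size-prefix framing, an assumption).
    """
    msgs, pos = [], 0
    while pos + 4 <= len(chunk):
        (size,) = struct.unpack_from(">i", chunk, pos)
        if pos + 4 + size > len(chunk):
            break  # trailing partial message: discard it
        msgs.append(chunk[pos + 4 : pos + 4 + size])
        pos += 4 + size
    return msgs

def detect_too_large(chunk: bytes) -> bool:
    """Pre-KIP-31/32 heuristic: a non-empty fetch response containing
    zero complete messages means the next message exceeds the fetch size.
    """
    return bool(chunk) and not complete_messages(chunk)
```

After broker-side down-conversion this heuristic breaks, because for v0/v1 fetches the broker may return an empty chunk instead of a truncated one, which the client cannot distinguish from "no messages".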

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Created] (KAFKA-3443) Support regex topics in addSource() and stream()

2016-03-21 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3443:


 Summary: Support regex topics in addSource() and stream()
 Key: KAFKA-3443
 URL: https://issues.apache.org/jira/browse/KAFKA-3443
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
 Fix For: 0.10.0.1


Currently Kafka Streams only supports specific topic names when creating source 
streams; we could leverage the consumer's regex subscription to allow regex 
topics as well.
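The topic-selection side of a regex subscription can be illustrated with a small helper. This is a hypothetical sketch of the matching logic only; the actual feature would build on the consumer's pattern subscription (e.g. KafkaConsumer.subscribe(Pattern, ...) in the Java client), not on this function.

```python
import re

def matching_topics(pattern: str, topics):
    """Return the topics a regex subscription would select, sorted.

    `pattern` must match the full topic name, mirroring how a pattern
    subscription filters the cluster's topic list.
    """
    rx = re.compile(pattern)
    return sorted(t for t in topics if rx.fullmatch(t))
```

As topics are created or deleted, re-evaluating the pattern against the current topic list is what lets the source stream pick up new matching topics.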





[jira] [Updated] (KAFKA-3419) New consumer javadoc unclear on topic subscription vs partition assignment

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3419:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> New consumer javadoc unclear on topic subscription vs partition assignment
> --
>
> Key: KAFKA-3419
> URL: https://issues.apache.org/jira/browse/KAFKA-3419
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> We've tried to use the concepts of "topic subscription" and "partition 
> assignment" to distinguish between the old "high-level" and "simple" use 
> cases, but the javadoc is not very clear about this (e.g. there is still a 
> section titled "Subscribing to Specific Partitions"). Additionally, the 
> section on consumer groups is a little hard to follow in regard to the notion 
> of a group subscription.





[jira] [Updated] (KAFKA-3418) Add section on detecting consumer failures in new consumer javadoc

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3418:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Add section on detecting consumer failures in new consumer javadoc
> --
>
> Key: KAFKA-3418
> URL: https://issues.apache.org/jira/browse/KAFKA-3418
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> There still seems to be a lot of confusion about the design of the poll() 
> loop in regard to consumer liveness. We do mention it in the javadoc, but 
> it's a little hidden and we aren't very clear on what the user should do 
> (such as tweaking max.poll.records). We should pull this into a separate 
> section (e.g. Jay suggests "Detecting Consumer Failures") and give it a more 
> complete treatment.





[jira] [Updated] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3434:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.
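The compatibility pattern the ticket asks for is plain constructor delegation: the restored short constructor forwards to the new full one, filling the post-KIP-42 fields with neutral defaults. A hedged sketch, with made-up field names (not the real ConsumerRecord) and an assumed NO_TIMESTAMP sentinel:

```java
// Illustrative sketch of the compatibility-constructor pattern; this is
// not the actual ConsumerRecord class.
public class RecordSketch {
    public static final long NO_TIMESTAMP = -1L; // assumed sentinel

    public final String topic;
    public final int partition;
    public final long offset;
    public final long timestamp; // new field, added in 0.10
    public final String key;
    public final String value;

    // New primary constructor: all fields explicit.
    public RecordSketch(String topic, int partition, long offset,
                        long timestamp, String key, String value) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
        this.timestamp = timestamp;
        this.key = key;
        this.value = value;
    }

    // Restored "old" constructor: 0.9-era call sites keep compiling.
    public RecordSketch(String topic, int partition, long offset,
                        String key, String value) {
        this(topic, partition, offset, NO_TIMESTAMP, key, value);
    }

    public static void main(String[] args) {
        RecordSketch r = new RecordSketch("foobar", 0, 42L, "key", "value");
        System.out.println(r.timestamp == NO_TIMESTAMP); // true
    }
}
```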





[jira] [Updated] (KAFKA-3431) Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3431:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Move `BrokerEndPoint` from `o.a.k.common` to `o.a.k.common.internals`
> -
>
> Key: KAFKA-3431
> URL: https://issues.apache.org/jira/browse/KAFKA-3431
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> As per the following comment, we should move `BrokerEndPoint` from `common` 
> to `common.internals` as it's not public API.
> https://issues.apache.org/jira/browse/KAFKA-2970?focusedCommentId=15157821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15157821





[jira] [Comment Edited] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205411#comment-15205411
 ] 

Dana Powers edited comment on KAFKA-3442 at 3/22/16 12:13 AM:
--

{code}
# clone kafka-python repo
git clone https://github.com/dpkp/kafka-python.git

# install kafka fixture binaries
tar xzvf kafka_2.10-0.10.0.0.tgz -C servers/0.10.0.0/
mv servers/0.10.0.0/kafka_2.10-0.10.0.0 servers/0.10.0.0/kafka-bin

# you can install other versions of kafka to compare test results
KAFKA_VERSION=0.9.0.1 ./build_integration.sh

# install python test harness
pip install tox

# run just the failing test [replace py27 w/ py## as needed: 
py26,py27,py33,py34,py35]
KAFKA_VERSION=0.10.0.0 tox -e py27 
test/test_consumer_integration.py::TestConsumerIntegration::test_huge_messages
{code}



> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Dana Powers (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205411#comment-15205411
 ] 

Dana Powers commented on KAFKA-3442:


# clone kafka-python repo
git clone https://github.com/dpkp/kafka-python.git

# install kafka fixture binaries
tar xzvf kafka_2.10-0.10.0.0.tgz -C servers/0.10.0.0/
mv servers/0.10.0.0/kafka_2.10-0.10.0.0 servers/0.10.0.0/kafka-bin

# you can install other versions of kafka to compare test results
KAFKA_VERSION=0.9.0.1 ./build_integration.sh

# install python test harness
pip install tox

# run just the failing test [replace py27 w/ py## as needed: 
py26,py27,py33,py34,py35]
KAFKA_VERSION=0.10.0.0 tox -e py27 
test/test_consumer_integration.py::TestConsumerIntegration::test_huge_messages

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Assigned] (KAFKA-3432) Cluster.update() thread-safety

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3432:
--

Assignee: Ismael Juma

> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal; KAFKA-3428 shows 
> that the synchronization at the `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?
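The first option listed above could look roughly like this: `assign` returns a small result object carrying both the assignments and, optionally, a replacement cluster snapshot, instead of mutating the shared `Cluster` in place. Every name in this sketch is hypothetical, and the cluster is held as a plain Object to keep the sketch self-contained:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of option 1: an immutable assignment result instead
// of an in-place Cluster.update(). Not the actual Kafka internals.
public class AssignmentResult {
    public final Map<String, List<Integer>> assignments; // memberId -> partition ids
    public final Object updatedCluster;                  // null when metadata is unchanged

    public AssignmentResult(Map<String, List<Integer>> assignments,
                            Object updatedCluster) {
        this.assignments = assignments;
        this.updatedCluster = updatedCluster;
    }

    // True when the assignor (e.g. a Streams assignor that created internal
    // topics on the fly) produced an augmented cluster view.
    public boolean clusterChanged() {
        return updatedCluster != null;
    }

    public static void main(String[] args) {
        AssignmentResult r =
            new AssignmentResult(java.util.Collections.emptyMap(), null);
        System.out.println(r.clusterChanged()); // false: no metadata augmentation
    }
}
```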





[jira] [Updated] (KAFKA-3432) Cluster.update() thread-safety

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3432:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Cluster.update() thread-safety
> --
>
> Key: KAFKA-3432
> URL: https://issues.apache.org/jira/browse/KAFKA-3432
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> A `Cluster.update()` method was introduced during the development of 0.10.0 
> so that `StreamPartitionAssignor` can add internal topics on-the-fly and give 
> the augmented metadata to its underlying grouper.
> `Cluster` was supposed to be immutable after construction and all 
> synchronization happens via the `Metadata` instance. As far as I can see 
> `Cluster.update()` is not thread-safe even though `Cluster` is accessed by 
> multiple threads in some cases (I am not sure about the Streams case). Since 
> this is a public API, it is important to fix this in my opinion.
> A few options I can think of:
> * Since `PartitionAssignor` is an internal class, change 
> `PartitionAssignor.assign` to return a class containing the assignments and 
> optionally an updated cluster. This is straightforward, but I am not sure if 
> it's good enough for the Streams use-case. Can you please confirm [~guozhang]?
> * Pass `Metadata` instead of `Cluster` to `PartitionAssignor.assign`, giving 
> assignors the ability to update the metadata as needed.
> * Make `Cluster` thread-safe in the face of mutations (without relying on 
> synchronization at the `Metadata` level). This is not ideal, KAFKA-3428 shows 
> that the synchronization at `Metadata` level is already too costly for high 
> concurrency situations.
> Thoughts [~guozhang], [~hachikuji]?





Re: Build failed in Jenkins: kafka-trunk-jdk7 #1134

2016-03-21 Thread Ismael Juma
In case you were wondering, these failures were due to the fact that the
Gradle build configs in Apache Jenkins got messed up somehow:

https://issues.apache.org/jira/browse/INFRA-11514

Andrew Bayer has fixed the issue.

Ismael

On Mon, Mar 21, 2016 at 8:45 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> Changes:
>
> [me] MINOR: fix documentation version
>
> [me] MINOR: update new version in additional places
>
> --
> Started by an SCM change
> [EnvInject] - Loading node environment variables.
> Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace <
> https://builds.apache.org/job/kafka-trunk-jdk7/ws/>
>  > git rev-parse --is-inside-work-tree # timeout=10
> Fetching changes from the remote Git repository
>  > git config remote.origin.url
> https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
> Fetching upstream changes from
> https://git-wip-us.apache.org/repos/asf/kafka.git
>  > git --version # timeout=10
>  > git -c core.askpass=true fetch --tags --progress
> https://git-wip-us.apache.org/repos/asf/kafka.git
> +refs/heads/*:refs/remotes/origin/*
>  > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
>  > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
> Checking out Revision 4f0417931a71a974241e685031f2a5f1680e9b51
> (refs/remotes/origin/trunk)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f 4f0417931a71a974241e685031f2a5f1680e9b51
>  > git rev-list b6c29e3810bd59f39fa93c429817396cf8c324b7 # timeout=10
> Setting GRADLE_2_4_RC_2_HOME=
> Setting
> JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
> [kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2148391791234199210.sh
> + /bin/gradle
> /tmp/hudson2148391791234199210.sh: line 2: /bin/gradle: No such file or
> directory
> Build step 'Execute shell' marked build as failure
> Recording test results
> Setting GRADLE_2_4_RC_2_HOME=
> Setting
> JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
> ERROR: Step ‘Publish JUnit test result report’ failed: Test reports were
> found but none of them are new. Did tests run?
> For example, <
> https://builds.apache.org/job/kafka-trunk-jdk7/ws/clients/build/test-results/TEST-org.apache.kafka.clients.ClientUtilsTest.xml>
> is 2 days 21 hr old
>
> Setting GRADLE_2_4_RC_2_HOME=
> Setting
> JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
>


[jira] [Updated] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Dana Powers (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dana Powers updated KAFKA-3442:
---
Priority: Blocker  (was: Major)

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205396#comment-15205396
 ] 

Jun Rao commented on KAFKA-3442:


[~dana.powers], could you share your test? Thanks.

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





Build failed in Jenkins: kafka-trunk-jdk8 #465

2016-03-21 Thread Apache Jenkins Server
See 

--
Started by user gwenshap
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-us1 (Ubuntu ubuntu ubuntu-us golang-ppa) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 4f0417931a71a974241e685031f2a5f1680e9b51 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4f0417931a71a974241e685031f2a5f1680e9b51
 > git rev-list 4f0417931a71a974241e685031f2a5f1680e9b51 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4914829301572985255.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 20.728 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5344381407131143524.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.11/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.18/b631d286463ced7cc42ee2171fe3beaed2836823/slf4j-api-1.7.18.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.194 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Re: [VOTE] 0.10.0.0 RC0

2016-03-21 Thread Gwen Shapira
Thank you!

The affected version should be 0.10.0.0.
fixVersion should be 0.10.0.0 if you think it's a show-stopper for the release
(in that case, make it a blocker too).
If it's not a blocker, fixVersion can either be left empty or set to 0.10.1.0.

Gwen

On Mon, Mar 21, 2016 at 4:47 PM, Dana Powers  wrote:

> I filed a bug re max.partition.fetch.bytes here:
> https://issues.apache.org/jira/browse/KAFKA-3442
>
> Would it be useful to create a 0.10.0.0-rc0 version for JIRA tickets? Or
> should issues just get filed against 0.10.0.0 ?
>
> -Dana
>
>
> On Mon, Mar 21, 2016 at 1:53 PM, Gwen Shapira  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 0.10.0.0.
> > This is a major release that includes: (1) New message format including
> > timestamps (2) client interceptor API (3) Kafka Streams. Since this is a
> > major release, we will give people more time to try it out and give
> > feedback.
> >
> > Release notes for the 0.10.0.0 release:
> > http://home.apache.org/~gwenshap/0.10.0.0-rc0/RELEASE_NOTES.HTML
> >
> > *** Please download, test and vote by Monday, March 28, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~gwenshap/0.10.0.0-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * scala-doc
> > http://home.apache.org/~gwenshap/0.10.0.0-rc0/scaladoc
> >
> > * java-doc
> > http://home.apache.org/~gwenshap/0.10.0.0-rc0/javadoc/
> >
> > * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
> >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=72fd542633a95a8bd5bdc9fdca56042b643cb4b0
> >
> > * Documentation:
> > http://kafka.apache.org/0100/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0100/protocol.html
> >
> > /**
> >
> > Thanks,
> >
> > Gwen
> >
>


[jira] [Commented] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205387#comment-15205387
 ] 

Ismael Juma commented on KAFKA-3442:


[~junrao], do you think this is a result of KIP-31/32?

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





[jira] [Updated] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3442:
---
Fix Version/s: 0.10.0.0

> FetchResponse size exceeds max.partition.fetch.bytes
> 
>
> Key: KAFKA-3442
> URL: https://issues.apache.org/jira/browse/KAFKA-3442
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Dana Powers
> Fix For: 0.10.0.0
>
>
> Produce 1 byte message to topic foobar
> Fetch foobar w/ max.partition.fetch.bytes=1024
> Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
> this test, but 0.10 FetchResponse has full message, exceeding the max 
> specified in the FetchRequest.
> I tested with v0 and v1 apis, both fail. Have not tested w/ v2





Re: [VOTE] 0.10.0.0 RC0

2016-03-21 Thread Dana Powers
I filed a bug re max.partition.fetch.bytes here:
https://issues.apache.org/jira/browse/KAFKA-3442

Would it be useful to create a 0.10.0.0-rc0 version for JIRA tickets? Or
should issues just get filed against 0.10.0.0 ?

-Dana


On Mon, Mar 21, 2016 at 1:53 PM, Gwen Shapira  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.10.0.0.
> This is a major release that includes: (1) New message format including
> timestamps (2) client interceptor API (3) Kafka Streams. Since this is a
> major release, we will give people more time to try it out and give
> feedback.
>
> Release notes for the 0.10.0.0 release:
> http://home.apache.org/~gwenshap/0.10.0.0-rc0/RELEASE_NOTES.HTML
>
> *** Please download, test and vote by Monday, March 28, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/0.10.0.0-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> http://home.apache.org/~gwenshap/0.10.0.0-rc0/scaladoc
>
> * java-doc
> http://home.apache.org/~gwenshap/0.10.0.0-rc0/javadoc/
>
> * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=72fd542633a95a8bd5bdc9fdca56042b643cb4b0
>
> * Documentation:
> http://kafka.apache.org/0100/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> /**
>
> Thanks,
>
> Gwen
>


[jira] [Created] (KAFKA-3442) FetchResponse size exceeds max.partition.fetch.bytes

2016-03-21 Thread Dana Powers (JIRA)
Dana Powers created KAFKA-3442:
--

 Summary: FetchResponse size exceeds max.partition.fetch.bytes
 Key: KAFKA-3442
 URL: https://issues.apache.org/jira/browse/KAFKA-3442
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.0
Reporter: Dana Powers


Produce 1 byte message to topic foobar
Fetch foobar w/ max.partition.fetch.bytes=1024

Test expects to receive a truncated message (~1024 bytes). 0.8 and 0.9 pass 
this test, but 0.10 FetchResponse has full message, exceeding the max specified 
in the FetchRequest.

I tested with the v0 and v1 APIs; both fail. I have not tested with v2





Re: KStreams Partition Assignment

2016-03-21 Thread Guozhang Wang
Hi Mike,

Kafka Streams is suitable for distributed environment. You can read this
section of the web docs for more details:

http://docs.confluent.io/2.1.0-alpha1/streams/architecture.html#parallelism-model

As for the Kafka Streams' customized partition assignor under the Kafka
group management protocol, you can find more details here:

http://docs.confluent.io/2.1.0-alpha1/streams/developer-guide.html#partition-grouper

In the long term, the subscription that the coordinator sends to the selected
leader includes metadata about the current tasks information, including
which instances are hosting these tasks (if it is the first time a
rebalance is triggered, then it is empty). The assignor first triggers the
user-customizable partition grouper given the list of partitions to get the
new list of tasks, then tries to assign tasks to the known "host"
(identified by UUID in the code), with the principle that:

1) If the task already exists (i.e. it was not newly created by the
partition grouper), try to assign it to its current hosting instance,
provided that host is still alive.
2) Otherwise, if the task has standby replicas on other hosts, try to
assign it to the hosting instance of one of those standby replicas.
3) Otherwise, assign it to the least-loaded hosting instance.

Then the leader groups the task -> host mapping into the partition ->
consumer-member mapping (note that a single host, i.e. machine, can have
multiple consumers), and sends it back to the coordinator, which will then
be propagated to other members. Upon receiving the assignment, these
instances will get the tasks from the metadata and the assigned partitions,
and start initializing / running the tasks.
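The three-step placement preference described above can be sketched as follows; the data structures are simplified stand-ins for the assignor's internal state, not the actual StreamPartitionAssignor API:

```java
import java.util.*;

// Sketch of the placement preference: (1) previous owner if alive,
// (2) a host of a standby replica, (3) the least-loaded host.
public class TaskPlacementSketch {
    public static String place(String task,
                               Map<String, String> previousOwner,  // task -> host
                               Map<String, Set<String>> standbys,  // task -> hosts
                               Map<String, Integer> load) {        // live host -> task count
        String prev = previousOwner.get(task);
        if (prev != null && load.containsKey(prev)) return prev;   // rule 1
        for (String h : standbys.getOrDefault(task, Collections.emptySet()))
            if (load.containsKey(h)) return h;                     // rule 2
        return load.entrySet().stream()                            // rule 3
                   .min(Map.Entry.comparingByValue())
                   .map(Map.Entry::getKey)
                   .orElse(null);
    }

    public static void main(String[] args) {
        Map<String, Integer> load = new HashMap<>();
        load.put("host-a", 3);
        load.put("host-b", 1);
        // No previous owner and no standbys: falls through to least-loaded.
        System.out.println(place("task-0", new HashMap<>(), new HashMap<>(), load)); // host-b
    }
}
```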

Guozhang


On Sun, Mar 20, 2016 at 7:12 AM, Michael D. Coon 
wrote:

> I'm evaluating whether the KafkaStreams API will be something we can use
> on my current project. Namely, we want to be able to distribute the
> consumers on a Mesos/YARN cluster. It's not entirely clear to me in the
> code what is deciding which partitions get assigned at runtime and whether
> this is intended for a distributed application or just a multi-threaded
> environment.
> I get that the consumer coordinator will get reassignments when group
> participation changes; however, in looking through the
> StreamPartitionAssignor code, it's not clear to me what is happening in the
> assign method. It looks to me like subscriptions are coming in from 
> the consumer coordinator, whose assignments are presumably derived from the 
> lead brokers for the topics of interest. Those subscriptions are then
> translated into co-partitioned groups of clients. Once that's complete, it
> hands off the co-partitioned groups to the StreamThread's partitionGrouper
> to do the work of assigning the partitions to each co-partitioned group.
> The DefaultPartitionGrouper code, starting on line 57, simply does a 1-up
> assigning of partition to group. How will this actually work with
> distributed stream consumers if it's always going to be assigning the
> partition as a 1-up sequence local to that particular consumer? Shouldn't
> it use the assigned partition that is coming back from the
> ConsumerCoordinator? I'm struggling to understand the layers but I need to
> in order to know whether this implementation is going to work for us. If
> the PartitionGroupAssignor's default is just meant for single-node
> multithreaded use, that's fine as long as I can inject my own
> implementation. But I would still need to understand what is happening at
> the StreamPartitionAssignor layer more clearly. Any info, design docs, or 
> in-progress wikis would be most appreciated if the answer is too in-depth 
> for an email discussion. Thanks and I really love what you guys are doing
> with this!
> Mike




-- 
-- Guozhang


Re: kafka-streams: Using custom partitioner with higher level DSL constructs?

2016-03-21 Thread Guozhang Wang
Also, have you looked at Kafka Connect released in 0.9? It has a MySQL
binlog connector implementation in progress, just thinking maybe you would
be interested to check it out and see if there are any feedbacks that you
want to give.

https://github.com/wushujames/kafka-mysql-connector

Guozhang

On Fri, Mar 18, 2016 at 7:38 AM, Ben Osheroff 
wrote:

> (lemme know if this belongs on the users email list, I'm not sure where
> API questions fall)
>
> Hi, I'm Ben Osheroff, I wrote Maxwell
> (http://github.com/zendesk/maxwell) and have been prototyping an engine
> to do arbitrary denormalizations of Maxwell's CDC events based on the
> kafka-streams library; the elevator pitch is that you can write SQL
> joins which the engine compiles down to stream-joins and aggregations
> and such.
>
> Maxwell partitions its stream by mysql database name, which means that
> to do stream-joins I need to implement the same (custom) partitioning
> algorithm somewhere in my stream processor. I'd prefer not to drop down to 
> the lower-level `addSink()` library calls if possible, and I can't 
> figure out how to mix and match the lower-level calls with the 
> higher-level DSL (map/filter/etc).
>
> So I guess I have two questions:
>
> 1. Is it somehow possible to add a custom `Sink` to an otherwise high
> level stream topology?  There's no obvious way to retrieve the topology
> names that I can see.
>
> 2. If not, I'd like to make a feature request that the various stream
> building functions (.to, .through) accept an optional
> StreamPartitioner.
>
> 3. Any other ideas about how to pull this off?
>
> Thanks!
>
>
> - Ben Osheroff
> zendesk.com
>



-- 
-- Guozhang


Re: kafka-streams: Using custom partitioner with higher level DSL constructs?

2016-03-21 Thread Guozhang Wang
Hello Ben,

1. Currently the Kafka Streams high-level DSL does not accept a
StreamPartitioner; please feel free to file a JIRA so that we can keep
track of and discuss whether/how to incorporate it into the Kafka Streams DSL.
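For reference, the hook being requested is essentially a three-argument partitioner. The interface shape below mirrors the existing lower-level StreamPartitioner concept, but the sketch is self-contained and the DSL overloads that would accept it are hypothetical:

```java
// Sketch of a pluggable partitioner the DSL's to()/through() could accept.
public class PartitionerSketch {
    public interface StreamPartitioner<K, V> {
        Integer partition(K key, V value, int numPartitions);
    }

    public static void main(String[] args) {
        // e.g. a partition-by-database scheme (as in Maxwell's CDC stream),
        // expressed as a lambda; modulo of the absolute hash is fine for a sketch.
        StreamPartitioner<String, String> byDatabase =
            (dbName, event, n) -> Math.abs(dbName.hashCode()) % n;
        System.out.println(byDatabase.partition("shop_db", "{}", 4));
    }
}
```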

2. For now, there are two workarounds:

1) Use the `process()` function to manually write to Kafka with customized
partitioning:

RecordCollector collector = ((RecordCollector.Supplier)
context).recordCollector();
collector.send(...);


2) You can also call addSink() in KStreamBuilder, since it extends
TopologyBuilder, but you need to know the upstream processor name (i.e.
the parent processor name), which is auto-created in KStreamImpl as
"ProcessorType-IndexSuffix"; this is a bit hacky.

Guozhang


On Fri, Mar 18, 2016 at 7:38 AM, Ben Osheroff 
wrote:

> (lemme know if this belongs on the users email list, I'm not sure where
> API questions fall)
>
> Hi, I'm Ben Osheroff, I wrote Maxwell
> (http://github.com/zendesk/maxwell) and have been prototyping an engine
> to do arbitrary denormalizations of Maxwell's CDC events based on the
> kafka-streams library; the elevator pitch is that you can write SQL
> joins which the engine compiles down to stream-joins and aggregations
> and such.
>
> Maxwell partitions its stream by mysql database name, which means that
> to do stream-joins I need to implement the same (custom) partitioning
> algorithm somewhere in my stream processor. I'd prefer not to drop down to 
> the lower-level `addSink()` library calls if possible, and I can't 
> figure out how to mix and match the lower-level calls with the 
> higher-level DSL (map/filter/etc).
>
> So I guess I have two questions:
>
> 1. Is it somehow possible to add a custom `Sink` to an otherwise high
> level stream topology?  There's no obvious way to retrieve the topology
> names that I can see.
>
> 2. If not, I'd like to make a feature request that the various stream
> building functions (.to, .through) accept an optional
> StreamPartitioner.
>
> 3. Any other ideas about how to pull this off?
>
> Thanks!
>
>
> - Ben Osheroff
> zendesk.com
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk8 #464

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: fix documentation version

[me] MINOR: update new version in additional places

--
[...truncated 1610 lines...]

[jira] [Commented] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-21 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205119#comment-15205119
 ] 

Gwen Shapira commented on KAFKA-3441:
-

The most obvious part was fixed here:
https://github.com/apache/kafka/pull/1107

As Grant pointed out, we still need to fix:
* Quickstart links
* Javadoc (which will also require copying the javadoc to the docs site, it 
isn't there currently)

> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Priority: Blocker
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] 0.10.0.0 RC0

2016-03-21 Thread Gwen Shapira
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 0.10.0.0.
This is a major release that includes: (1) a new message format including
timestamps, (2) the client interceptor API, and (3) Kafka Streams. Since this
is a major release, we will give people more time to try it out and give
feedback.

Release notes for the 0.10.0.0 release:
http://home.apache.org/~gwenshap/0.10.0.0-rc0/RELEASE_NOTES.HTML

*** Please download, test and vote by Monday, March 28, 9am PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~gwenshap/0.10.0.0-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* scala-doc
http://home.apache.org/~gwenshap/0.10.0.0-rc0/scaladoc

* java-doc
http://home.apache.org/~gwenshap/0.10.0.0-rc0/javadoc/

* tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=72fd542633a95a8bd5bdc9fdca56042b643cb4b0

* Documentation:
http://kafka.apache.org/0100/documentation.html

* Protocol:
http://kafka.apache.org/0100/protocol.html


Thanks,

Gwen


Build failed in Jenkins: kafka-trunk-jdk7 #1134

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: fix documentation version

[me] MINOR: update new version in additional places

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 4f0417931a71a974241e685031f2a5f1680e9b51 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4f0417931a71a974241e685031f2a5f1680e9b51
 > git rev-list b6c29e3810bd59f39fa93c429817396cf8c324b7 # timeout=10
Setting GRADLE_2_4_RC_2_HOME=
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2148391791234199210.sh
+ /bin/gradle
/tmp/hudson2148391791234199210.sh: line 2: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
Setting GRADLE_2_4_RC_2_HOME=
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 2 days 21 hr old

Setting GRADLE_2_4_RC_2_HOME=
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: MINOR: update new version in additional places

2016-03-21 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1110

MINOR: update new version in additional places

matching set of version fixes. @ewencp @junrao 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka minor-fix-version-010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1110.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1110


commit 59498c03024d10d09e7702957ddb46dec5c71034
Author: Gwen Shapira 
Date:   2016-03-21T20:40:13Z

MINOR: update new version in additional places




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2016-03-21 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205083#comment-15205083
 ] 

Ewen Cheslack-Postava commented on KAFKA-1173:
--

[~gwenshap] Maybe? I had been thinking of our Vagrantfile as a tool for Kafka 
developers. Technically I guess it gets shipped with the source version. It 
doesn't get shipped with the binary versions afaik, which may be confusing.

I guess it's also a question of whether we want to treat it as "supported"...

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1173-JMX.patch, KAFKA-1173.patch, 
> KAFKA-1173_2013-12-07_12:07:55.patch, KAFKA-1173_2014-11-11_13:50:55.patch, 
> KAFKA-1173_2014-11-12_11:32:09.patch, KAFKA-1173_2014-11-18_16:01:33.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: update new version in additional places

2016-03-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1109


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: update new version in additional places

2016-03-21 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1109

MINOR: update new version in additional places

Note: This goes only to trunk. 0.10.0 branch will need a separate PR with 
different versions.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka minor-fix-version-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1109.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1109


commit 3343ed2b317d3aea91b2ba9c9a2e6ce1b0fe324e
Author: Gwen Shapira 
Date:   2016-03-21T20:23:52Z

MINOR: update new version in additional places




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3283) Consider marking the new consumer as production-ready

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3283:
---
Description: 
Ideally, we would:

* Remove the beta label
* Fill any critical gaps in functionality
* Update the documentation on the old consumers to recommend the new consumer 
(without deprecating the old consumer, however)

The new consumer already looks pretty good in 0.9.0.1 so it's feasible that we 
may be able to do this for 0.10.0.0. 

  was:
Ideally, we would:

* Remove the beta label
* Remove the `Unstable` annotation
* Fill any critical gaps in functionality
* Update the documentation on the old consumers to recommend the new consumer 
(without deprecating the old consumer, however)

The new consumer already looks pretty good in 0.9.0.1 so it's feasible that we 
may be able to do this for 0.10.0.0. 


> Consider marking the new consumer as production-ready
> -
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> The new consumer already looks pretty good in 0.9.0.1 so it's feasible that 
> we may be able to do this for 0.10.0.0. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3412) Multiple commitAsync() calls causes SendFailedException in commit callback

2016-03-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15205039#comment-15205039
 ] 

ASF GitHub Bot commented on KAFKA-3412:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1108

KAFKA-3412: multiple asynchronous commits causes send failures



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3412

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1108


commit b296635d9af6fd83b30bf6aecdbcb0f1238f0c9c
Author: Jason Gustafson 
Date:   2016-03-21T20:14:23Z

KAFKA-3412: multiple asynchronous commits causes send failures




> Multiple commitAsync() calls causes SendFailedException in commit callback
> --
>
> Key: KAFKA-3412
> URL: https://issues.apache.org/jira/browse/KAFKA-3412
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> If the user calls commitAsync() multiple times between poll() calls, some of 
> them will succeed, but many will be rejected with SendFailedException. This 
> is basically the result of NetworkClient only accepting one request to be 
> sent at a time and the higher level ConsumerNetworkClient not retrying after 
> send failures.
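The failure mode described above can be sketched with a toy model (a
hypothetical Python sketch, not Kafka code: the class name and return values
are illustrative): a client that admits at most one in-flight request and
does not retry rejected sends will fail every commit after the first until a
poll completes the outstanding request.

```python
class OneInFlightClient:
    """Toy model of a network layer that accepts at most one in-flight
    request (like NetworkClient) and does not retry rejected sends
    (like the pre-fix ConsumerNetworkClient behavior described above)."""

    def __init__(self):
        self.in_flight = False

    def try_send(self, request: str) -> str:
        if self.in_flight:
            return "SendFailedException"  # rejected immediately, no retry
        self.in_flight = True
        return "sent"

    def poll(self) -> None:
        self.in_flight = False  # response processed, slot freed

client = OneInFlightClient()
results = [client.try_send(f"commit-{i}") for i in range(3)]
# only the first commit between polls is accepted
```

In this model, calling poll() between sends frees the slot, which mirrors why
commits issued one per poll() succeed while back-to-back commits fail.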



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3412: multiple asynchronous commits caus...

2016-03-21 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1108

KAFKA-3412: multiple asynchronous commits causes send failures



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3412

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1108


commit b296635d9af6fd83b30bf6aecdbcb0f1238f0c9c
Author: Jason Gustafson 
Date:   2016-03-21T20:14:23Z

KAFKA-3412: multiple asynchronous commits causes send failures




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: fix documentation version

2016-03-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1107


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: fix documentation version

2016-03-21 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/1107

MINOR: fix documentation version

This will need to be double-committed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka fix-doc-version

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1107.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1107


commit 5cc55406f1a1c1b6b911de893c9b252685b36305
Author: Gwen Shapira 
Date:   2016-03-21T20:11:02Z

MINOR: fix documentation version




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-21 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3441:

Description: 
See here: 
https://github.com/apache/kafka/blob/trunk/docs/documentation.html

And here:
http://kafka.apache.org/0100/documentation.html

This should be fixed in both trunk and 0.10.0 branch

> 0.10.0 documentation still says "0.9.0"
> ---
>
> Key: KAFKA-3441
> URL: https://issues.apache.org/jira/browse/KAFKA-3441
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Priority: Blocker
>
> See here: 
> https://github.com/apache/kafka/blob/trunk/docs/documentation.html
> And here:
> http://kafka.apache.org/0100/documentation.html
> This should be fixed in both trunk and 0.10.0 branch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3441) 0.10.0 documentation still says "0.9.0"

2016-03-21 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3441:
---

 Summary: 0.10.0 documentation still says "0.9.0"
 Key: KAFKA-3441
 URL: https://issues.apache.org/jira/browse/KAFKA-3441
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1133

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Add InterfaceStability.Unstable annotations to some Kafka 
Streams

--
[...truncated 1711 lines...]

Build failed in Jenkins: kafka-trunk-jdk8 #463

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Add InterfaceStability.Unstable annotations to some Kafka 
Streams

--
[...truncated 1610 lines...]

[GitHub] kafka pull request: MINOR: Add InterfaceStability.Unstable annotat...

2016-03-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1097


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3440) Add Javadoc for KTable (changelog stream) and KStream (record stream)

2016-03-21 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3440:


 Summary: Add Javadoc for KTable (changelog stream) and KStream 
(record stream)
 Key: KAFKA-3440
 URL: https://issues.apache.org/jira/browse/KAFKA-3440
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
 Fix For: 0.10.0.1


Currently we only have a one-liner in the {code}KTable{code} and {code}KStream{code} 
classes describing the changelog and record streams. We should have a more 
detailed explanation in the Javadocs as well, matching the web docs.

We also want to add more description for windowed {code}KTable{code}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3439) Document possible exception thrown in public APIs

2016-03-21 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3439:


 Summary: Document possible exception thrown in public APIs
 Key: KAFKA-3439
 URL: https://issues.apache.org/jira/browse/KAFKA-3439
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
 Fix For: 0.10.0.1


Candidate interfaces include all the ones in "kstream", "processor" and "state" 
packages.





[jira] [Commented] (KAFKA-3437) We don't need sitedocs package for every scala version

2016-03-21 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204906#comment-15204906
 ] 

Gwen Shapira commented on KAFKA-3437:
-

also, it looks like "releaseTarGzAll" generates the javadoc directory, but not 
scaladoc?

> We don't need sitedocs package for every scala version
> --
>
> Key: KAFKA-3437
> URL: https://issues.apache.org/jira/browse/KAFKA-3437
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Priority: Minor
>
> When running "./gradlew releaseTarGzAll", it generates a binary tarball for 
> every Scala version we support (good!) and also a sitedoc tarball for every 
> Scala version we support (useless).
> It would be nice to have a way to generate just one sitedoc tarball for our 
> release.





[jira] [Updated] (KAFKA-3388) Producer should only timeout a batch in the accumulator when metadata is missing.

2016-03-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3388:

Priority: Critical  (was: Blocker)

> Producer should only timeout a batch in the accumulator when metadata is 
> missing.
> -
>
> Key: KAFKA-3388
> URL: https://issues.apache.org/jira/browse/KAFKA-3388
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> In KIP-19 we are reusing request.timeout.ms to time out the batches in the 
> accumulator. The intent was to avoid batches sitting in the accumulator 
> forever when topic metadata is missing.
> Currently we are not checking whether metadata is available when we time out 
> the batches in the accumulator (although the comments say we will check the 
> metadata). This causes a problem: once a previous batch hits a request 
> timeout and gets retried, all subsequent batches will fail with a timeout 
> exception. We should only time out the batches in the accumulator when the 
> metadata of the partition is missing.
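The proposed fix amounts to gating batch expiry on metadata availability. A minimal sketch of that check (simplified pseudologic, not the producer's actual code; all names here are illustrative):

```python
import time

def should_expire(batch_created_at, request_timeout_ms, metadata_available, now=None):
    """Expire an accumulator batch only when its partition has no metadata
    (nowhere to send it); otherwise keep it and let the send path retry."""
    now = time.time() if now is None else now
    timed_out = (now - batch_created_at) * 1000 >= request_timeout_ms
    return timed_out and not metadata_available

# A batch past the timeout whose partition still has a leader is kept:
print(should_expire(0, 30_000, metadata_available=True, now=60))   # False
# The same batch with missing metadata is expired:
print(should_expire(0, 30_000, metadata_available=False, now=60))  # True
```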





[jira] [Created] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2016-03-21 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-3438:
---

 Summary: Rack Aware Replica Reassignment should warn of overloaded 
brokers
 Key: KAFKA-3438
 URL: https://issues.apache.org/jira/browse/KAFKA-3438
 Project: Kafka
  Issue Type: Improvement
Reporter: Ben Stopford


We've changed the replica reassignment code to be rack aware.

One problem that might catch users out would be that they rebalance the cluster 
using kafka-reassign-partitions.sh but their rack configuration means that some 
high proportion of replicas are pushed onto a single, or small number of, 
brokers. 

This should be an easy problem to avoid, by changing the rack assignment 
information, but we should probably warn users if they are going to create 
something that is unbalanced. 

So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
awareness enabled. If I add a 13th machine, on a new rack, and run the 
rebalance tool, that new machine will get ~6x as many replicas as the least 
loaded broker. 

Suggest a warning be added to the tool output when --generate is called: "The 
most loaded broker has 2.3x as many replicas as the least loaded broker. 
This is likely due to an uneven distribution of brokers across racks. You're 
advised to alter the rack config so there are approximately the same number of 
brokers per rack", and display the individual rack→#brokers and 
broker→#replicas data for the proposed move.
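Such a check could be computed from the proposed assignment alone. A sketch, assuming a simple broker→#replicas map as input (not the tool's actual data structures):

```python
def imbalance_warning(replicas_per_broker, threshold=1.5):
    """Return a warning string if the most loaded broker carries more than
    `threshold` times the replicas of the least loaded one, else None."""
    counts = replicas_per_broker.values()
    ratio = max(counts) / min(counts)
    if ratio <= threshold:
        return None
    return ("The most loaded broker has %.1fx as many replicas as the least "
            "loaded broker. This is likely due to an uneven distribution of "
            "brokers across racks." % ratio)

# 12 brokers spread over two racks, plus one new broker on its own rack:
proposed = {b: 10 for b in range(12)}
proposed[12] = 60  # the new rack absorbs ~6x the replicas
print(imbalance_warning(proposed))
```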








[jira] [Work started] (KAFKA-3412) Multiple commitAsync() calls causes SendFailedException in commit callback

2016-03-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3412 started by Jason Gustafson.
--
> Multiple commitAsync() calls causes SendFailedException in commit callback
> --
>
> Key: KAFKA-3412
> URL: https://issues.apache.org/jira/browse/KAFKA-3412
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> If the user calls commitAsync() multiple times between poll() calls, some of 
> them will succeed, but many will be rejected with SendFailedException. This 
> is basically the result of NetworkClient only accepting one request to be 
> sent at a time and the higher level ConsumerNetworkClient not retrying after 
> send failures.
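The failure mode can be reproduced without a broker by modeling the one-request-per-node constraint. A toy sketch (illustrative names, not Kafka's actual classes):

```python
class SendFailedException(Exception):
    pass

class ToyNetworkClient:
    """Accepts at most one queued (unsent) request per node, mimicking
    NetworkClient; later sends to the same node are rejected immediately."""
    def __init__(self):
        self.unsent = {}

    def send(self, node, callback):
        if node in self.unsent:          # a request is already queued
            callback(SendFailedException())
            return
        self.unsent[node] = callback

    def poll(self):                      # "sends" queued requests
        for cb in self.unsent.values():
            cb(None)                     # success
        self.unsent.clear()

results = []
client = ToyNetworkClient()
for _ in range(3):                       # three commitAsync() between polls
    client.send("coordinator", results.append)
client.poll()
print(results)  # [SendFailedException(), SendFailedException(), None]
```

Only the first commit makes it out; the other two fail at send time because nothing retries them, which matches the behavior described above.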





[jira] [Created] (KAFKA-3437) We don't need sitedocs package for every scala version

2016-03-21 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3437:
---

 Summary: We don't need sitedocs package for every scala version
 Key: KAFKA-3437
 URL: https://issues.apache.org/jira/browse/KAFKA-3437
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Priority: Minor


When running "./gradlew releaseTarGzAll", it generates a binary tarball for 
every Scala version we support (good!) and also a sitedoc tarball for every 
Scala version we support (useless).

It would be nice to have a way to generate just one sitedoc tarball for our 
release.





[jira] [Commented] (KAFKA-3319) Improve session timeout broker and client configuration documentation

2016-03-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204833#comment-15204833
 ] 

ASF GitHub Bot commented on KAFKA-3319:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1106

KAFKA-3319: improve session timeout broker/client config documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3319

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1106


commit fd67e0929ee00065446e7d57c80f8e54d801bfc5
Author: Jason Gustafson 
Date:   2016-03-21T18:32:03Z

KAFKA-3319: improve session timeout broker/client config documentation




> Improve session timeout broker and client configuration documentation
> -
>
> Key: KAFKA-3319
> URL: https://issues.apache.org/jira/browse/KAFKA-3319
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The current descriptions of the consumer's session timeout and the broker's 
> group min and max session timeouts are very matter-of-fact: they define 
> exactly what the configuration is and nothing else. We should provide more 
> detail about why these settings exist and why a user might need to change 
> them.





[GitHub] kafka pull request: KAFKA-3319: improve session timeout broker/cli...

2016-03-21 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1106

KAFKA-3319: improve session timeout broker/client config documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3319

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1106


commit fd67e0929ee00065446e7d57c80f8e54d801bfc5
Author: Jason Gustafson 
Date:   2016-03-21T18:32:03Z

KAFKA-3319: improve session timeout broker/client config documentation






[jira] [Updated] (KAFKA-3319) Improve session timeout broker and client configuration documentation

2016-03-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3319:
---
Description: The current descriptions of the consumer's session timeout and 
the broker's group min and max session timeouts are very matter-of-fact: they 
define exactly what the configuration is and nothing else. We should provide 
more detail about why these settings exist and why a user might need to change 
them.  (was: The current descriptions of the group min and max session timeouts 
are very matter-of-fact: they define exactly what the configuration is and 
nothing else. We should provide more detail about why these settings exist and 
why a user might need to change them.)

> Improve session timeout broker and client configuration documentation
> -
>
> Key: KAFKA-3319
> URL: https://issues.apache.org/jira/browse/KAFKA-3319
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The current descriptions of the consumer's session timeout and the broker's 
> group min and max session timeouts are very matter-of-fact: they define 
> exactly what the configuration is and nothing else. We should provide more 
> detail about why these settings exist and why a user might need to change 
> them.





[jira] [Updated] (KAFKA-3319) Improve session timeout broker and client configuration documentation

2016-03-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3319:
---
Summary: Improve session timeout broker and client configuration 
documentation  (was: Improve description of group min and max session timeouts)

> Improve session timeout broker and client configuration documentation
> -
>
> Key: KAFKA-3319
> URL: https://issues.apache.org/jira/browse/KAFKA-3319
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The current descriptions of the group min and max session timeouts are very 
> matter-of-fact: they define exactly what the configuration is and nothing 
> else. We should provide more detail about why these settings exist and why a 
> user might need to change them.





[jira] [Updated] (KAFKA-3436) Speed up controlled shutdown.

2016-03-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3436:

Description: 
Currently, rolling-bouncing a Kafka cluster with tens of thousands of partitions 
can take very long (~2 min for each broker with ~5000 partitions/broker in our 
environment). The majority of the time is spent on shutting down a broker. The 
time to shut down a broker usually includes the following parts:

T1: During controlled shutdown, people usually want to make sure there are no 
under-replicated partitions. So shutting down a broker during a rolling bounce 
has to wait for the previously restarted broker to catch up. This is T1.

T2: The time to send the controlled shutdown request and receive the controlled 
shutdown response. Currently a controlled shutdown request triggers many 
LeaderAndIsrRequests and UpdateMetadataRequests, and also involves many 
ZooKeeper updates in serial.

T3: The actual time to shut down all the components. It is usually small 
compared with T1 and T2.

T1 is related to:
A) the inbound throughput on the cluster, and 
B) the "down" time of the broker (the time between replica fetchers stopping and 
replica fetchers restarting)
The larger the traffic is, or the longer the broker stops fetching, the 
longer it will take for the broker to catch up and get back into the ISR, and 
therefore the longer T1 will be. Assume:
* the inbound network traffic is X bytes/second on a broker
* the time T1.B ("down" time) mentioned above is T
Theoretically it will take (X * T) / (NetworkBandwidth - X) = 
InboundNetworkUtilization * T / (1 - InboundNetworkUtilization) for the 
broker to catch up after the restart. While X is out of our control, T is 
largely related to T2.

The purpose of this ticket is to reduce T2 by:
1. Batching the LeaderAndIsrRequests and UpdateMetadataRequests during 
controlled shutdown.
2. Using async ZooKeeper writes to pipeline ZooKeeper updates. According to the 
ZooKeeper wiki (https://wiki.apache.org/hadoop/ZooKeeper/Performance), a 3-node 
ZK cluster should be able to handle 20K writes (1K size). So if we use async 
writes, we will likely be able to reduce the ZooKeeper update time to low 
seconds or even sub-second level.
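The catch-up estimate above can be sanity-checked numerically. A quick sketch with made-up traffic and bandwidth figures:

```python
def catchup_seconds(inbound_bytes_per_sec, bandwidth_bytes_per_sec, down_seconds):
    """Time for a restarted broker to rejoin the ISR: the backlog accumulated
    while down (X * T) is drained at the spare bandwidth (NetworkBandwidth - X)."""
    x, t = inbound_bytes_per_sec, down_seconds
    return (x * t) / (bandwidth_bytes_per_sec - x)

# Hypothetical numbers: 50 MB/s inbound on a 125 MB/s (1 Gbps) link,
# broker "down" (not fetching) for 120 seconds:
print(catchup_seconds(50e6, 125e6, 120))  # 80.0
```

Equivalently, at 40% inbound utilization: 0.4 * 120 / (1 - 0.4) = 80 seconds, so shrinking T2 shrinks T and hence T1 roughly proportionally.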


  was:
Currently rolling bounce a Kafka cluster with tens of thousands of partitions 
can take very long (~2 min for each broker with ~5000 partitions/broker in our 
environment). The time of shutting down a broker usually  includes the 
following parts:

T1: During controlled shutdown, people usually want to make sure there is no 
under replicated partitions. So shutting down a broker during a rolling bounce 
will have to wait for the previous restarted broker to catch up. This is T1.

T2: The time to send controlled shutdown request and receive controlled 
shutdown response. Currently the a controlled shutdown request will trigger 
many LeaderAndIsrRequest and UpdateMetadataRequest. And also involving many 
zookeeper update in serial.

T3: The actual time to shutdown all the components. It is usually small 
compared with T1 and T2.

T1 is related to:
A) the inbound throughput on the cluster, and 
B) the "down" time of the broker (time between replica fetchers stop and 
replica fetchers restart)
The larger the traffic is, or the longer the broker stopped fetching, the 
longer it will take for the broker to catch up and get back into ISR. Therefore 
the longer T1 will be. Assume:
* the in bound network traffic is X bytes/second on a broker
* the time T1.B ("down" time) mentioned above is T
Theoretically it will take (X * T) / (NetworkBandwidth - X) = 
InBoundNetworkUtilization * T / (1 - InboundNetworkUtilization) for a the 
broker to catch up after the restart. While X is out of our control, T is 
largely related to T2.

The purpose of this ticket is to reduce T2 by:
1. Batching the LeaderAndIsrRequest and UpdateMetadataRequest during controlled 
shutdown.
2. Use async zookeeper write to pipeline zookeeper writes. According to 
Zookeeper wiki(https://wiki.apache.org/hadoop/ZooKeeper/Performance), a 3 node 
ZK cluster should be able to handle 20K writes (1K size). So if we use async 
write, likely we will be able to reduce zookeeper update time to lower seconds 
or even sub-second level.



> Speed up controlled shutdown.
> -
>
> Key: KAFKA-3436
> URL: https://issues.apache.org/jira/browse/KAFKA-3436
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> Currently rolling bounce a Kafka cluster with tens of thousands of partitions 
> can take very long (~2 min for each broker with ~5000 partitions/broker in 
> our environment). The majority of the time is spent on shutting down a 
> broker. The time of shutting down a broker usually  includes the followi

[jira] [Created] (KAFKA-3436) Speed up controlled shutdown.

2016-03-21 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-3436:
---

 Summary: Speed up controlled shutdown.
 Key: KAFKA-3436
 URL: https://issues.apache.org/jira/browse/KAFKA-3436
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Fix For: 0.10.1.0


Currently, rolling-bouncing a Kafka cluster with tens of thousands of partitions 
can take very long (~2 min for each broker with ~5000 partitions/broker in our 
environment). The time to shut down a broker usually includes the 
following parts:

T1: During controlled shutdown, people usually want to make sure there are no 
under-replicated partitions. So shutting down a broker during a rolling bounce 
has to wait for the previously restarted broker to catch up. This is T1.

T2: The time to send the controlled shutdown request and receive the controlled 
shutdown response. Currently a controlled shutdown request triggers many 
LeaderAndIsrRequests and UpdateMetadataRequests, and also involves many 
ZooKeeper updates in serial.

T3: The actual time to shut down all the components. It is usually small 
compared with T1 and T2.

T1 is related to:
A) the inbound throughput on the cluster, and 
B) the "down" time of the broker (the time between replica fetchers stopping and 
replica fetchers restarting)
The larger the traffic is, or the longer the broker stops fetching, the 
longer it will take for the broker to catch up and get back into the ISR, and 
therefore the longer T1 will be. Assume:
* the inbound network traffic is X bytes/second on a broker
* the time T1.B ("down" time) mentioned above is T
Theoretically it will take (X * T) / (NetworkBandwidth - X) = 
InboundNetworkUtilization * T / (1 - InboundNetworkUtilization) for the 
broker to catch up after the restart. While X is out of our control, T is 
largely related to T2.

The purpose of this ticket is to reduce T2 by:
1. Batching the LeaderAndIsrRequests and UpdateMetadataRequests during 
controlled shutdown.
2. Using async ZooKeeper writes to pipeline ZooKeeper updates. According to the 
ZooKeeper wiki (https://wiki.apache.org/hadoop/ZooKeeper/Performance), a 3-node 
ZK cluster should be able to handle 20K writes (1K size). So if we use async 
writes, we will likely be able to reduce the ZooKeeper update time to low 
seconds or even sub-second level.






Build failed in Jenkins: kafka-trunk-jdk7 #1132

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[cshapi] Changing version to 0.10.1.0-SNAPSHOT

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 702d560c555bf7121eca02010adedd4986e36b87 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 702d560c555bf7121eca02010adedd4986e36b87
 > git rev-list 4f048c4f194a90ded5f0df35e4e23379272d5bc6 # timeout=10
Setting GRADLE_2_4_RC_2_HOME=
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5183217228617084612.sh
+ /bin/gradle
/tmp/hudson5183217228617084612.sh: line 2: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
Setting GRADLE_2_4_RC_2_HOME=
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 3 days 16 hr old

Setting GRADLE_2_4_RC_2_HOME=
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Created] (KAFKA-3435) Remove `Unstable` annotation from new Java Consumer

2016-03-21 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3435:
--

 Summary: Remove `Unstable` annotation from new Java Consumer
 Key: KAFKA-3435
 URL: https://issues.apache.org/jira/browse/KAFKA-3435
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
 Fix For: 0.10.0.0


As part of the vote for "KIP-45 - Standardize all client sequence interaction 
on j.u.Collection", the underlying assumption is that we won't break things 
going forward. We should remove the `Unstable` annotation to make that clear.

cc [~hachikuji]





Build failed in Jenkins: kafka-trunk-jdk8 #462

2016-03-21 Thread Apache Jenkins Server
See 

Changes:

[cshapi] Changing version to 0.10.1.0-SNAPSHOT

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 702d560c555bf7121eca02010adedd4986e36b87 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 702d560c555bf7121eca02010adedd4986e36b87
 > git rev-list 4f048c4f194a90ded5f0df35e4e23379272d5bc6 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting GRADLE_2_4_RC_2_HOME=
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson9124913373578182087.sh
+ /bin/gradle
/tmp/hudson9124913373578182087.sh: line 2: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting GRADLE_2_4_RC_2_HOME=
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 2 days 13 hr old

Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting GRADLE_2_4_RC_2_HOME=


[GitHub] kafka pull request: MINOR: add back old constructor of ConsumerRec...

2016-03-21 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/1027




[jira] [Commented] (KAFKA-2670) add sampling rate to MirrorMaker

2016-03-21 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204620#comment-15204620
 ] 

Dustin Cote commented on KAFKA-2670:


I agree with Joel; it's probably better to keep MirrorMaker simple, 
especially because the only way I can see to do this is to add another 
branch statement into the hot code path for processing events, which marginally 
slows MirrorMaker down. I'm going to check out that interceptor API in more 
detail; maybe it makes sense to add an example that would accomplish the 
same thing. For example, intercept every X number of events and /dev/null them 
instead of letting them make it to the target cluster. When I have a better 
handle on that I will make a pull request.
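The intercept-every-Nth-event idea sketches out simply; below is a Python stand-in for the sampling logic (the real hook would be a Java interceptor or a MirrorMaker message handler, so these names are purely illustrative):

```python
class Sampler:
    """Keep 1 of every `n` records (1:1 pass-through when n == 1),
    dropping the rest instead of forwarding them to the target cluster."""
    def __init__(self, n):
        self.n = n
        self.seen = 0

    def keep(self, record):
        self.seen += 1
        return self.seen % self.n == 0

sampler = Sampler(10)  # mirror ~10% of the traffic
kept = [r for r in range(100) if sampler.keep(r)]
print(len(kept))  # 10
```

A per-topic map of Sampler instances would give the per-topic ratio the ticket asks for.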

> add sampling rate to MirrorMaker
> 
>
> Key: KAFKA-2670
> URL: https://issues.apache.org/jira/browse/KAFKA-2670
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Reporter: Christian Tramnitz
>Priority: Minor
>
> MirrorMaker could be used to copy data to different Kafka instances in 
> different environments (i.e. from production to development or test), but 
> often these are not at the same scale as production. A sampling rate could be 
> introduced to MirrorMaker to define a ratio of data to be copied (per topic) 
> to downstream instances. Of course this should be 1:1 (or 100%) by default.





[jira] [Updated] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3434:
---
Fix Version/s: 0.10.0.0

> Add old ConsumerRecord constructor for compatibility
> 
>
> Key: KAFKA-3434
> URL: https://issues.apache.org/jira/browse/KAFKA-3434
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> After KIP-42, several new fields have been added to ConsumerRecord, all of 
> which are passed through the only constructor. It would be nice to add back 
> the old constructor for compatibility and convenience.





[jira] [Created] (KAFKA-3434) Add old ConsumerRecord constructor for compatibility

2016-03-21 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3434:
--

 Summary: Add old ConsumerRecord constructor for compatibility
 Key: KAFKA-3434
 URL: https://issues.apache.org/jira/browse/KAFKA-3434
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.0.0
Reporter: Jason Gustafson
Assignee: Jason Gustafson


After KIP-42, several new fields have been added to ConsumerRecord, all of 
which are passed through the only constructor. It would be nice to add back the 
old constructor for compatibility and convenience.
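One way to restore the old constructor is to have it delegate to the full one with defaults for the new fields. A sketch in Python using default arguments (field names are illustrative, not the exact ConsumerRecord signature):

```python
NO_TIMESTAMP = -1  # sentinel for "timestamp not set"

class Record:
    """The full constructor takes the newly added fields; older call sites
    can omit them and get defaults, mirroring a delegating 'old' constructor
    overload in Java."""
    def __init__(self, topic, partition, offset, key, value,
                 timestamp=NO_TIMESTAMP, checksum=None):
        self.topic, self.partition, self.offset = topic, partition, offset
        self.key, self.value = key, value
        self.timestamp, self.checksum = timestamp, checksum

# An "old-style" call site keeps working unchanged:
r = Record("test", 0, 42, b"k", b"v")
print(r.timestamp)  # -1
```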





New release branch 0.10.0

2016-03-21 Thread Gwen Shapira
Hello Kafka developers and friends,

As promised, we now have a release branch for 0.10.0 release (with 0.10.0.0
as the version).
Trunk has been bumped to 0.10.1.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release to
the next release.

From this point, most changes should go to trunk.
Blockers (existing and new ones that we discover while testing the release)
will be double-committed. Please discuss with your reviewer whether your
PR should go to trunk or to trunk+release so they can merge accordingly.

Please help us test the release!
Things you can focus on:
* Upgrades! Does a rolling upgrade (from 0.9.0.1 or 0.8.2.2) work as
intended?
* Compatibility! Do the old clients work as expected?
* Interceptors! It's a new feature; try to use it and tell us how it goes.
* Purging on replicas and mirrors created from scratch - with the new
timestamp field, old data in new replicas should be cleaned at the same time
as the original (and not relative to the time the data was replicated) -
does it actually work?
* Cleaning to a specific timestamp - new feature! Does it work as intended?
* Documentation: Does it still work? Are we missing anything?

Thanks!

Gwen


[jira] [Updated] (KAFKA-1342) Slow controlled shutdowns can result in stale shutdown requests

2016-03-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1342:

Assignee: Joel Koshy  (was: Jiangjie Qin)

> Slow controlled shutdowns can result in stale shutdown requests
> ---
>
> Key: KAFKA-1342
> URL: https://issues.apache.org/jira/browse/KAFKA-1342
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Joel Koshy
>Assignee: Joel Koshy
>Priority: Blocker
>  Labels: newbie++, newbiee
> Fix For: 0.10.1.0
>
>
> I don't think this is a bug introduced in 0.8.1., but triggered by the fact
> that controlled shutdown seems to have become slower in 0.8.1 (will file a
> separate ticket to investigate that). When doing a rolling bounce, it is
> possible for a bounced broker to stop all its replica fetchers since the
> previous PID's shutdown requests are still being processed.
> - 515 is the controller
> - Controlled shutdown initiated for 503
> - Controller starts controlled shutdown for 503
> - The controlled shutdown takes a long time in moving leaders and moving
>   follower replicas on 503 to the offline state.
> - So 503's read from the shutdown channel times out and a new channel is
>   created. It issues another shutdown request.  This request (since it is a
>   new channel) is accepted at the controller's socket server but then waits
>   on the broker shutdown lock held by the previous controlled shutdown which
>   is still in progress.
> - The above step repeats for the remaining retries (six more requests).
> - 503 hits SocketTimeout exception on reading the response of the last
>   shutdown request and proceeds to do an unclean shutdown.
> - The controller's onBrokerFailure call-back fires and moves 503's replicas
>   to offline (not too important in this sequence).
> - 503 is brought back up.
> - The controller's onBrokerStartup call-back fires and moves its replicas
>   (and partitions) to online state. 503 starts its replica fetchers.
> - Unfortunately, the (phantom) shutdown requests are still being handled and
>   the controller sends StopReplica requests to 503.
> - The first shutdown request finally finishes (after 76 minutes in my case!).
> - The remaining shutdown requests also execute and do the same thing (send
>   StopReplica requests for all partitions to 503).
> - The remaining requests complete quickly because they end up not having to
>   touch zookeeper paths - no leaders left on the broker and no need to
>   shrink ISR in zookeeper since it has already been done by the first
>   shutdown request.
> - So in the end-state 503 is up, but effectively idle due to the previous
>   PID's shutdown requests.
> There are some obvious fixes that can be made to controlled shutdown to help
> address the above issue. E.g., we don't really need to move follower
> partitions to Offline. We did that as an "optimization" so the broker falls
> out of ISR sooner - which is helpful when producers set required.acks to -1.
> However it adds a lot of latency to controlled shutdown. Also, (more
> importantly) we should have a mechanism to abort any stale shutdown process.
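
The "abort stale shutdown" idea suggested above could work by tagging each controlled-shutdown request with the requesting broker's generation (e.g. its ZooKeeper session / PID) and having the controller drop anything from an older incarnation. A minimal stand-alone sketch of that check, not Kafka's actual controller code (the class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: the controller remembers the newest generation it
// has seen for a broker; queued shutdown requests carry the generation
// they were issued under and are dropped once a newer one registers.
class ShutdownCoordinator {
    private final AtomicLong latestGeneration = new AtomicLong(-1);

    // Called when the broker (re)registers; a newer generation invalidates
    // any shutdown requests still queued from the previous incarnation.
    void onBrokerRegistered(long generation) {
        latestGeneration.accumulateAndGet(generation, Math::max);
    }

    // True if the request is current, false if it is a stale "phantom".
    boolean shouldProcess(long requestGeneration) {
        return requestGeneration >= latestGeneration.get();
    }
}
```

With such a check, the phantom requests from 503's previous PID would be rejected the moment the bounced 503 re-registers, instead of executing after 76 minutes.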



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3400) Topic stop working / can't describe topic

2016-03-21 Thread Tobias (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204540#comment-15204540
 ] 

Tobias commented on KAFKA-3400:
---

I think the issue I've hit is
http://search-hadoop.com/m/uyzND1SNAr22xymh92&subj=Returning+an+error+code+to+the+producer+when+it+tries+writing+to+topic+being+deleted

I will monitor to see if a clean delete has fixed our issues

> Topic stop working / can't describe topic
> -
>
> Key: KAFKA-3400
> URL: https://issues.apache.org/jira/browse/KAFKA-3400
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Tobias
>Assignee: Ashish K Singh
> Fix For: 0.10.0.0
>
>
> we are seeing an issue where we intermittently (every couple of hours) get an
> error with certain topics. They stop working and producers throw a
> LeaderNotFoundException.
> When we then try to use kafka-topics.sh to describe the topic, we get the
> error below.
> Error while executing topic command : next on empty iterator
> {{
> [2016-03-15 17:30:26,231] ERROR java.util.NoSuchElementException: next on 
> empty iterator
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>   at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>   at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>   at scala.collection.AbstractIterable.head(Iterable.scala:54)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:198)
>   at 
> kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:188)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:188)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> }}
> if we delete the topic, then it will start to work again for a while
> We can't see anything obvious in the logs but are happy to provide if needed
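
The `NoSuchElementException: next on empty iterator` in the trace comes from calling head on an empty collection of partition metadata. A defensive sketch of the kind of guard that would turn this into a readable error (names are illustrative, not Kafka's actual TopicCommand code):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Optional;

// Sketch: instead of unconditionally taking the head of the partition
// metadata (which throws NoSuchElementException when the topic has none,
// e.g. mid-deletion), check for emptiness and report a clear message.
class DescribeTopic {
    static Optional<String> firstPartitionInfo(List<String> partitionMetadata) {
        Iterator<String> it = partitionMetadata.iterator();
        return it.hasNext() ? Optional.of(it.next()) : Optional.empty();
    }

    static String describe(List<String> partitionMetadata) {
        return firstPartitionInfo(partitionMetadata)
                .orElse("Topic has no partition metadata (possibly mid-deletion)");
    }
}
```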





[jira] [Commented] (KAFKA-1313) Support adding replicas to existing topic partitions

2016-03-21 Thread Bingkun Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204488#comment-15204488
 ] 

Bingkun Guo commented on KAFKA-1313:


+1 on supporting "replication-factor" option in "alter".

> Support adding replicas to existing topic partitions
> 
>
> Key: KAFKA-1313
> URL: https://issues.apache.org/jira/browse/KAFKA-1313
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.8.0
>Reporter: Marc Labbe
>Assignee: Geoff Anderson
>Priority: Critical
>  Labels: newbie++
> Fix For: 0.10.1.0
>
>
> There is currently no easy way to add replicas to existing topic
> partitions.
> For example, topic create-test has been created with ReplicationFactor=1: 
> Topic:create-test  PartitionCount:3ReplicationFactor:1 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1 Isr: 1
> Topic: create-test Partition: 1Leader: 2   Replicas: 2 Isr: 2
> Topic: create-test Partition: 2Leader: 3   Replicas: 3 Isr: 3
> I would like to increase the ReplicationFactor to 2 (or more) so it shows up
> like this instead.
> Topic:create-test  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1,2 Isr: 1,2
> Topic: create-test Partition: 1Leader: 2   Replicas: 2,3 Isr: 2,3
> Topic: create-test Partition: 2Leader: 3   Replicas: 3,1 Isr: 3,1
> Use cases for this:
> - adding brokers and thus increasing fault tolerance
> - fixing human errors for topics created with wrong values
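
Until `alter` supports a replication-factor option directly, the usual workaround is the partition-reassignment tool: write a JSON file listing the desired replica set per partition and run kafka-reassign-partitions.sh. A sketch for the create-test example above (broker IDs and the --zookeeper address are illustrative for this cluster):

```shell
# Desired end state: replication factor 2 for all three partitions.
cat > increase-rf.json <<'EOF'
{"version":1,
 "partitions":[
   {"topic":"create-test","partition":0,"replicas":[1,2]},
   {"topic":"create-test","partition":1,"replicas":[2,3]},
   {"topic":"create-test","partition":2,"replicas":[3,1]}
 ]}
EOF

# Start the reassignment; the controller replicates data to the new replicas.
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file increase-rf.json --execute

# Re-run with --verify until the reassignment completes.
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file increase-rf.json --verify
```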





[jira] [Commented] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204370#comment-15204370
 ] 

Ismael Juma commented on KAFKA-3409:


I am bumping the priority and setting the fix version to 0.10.0.0 because "MM 
hangs indefinitely" sounds bad. [~hachikuji], let me know if you disagree.

> Mirror maker hangs indefinitely due to commit 
> --
>
> Key: KAFKA-3409
> URL: https://issues.apache.org/jira/browse/KAFKA-3409
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.1
> Environment: Kafka 0.9.0.1
>Reporter: TAO XIAO
>Priority: Critical
> Fix For: 0.10.0.0
>
>
> Mirror maker hangs indefinitely upon receiving CommitFailedException. I
> believe this is because CommitFailedException is not caught by mirror maker,
> so mirror maker has no way to recover from it.
> A better approach would be to catch the exception and rejoin the group. Here
> is the stack trace:
> [2016-03-15 09:34:36,463] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2016-03-15 09:34:36,463] FATAL [mirrormaker-thread-3] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed due to group rebalance
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:358)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:968)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:548)
> at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:340)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.maybeFlushAndCommitOffsets(MirrorMaker.scala:438)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:399)
> [2016-03-15 09:34:36,463] INFO [mirrormaker-thread-3] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,464] INFO [mirrormaker-thread-3] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2016-03-15 09:34:36,477] ERROR Error UNKNOWN_MEMBER_ID occurred while 
> committing offsets for group x 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
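
The suggested fix (catch the exception and rejoin the group) amounts to wrapping the commitSync() call so a rebalance-induced commit failure skips that commit instead of killing the thread. A stand-alone model of the pattern, using a local exception class so it runs without a Kafka dependency; in MirrorMaker the catch would wrap the real KafkaConsumer.commitSync():

```java
// Sketch: a failed commit during a rebalance is non-fatal; drop the commit
// and let the next poll() rejoin the group rather than dying.
class CommitLoop {
    static class CommitFailedException extends RuntimeException {}

    interface Committer { void commitSync() throws CommitFailedException; }

    // Returns true if a commit succeeded within maxAttempts.
    static boolean commitWithRejoin(Committer committer, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                committer.commitSync();
                return true;
            } catch (CommitFailedException e) {
                // Group rebalanced underneath us: this commit is lost, but
                // the thread survives and retries after rejoining.
            }
        }
        return false;
    }
}
```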





[jira] [Updated] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3409:
---
Priority: Critical  (was: Major)






[jira] [Updated] (KAFKA-3409) Mirror maker hangs indefinitely due to commit

2016-03-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3409:
---
Fix Version/s: 0.10.0.0






[jira] [Created] (KAFKA-3433) Document stability annotations

2016-03-21 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3433:
--

 Summary: Document stability annotations
 Key: KAFKA-3433
 URL: https://issues.apache.org/jira/browse/KAFKA-3433
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Grant Henke


We should decide what the various stability (InterfaceStability) annotations 
mean to the user and document them both in code and on the official site docs. 

This came up as a part of the 
[KIP-45|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61337336]
 discussions 
([link|http://search-hadoop.com/m/uyzND18BCOU1SDHL92&subj=Re+VOTE+KIP+45+Standardize+KafkaConsumer+API+to+use+Collection])
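
For readers unfamiliar with the annotations in question, a self-contained sketch of what they look like and how a public class would carry one (modeled on Kafka's InterfaceStability; the exact package and member names may differ from the real code):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch of the three stability levels the docs would need to define:
class StabilityDemo {
    @Documented @Retention(RetentionPolicy.RUNTIME)
    @interface Stable {}      // compatible across minor releases

    @Documented @Retention(RetentionPolicy.RUNTIME)
    @interface Evolving {}    // may break compatibility at minor releases

    @Documented @Retention(RetentionPolicy.RUNTIME)
    @interface Unstable {}    // no compatibility guarantees

    // A hypothetical newly-introduced public API marked Evolving.
    @Evolving
    static class NewClientApi {}
}
```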





request for edit access to Kafka wiki

2016-03-21 Thread Ganesh Nikam
Hi All,

I want to publish a C++ Kafka client. I have my git repository ready. Now I
want to add an entry for this new client on the Kafka “Clients” page
(Confluence wiki page).

I created a Confluence login and signed in with it, but I am not able to
edit the page. Do I need to do anything else to get write access?

If you can give me write access, that would be very helpful. Here is my
Confluence user name:

User name: ganesh.nikam

Regards

Ganesh Nikam


[jira] [Updated] (KAFKA-3203) Add UnknownCodecException and UnknownMagicByteException to error mapping

2016-03-21 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3203:
---
Fix Version/s: (was: 0.10.0.0)

> Add UnknownCodecException and UnknownMagicByteException to error mapping
> 
>
> Key: KAFKA-3203
> URL: https://issues.apache.org/jira/browse/KAFKA-3203
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> Currently most of the exceptions seen by the user have an error code. While
> UnknownCodecException and UnknownMagicByteException can also be thrown to the
> client, the broker does not have an error mapping for them, so clients will
> only receive UnknownServerException, which is vague.
> We should create those two exceptions in the client package and add them to
> the error mapping.




