[jira] [Updated] (KAFKA-5545) Kafka Stream not able to successfully restart over new broker ip

2017-09-05 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-5545:
--
Fix Version/s: (was: 0.11.0.1)
   (was: 1.0.0)

> Kafka Stream not able to successfully restart over new broker ip
> 
>
> Key: KAFKA-5545
> URL: https://issues.apache.org/jira/browse/KAFKA-5545
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: kafkastreams.log
>
>
> Hi
> I have one Kafka broker and one Kafka Streams application.
> Initially the stream connects and starts processing data. Then I restart 
> the broker; on restart, the broker is assigned a new IP.
> In the stream application I have a thread that runs every 5 minutes and checks 
> whether the broker IP has changed. If it has, we clean up the stream, rebuild 
> the topology (also tried reusing the topology), and start the stream again. I 
> end up with the following exceptions.
> 11:04:08.032 [StreamThread-38] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-38] Creating active task 0_5 with assigned 
> partitions [PR-5]
> 11:04:08.033 [StreamThread-41] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-41] Creating active task 0_1 with assigned 
> partitions [PR-1]
> 11:04:08.036 [StreamThread-34] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-34] Creating active task 0_7 with assigned 
> partitions [PR-7]
> 11:04:08.036 [StreamThread-37] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-37] Creating active task 0_3 with assigned 
> partitions [PR-3]
> 11:04:08.036 [StreamThread-45] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-45] Creating active task 0_0 with assigned 
> partitions [PR-0]
> 11:04:08.037 [StreamThread-36] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-36] Creating active task 0_4 with assigned 
> partitions [PR-4]
> 11:04:08.037 [StreamThread-43] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-43] Creating active task 0_6 with assigned 
> partitions [PR-6]
> 11:04:08.038 [StreamThread-48] INFO  o.a.k.s.p.internals.StreamThread - 
> stream-thread [StreamThread-48] Creating active task 0_2 with assigned 
> partitions [PR-2]
> 11:04:09.034 [StreamThread-38] WARN  o.a.k.s.p.internals.StreamThread - Could 
> not create task 0_5. Will retry.
> org.apache.kafka.streams.errors.LockException: task [0_5] Failed to lock the 
> state directory for task 0_5
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:100)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:108)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.Consume
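The retryWithBackoff frame visible in the stack trace points at how Streams reacts here: task creation is retried while the previous owner of the state directory still holds its lock. A minimal, self-contained sketch of that retry pattern (illustrative names; not Kafka's actual implementation):

```java
import java.util.function.Supplier;

// Illustrative retry-with-backoff, modeled on the retryWithBackoff frame in
// the stack trace above. Not Kafka's actual code; names are hypothetical.
final class Backoff {
    static <T> T retry(Supplier<T> attempt, int maxRetries, long backoffMs)
            throws InterruptedException {
        RuntimeException last = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return attempt.get();          // e.g. create the task
            } catch (RuntimeException e) {     // e.g. a LockException
                last = e;
                Thread.sleep(backoffMs);       // back off before retrying
            }
        }
        throw last;                            // give up after maxRetries
    }
}
```

If the old threads never release the state-directory lock, every retry fails the same way, which matches the repeated "Will retry" warnings in the log.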

[jira] [Updated] (KAFKA-5440) Kafka Streams report state RUNNING even if all threads are dead

2017-09-05 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-5440:
--
Fix Version/s: (was: 0.11.0.1)
   (was: 1.0.0)

> Kafka Streams report state RUNNING even if all threads are dead
> ---
>
> Key: KAFKA-5440
> URL: https://issues.apache.org/jira/browse/KAFKA-5440
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1, 0.11.0.0
>Reporter: Matthias J. Sax
>
> From the mailing list:
> {quote}
> Hi All,
> We recently implemented a health check for a Kafka Streams based application. 
> The health check is simply checking the state of Kafka Streams by calling 
> KafkaStreams.state(). It reports healthy if it’s not in PENDING_SHUTDOWN or 
> NOT_RUNNING states. 
> We truly appreciate having the possibility to easily check the state of Kafka 
> Streams, but to our surprise we noticed that KafkaStreams.state() returns 
> RUNNING even though all StreamThreads have crashed and reached the NOT_RUNNING 
> state. Is this intended behaviour or is it a bug? Semantically it seems weird 
> to me that KafkaStreams would say it's RUNNING when it is in fact not 
> consuming anything, since all underlying worker threads have crashed. 
> If this is intended behaviour I would appreciate an explanation of why that 
> is the case. Also in that case, how could I determine if the consumption from 
> Kafka hasn't crashed? 
> If this is not intended behaviour, how fast could I expect it to be fixed? I 
> wouldn't mind fixing it myself but I'm not sure if this is considered trivial 
> or big enough to require a JIRA. Also, if I were to implement a fix I'd like 
> your input on what would be a reasonable solution. By just inspecting the code 
> I have an idea, but I'm not sure I understand all the implications, so I'd be 
> happy to hear your thoughts first. 
> {quote}
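The health check described in the quote can be modeled in a few lines. This sketch mirrors the KafkaStreams.State names in a local enum instead of depending on the Kafka Streams jar, so it is illustrative only; real code would call org.apache.kafka.streams.KafkaStreams#state().

```java
// Local mirror of KafkaStreams.State names (illustrative, not the real enum).
enum StreamsState { CREATED, RUNNING, REBALANCING, PENDING_SHUTDOWN, NOT_RUNNING }

final class HealthCheck {
    // Healthy unless shutting down or stopped -- exactly the check from the
    // mailing-list report. The reported bug: state() can still say RUNNING
    // after every StreamThread has died, so this check gives a false positive.
    static boolean isHealthy(StreamsState state) {
        return state != StreamsState.PENDING_SHUTDOWN
            && state != StreamsState.NOT_RUNNING;
    }
}
```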



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5835) CommitFailedException message is misleading and cause is swallowed

2017-09-05 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-5835:
---

 Summary: CommitFailedException message is misleading and cause is 
swallowed
 Key: KAFKA-5835
 URL: https://issues.apache.org/jira/browse/KAFKA-5835
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Stevo Slavic
Priority: Trivial


{{CommitFailedException}}'s message suggests that it can only be thrown as a 
consequence of rebalancing. The JavaDoc of {{CommitFailedException}} suggests 
otherwise: in general it can be thrown for any kind of unrecoverable failure 
from a {{KafkaConsumer#commitSync()}} call (e.g. if the offset being committed 
is invalid / out of range).

{{CommitFailedException}}'s message is misleading in that one can just see the 
message in logs and, without consulting the JavaDoc or source code, assume that 
the message is correct and that rebalancing is the only potential cause, wasting 
time debugging in the wrong direction.

Additionally, since {{CommitFailedException}} can be thrown for different 
reasons, the cause should not be swallowed. Swallowing it makes it impossible to 
handle each potential cause in a specific way. If the cause is another 
exception, please pass it as the cause, or construct an appropriate exception 
hierarchy with a specific exception for every failure cause and make 
{{CommitFailedException}} abstract.
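To illustrate the request, here is a sketch of an exception that preserves its cause so callers can branch on the actual failure. The class names are hypothetical stand-ins, not the real client classes.

```java
// Hypothetical stand-ins for the real client classes (illustrative only).
class OffsetOutOfRangeFailure extends RuntimeException {}

class CommitFailed extends RuntimeException {
    CommitFailed(String message, Throwable cause) {
        super(message, cause);   // keep the cause instead of swallowing it
    }
}

final class CommitErrorHandler {
    // With the cause attached, a caller can react per failure kind instead of
    // assuming every commit failure came from a rebalance.
    static String classify(CommitFailed e) {
        if (e.getCause() instanceof OffsetOutOfRangeFailure) {
            return "invalid-offset";
        }
        return "rebalance-or-unknown";
    }
}
```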





[jira] [Commented] (KAFKA-4366) KafkaStreams.close() blocks indefinitely

2017-09-05 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153390#comment-16153390
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-4366:
---

Questions: 

* If I use close(...) with a timeout, could it be the case that I lose 
messages?
* What exactly happens when the timeout is reached?
* What are the consequences of calling close(...) with a timeout?
* I have also had problems with a hanging close(), which in the background 
calls close(0); why can it hang?

Thanks in advance for answers :-).

> KafkaStreams.close() blocks indefinitely
> 
>
> Key: KAFKA-4366
> URL: https://issues.apache.org/jira/browse/KAFKA-4366
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1, 0.10.1.0
>Reporter: Michal Borowiecki
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
> Attachments: threadDump.log
>
>
> KafkaStreams.close() method calls join on all its threads without a timeout, 
> meaning indefinitely, which makes it prone to deadlocks and unfit to be used 
> in shutdown hooks.
> (KafkaStreams::close is used in numerous examples by confluent: 
> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0.1/kafka-streams/src/main/java/io/confluent/examples/streams
>  and 
> https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
>  so we assumed it to be recommended practice)
> A deadlock happens, for instance, if System.exit() is called from within the 
> uncaughtExceptionHandler. (We need to call System.exit() from the 
> uncaughtExceptionHandler because KAFKA-4355 issue shuts down the StreamThread 
> and to recover we want the process to exit, as our infrastructure will then 
> start it up again.)
> The System.exit call (from the uncaughtExceptionHandler, which runs in the 
> StreamThread) will execute the shutdown hook in a new thread and wait for 
> that thread to join. If the shutdown hook calls KafkaStreams.close, it will 
> in turn block waiting for the StreamThread to join, hence the deadlock.
> Runtime.addShutdownHook javadocs state:
> {quote}
> Shutdown hooks run at a delicate time in the life cycle of a virtual machine 
> and should therefore be coded defensively. They should, in particular, be 
> written to be thread-safe and to avoid deadlocks insofar as possible
> {quote}
> and
> {quote}
> Shutdown hooks should also finish their work quickly.
> {quote}
> Therefore the current implementation of KafkaStreams.close() which waits 
> forever for threads to join is completely unsuitable for use in a shutdown 
> hook. 
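The remedy implied by the report (a close variant that bounds the join) can be modeled without the Kafka jars. This is a sketch of the idea only, not the implementation that shipped:

```java
import java.util.concurrent.TimeUnit;

// Sketch of a bounded close: join the worker thread for at most `timeout`
// instead of forever, so a shutdown hook can never block indefinitely.
final class BoundedClose {
    // Returns true if the worker terminated within the timeout.
    static boolean close(Thread worker, long timeout, TimeUnit unit)
            throws InterruptedException {
        worker.join(unit.toMillis(timeout)); // bounded wait (join() waits forever)
        return !worker.isAlive();
    }
}
```

A caller in a shutdown hook can then log and proceed when the return value is false, rather than deadlocking on a thread that will never join.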





[jira] [Created] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread JIRA
Lovro Pandžić created KAFKA-5836:


 Summary: Kafka Streams - API for specifying internal stream name 
on join
 Key: KAFKA-5836
 URL: https://issues.apache.org/jira/browse/KAFKA-5836
 Project: Kafka
  Issue Type: New Feature
Reporter: Lovro Pandžić


Automatically generated topic names can be problematic when a streams operation 
changes or is migrated.
I'd like to be able to specify the name of an internal topic so I can avoid 
the creation of a new stream and data "loss" when changing how the stream is 
built.





[jira] [Created] (KAFKA-5837) ReassignPartitionsCommand fails if default throttle/timeout used

2017-09-05 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-5837:
-

 Summary: ReassignPartitionsCommand fails if default 
throttle/timeout used
 Key: KAFKA-5837
 URL: https://issues.apache.org/jira/browse/KAFKA-5837
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 1.0.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 1.0.0


System ReassignPartitionsTest.test_reassign_partitions failed with:

{quote}
Partitions reassignment failed due to java.lang.String cannot be cast to 
java.lang.Long
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Long
  at scala.runtime.BoxesRunTime.unboxToLong(BoxesRunTime.java:105)
  at 
kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:188)
  at 
kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:61)
  at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
{quote}

This is because the default throttle is being set as a String rather than a 
Long. The default throttle was never set properly in the command opts, but it 
didn't matter earlier because the code used to set it explicitly:
{quote}
val throttle = if (opts.options.has(opts.throttleOpt)) 
opts.options.valueOf(opts.throttleOpt) else -1
{quote}
But the commit 
https://github.com/apache/kafka/commit/adefc8ea076354e07839f0319fee1fba52343b91 
started using the default directly (and also added a timeout with default set 
in the same way).
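The failure mode is easy to reproduce in isolation: a boxed String default unboxed as a Long throws exactly this ClassCastException. A self-contained model (not the ReassignPartitionsCommand code itself):

```java
// Model of the bug: the option default was registered as the String "-1",
// but the command later unboxed the option value as a Long.
final class DefaultUnboxDemo {
    static long asLong(Object optionValue) {
        return (Long) optionValue;  // ClassCastException if the default was a String
    }
}
```

The fix is therefore to register the defaults with the option's declared type (a Long for the throttle, likewise for the timeout) so the later unboxing succeeds.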






[jira] [Commented] (KAFKA-5837) ReassignPartitionsCommand fails if default throttle/timeout used

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153885#comment-16153885
 ] 

ASF GitHub Bot commented on KAFKA-5837:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3792

KAFKA-5837: Set defaults for ReassignPartitionsCommand correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3792.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3792


commit ea852fc052bcb1bcabde249a315c006e7ef16d27
Author: Rajini Sivaram 
Date:   2017-09-05T15:51:32Z

KAFKA-5837: Set defaults for ReassignPartitionsCommand correctly




> ReassignPartitionsCommand fails if default throttle/timeout used
> 
>
> Key: KAFKA-5837
> URL: https://issues.apache.org/jira/browse/KAFKA-5837
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> System ReassignPartitionsTest.test_reassign_partitions failed with:
> {quote}
> Partitions reassignment failed due to java.lang.String cannot be cast to 
> java.lang.Long
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Long
>   at scala.runtime.BoxesRunTime.unboxToLong(BoxesRunTime.java:105)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:188)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:61)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> {quote}
> This is because default throttle is being set as a String rather than a Long. 
> Default throttle was never being set properly in the command opts, but it 
> didn't matter earlier because code used to set it explicitly:
> {quote}
> val throttle = if (opts.options.has(opts.throttleOpt)) 
> opts.options.valueOf(opts.throttleOpt) else -1
> {quote}
> But the commit 
> https://github.com/apache/kafka/commit/adefc8ea076354e07839f0319fee1fba52343b91
>  started using the default directly (and also added a timeout with default 
> set in the same way).





[jira] [Updated] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5836:
---
Labels: needs-kip  (was: )

> Kafka Streams - API for specifying internal stream name on join
> ---
>
> Key: KAFKA-5836
> URL: https://issues.apache.org/jira/browse/KAFKA-5836
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.11.0.0
>Reporter: Lovro Pandžić
>  Labels: needs-kip
>
> Automatic topic name can be problematic in case of streams operation 
> change/migration.
> I'd like to be able to specify name of an internal topic so I can avoid 
> creation of new stream and data "loss" when changing the Stream building.





[jira] [Updated] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5836:
---
Affects Version/s: 0.11.0.0

> Kafka Streams - API for specifying internal stream name on join
> ---
>
> Key: KAFKA-5836
> URL: https://issues.apache.org/jira/browse/KAFKA-5836
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.11.0.0
>Reporter: Lovro Pandžić
>  Labels: needs-kip
>
> Automatic topic name can be problematic in case of streams operation 
> change/migration.
> I'd like to be able to specify name of an internal topic so I can avoid 
> creation of new stream and data "loss" when changing the Stream building.





[jira] [Commented] (KAFKA-5037) Infinite loop if all input topics are unknown at startup

2017-09-05 Thread Michael Noll (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153928#comment-16153928
 ] 

Michael Noll commented on KAFKA-5037:
-

Ping [~mjsax]: do we target this for 1.0.0?

> Infinite loop if all input topics are unknown at startup
> 
>
> Key: KAFKA-5037
> URL: https://issues.apache.org/jira/browse/KAFKA-5037
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>  Labels: user-experience
> Fix For: 1.0.0
>
>
> See discusion: https://github.com/apache/kafka/pull/2815
> We will need some rewrite on {{StreamPartitionsAssignor}} and to add much 
> more test for all kind of corner cases, including pattern subscriptions.





[jira] [Created] (KAFKA-5838) Speed up running system tests in docker a bit

2017-09-05 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5838:
--

 Summary: Speed up running system tests in docker a bit
 Key: KAFKA-5838
 URL: https://issues.apache.org/jira/browse/KAFKA-5838
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Speed up running system tests in docker a bit by using optimized sshd options.





[jira] [Commented] (KAFKA-5838) Speed up running system tests in docker a bit

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154083#comment-16154083
 ] 

ASF GitHub Bot commented on KAFKA-5838:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3794

KAFKA-5838. Speed up running system tests in docker a bit with better…

… sshd options

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5838

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3794


commit e610cf6386a6a273e2d0ca53f9c725807f213cb2
Author: Colin P. Mccabe 
Date:   2017-09-05T18:13:56Z

KAFKA-5838. Speed up running system tests in docker a bit with better sshd 
options




> Speed up running system tests in docker a bit
> -
>
> Key: KAFKA-5838
> URL: https://issues.apache.org/jira/browse/KAFKA-5838
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Speed up running system tests in docker a bit by using optimized sshd options.





[jira] [Commented] (KAFKA-5016) Consumer hang in poll method while rebalancing is in progress

2017-09-05 Thread Geoffrey Stewart (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154137#comment-16154137
 ] 

Geoffrey Stewart commented on KAFKA-5016:
-

Thanks for your response [~hachikuji].
The way you have explained this is certainly consistent with the behavior we 
are seeing.  I think the cause of my confusion is the Kafka docs.  For 
example, at the link:
[https://kafka.apache.org/documentation/#newconsumerconfigs]
if you search for the parameter "heartbeat.interval.ms", it says: 
"Heartbeats are used to ensure that the consumer's session stays active and to 
facilitate rebalancing when new consumers join or leave the group."
This gives the impression that the heartbeat is used to enable rebalances to 
complete, which is different from what you are saying.  Do you think the 
docs should be updated to reflect the actual behavior?
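For reference, the two settings under discussion sit together in the consumer configuration. The values below are illustrative examples, not recommendations; the usual guidance is to keep heartbeat.interval.ms well below session.timeout.ms (typically no more than one third).

```java
import java.util.Properties;

// Illustrative consumer timing settings (example values only).
final class ConsumerTimingConfig {
    static Properties sketch() {
        Properties p = new Properties();
        p.put("session.timeout.ms", "10000");   // coordinator evicts the consumer after this
        p.put("heartbeat.interval.ms", "3000"); // how often heartbeats are sent
        return p;
    }
}
```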

> Consumer hang in poll method while rebalancing is in progress
> -
>
> Key: KAFKA-5016
> URL: https://issues.apache.org/jira/browse/KAFKA-5016
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Domenico Di Giulio
>Assignee: Vahid Hashemian
> Attachments: Kafka 0.10.2.0 Issue (TRACE) - Server + Client.txt, 
> Kafka 0.10.2.0 Issue (TRACE).txt, KAFKA_5016.java
>
>
> After moving to Kafka 0.10.2.0, it looks like I'm experiencing a hang in the 
> rebalancing code. 
> This is a test case, not (still) production code. It does the following with 
> a single-partition topic and two consumers in the same group:
> 1) a topic with one partition is forced to be created (auto-created)
> 2) a producer is used to write 10 messages
> 3) the first consumer reads all the messages and commits
> 4) the second consumer attempts a poll() and hangs indefinitely
> The same issue can't be found with 0.10.0.0.
> See the attached logs at TRACE level. Look for "SERVER HANGS" to see where 
> the hang is found: when this happens, the client keeps failing any heartbeat 
> attempt, as the rebalancing is in progress, and the poll method hangs 
> indefinitely.





[jira] [Commented] (KAFKA-5559) Metrics should throw if two client registers with same ID

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154199#comment-16154199
 ] 

ASF GitHub Bot commented on KAFKA-5559:
---

Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/3328


> Metrics should throw if two client registers with same ID
> -
>
> Key: KAFKA-5559
> URL: https://issues.apache.org/jira/browse/KAFKA-5559
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> Currently, {{AppInfoParser}} only logs a WARN message when a bean is 
> registered with an existing name. However, this should be treated as an error 
> and the exception should be rthrown.





[jira] [Commented] (KAFKA-5559) Metrics should throw if two client registers with same ID

2017-09-05 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154204#comment-16154204
 ] 

Matthias J. Sax commented on KAFKA-5559:


As discussed, I am closing this as "not an issue". We can use KAFKA-3494 to 
tackle the underlying problem.

> Metrics should throw if two client registers with same ID
> -
>
> Key: KAFKA-5559
> URL: https://issues.apache.org/jira/browse/KAFKA-5559
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> Currently, {{AppInfoParser}} only logs a WARN message when a bean is 
> registered with an existing name. However, this should be treated as an error 
> and the exception should be rthrown.





[jira] [Updated] (KAFKA-5559) Metrics should throw if two client registers with same ID

2017-09-05 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5559:
---
Description: Currently, {{AppInfoParser}} only logs a WARN message when a 
bean is registered with an existing name. However, this should be treated as an 
error and the exception should be thrown.  (was: Currently, {{AppInfoParser}} 
only logs a WARN message when a bean is registered with an existing name. 
However, this should be treated as an error and the exception should be 
rthrown.)

> Metrics should throw if two client registers with same ID
> -
>
> Key: KAFKA-5559
> URL: https://issues.apache.org/jira/browse/KAFKA-5559
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> Currently, {{AppInfoParser}} only logs a WARN message when a bean is 
> registered with an existing name. However, this should be treated as an error 
> and the exception should be thrown.





[jira] [Updated] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5836:
-
Component/s: streams

> Kafka Streams - API for specifying internal stream name on join
> ---
>
> Key: KAFKA-5836
> URL: https://issues.apache.org/jira/browse/KAFKA-5836
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Lovro Pandžić
>  Labels: api, needs-kip
>
> Automatic topic name can be problematic in case of streams operation 
> change/migration.
> I'd like to be able to specify name of an internal topic so I can avoid 
> creation of new stream and data "loss" when changing the Stream building.





[jira] [Updated] (KAFKA-5836) Kafka Streams - API for specifying internal stream name on join

2017-09-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5836:
-
Labels: api needs-kip  (was: needs-kip)

> Kafka Streams - API for specifying internal stream name on join
> ---
>
> Key: KAFKA-5836
> URL: https://issues.apache.org/jira/browse/KAFKA-5836
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Lovro Pandžić
>  Labels: api, needs-kip
>
> Automatic topic name can be problematic in case of streams operation 
> change/migration.
> I'd like to be able to specify name of an internal topic so I can avoid 
> creation of new stream and data "loss" when changing the Stream building.





[jira] [Commented] (KAFKA-4327) Move Reset Tool from core to streams

2017-09-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154252#comment-16154252
 ] 

Guozhang Wang commented on KAFKA-4327:
--

While reviewing KIP-171 I was thinking that we do not have to call 
ConsumerGroupCommand to reset offsets, i.e. we could just create a consumer and 
call the commit API, or use the CommitOffsetRequest directly (admittedly it 
will involve duplicated code). Thinking about it again, I felt that this may 
not be worth the effort, so leaving it in core is okay, as it should be part of 
the admin commands anyway.

> Move Reset Tool from core to streams
> 
>
> Key: KAFKA-4327
> URL: https://issues.apache.org/jira/browse/KAFKA-4327
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Jorge Quilcate
>Priority: Minor
>
> This is a follow up on https://issues.apache.org/jira/browse/KAFKA-4008
> Currently, Kafka Streams Application Reset Tool is part of {{core}} module 
> due to ZK dependency. After KIP-4 got merged, this dependency can be dropped 
> and the Reset Tool can be moved to {{streams}} module.
> This should also update {{InternalTopicManager#filterExistingTopics}}, which 
> refers to the reset tool in an exception message:
> {{"Use 'kafka.tools.StreamsResetter' tool"}}
> -> {{"Use '" + kafka.tools.StreamsResetter.getClass().getName() + "' tool"}}
> Doing this JIRA also requires updating the docs with regard to broker 
> backward compatibility -- not all brokers support the "topic delete request" 
> and thus the reset tool will not be backward compatible with all broker 
> versions.





[jira] [Commented] (KAFKA-5825) Streams not processing when exactly once is set

2017-09-05 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154256#comment-16154256
 ] 

Matthias J. Sax commented on KAFKA-5825:


I am not sure atm. \cc [~hachikuji] [~apurva] [~guozhang] 

> Streams not processing when exactly once is set
> ---
>
> Key: KAFKA-5825
> URL: https://issues.apache.org/jira/browse/KAFKA-5825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
> Environment: EmbeddedKafka running on Windows.  Relevant files 
> attached.
>Reporter: Ryan Worsley
> Attachments: build.sbt, log4j.properties, Tests.scala
>
>
> +Set-up+
> I'm using [EmbeddedKafka|https://github.com/manub/scalatest-embedded-kafka/] 
> for ScalaTest.
> This spins up a single broker internally on a random port.
> I've written two tests - the first without transactions, the second with.  
> They're nearly identical apart from the config and the transactional 
> semantics.  I've written the transactional version based on Neha's 
> [blog|https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/]
>  which is the closest thing I could find to instructions.
> The tests wait until a single message is processed by the streams topology, 
> they use this message to complete a promise that the test is waiting on.  
> Once the promise completes the test verifies the value of the promise as 
> being the expected value of the message.
> +Observed behaviour+
> The first test passes fine, the second test times out, the stream processor 
> never seems to read the transactional message.
> +Notes+
> I've attached my build.sbt, log4j.properties and my Tests.scala file in order 
> to make it as easy as possible for someone to re-create.  I'm running on 
> Windows and using Scala as this reflects my workplace.  I completely expect 
> there to be some configuration issue that's causing this, but am unable to 
> proceed at this time.
> Related information: 
> https://github.com/manub/scalatest-embedded-kafka/issues/82
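For reference, the Streams-side switch the failing test exercises is the {{processing.guarantee}} config. A minimal sketch of the relevant properties (the config keys are the real Streams config names; the application id and port are placeholders, since EmbeddedKafka picks a random port):

```java
import java.util.Properties;

public class EosConfigSketch {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put("application.id", "eos-test");          // placeholder id
        props.put("bootstrap.servers", "localhost:9092"); // EmbeddedKafka uses a random port
        // The config under test: exactly-once processing (available since 0.11).
        props.put("processing.guarantee", "exactly_once");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps().getProperty("processing.guarantee"));
    }
}
```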





[jira] [Commented] (KAFKA-2984) KTable should send old values along with new values to downstreams

2017-09-05 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154267#comment-16154267
 ] 

Matthias J. Sax commented on KAFKA-2984:


What exactly do you mean by this? If you are thinking about the DSL level, there are no 
plans to expose it. The DSL should be used to express your computation in a 
declarative way, and you should not need to worry about implementation details 
like this. Why would you need to access this information?

> KTable should send old values along with new values to downstreams
> --
>
> Key: KAFKA-2984
> URL: https://issues.apache.org/jira/browse/KAFKA-2984
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.10.0.0
>
>
> Old values are necessary for implementing aggregate functions. KTable should 
> augment an event with its old value. Basically KTable stream is a stream of 
> (key, (new value, old value)) internally. The old value may be omitted when 
> it is not used in the topology.
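The (key, (new value, old value)) pair described above can be sketched with a simplified stand-in for the internal change wrapper (a hedged illustration; the names are assumptions, not the exact internal API):

```java
public class ChangeSketch {
    // Simplified stand-in for the internal change wrapper.
    static class Change<V> {
        final V newValue;
        final V oldValue; // null when the old value is omitted
        Change(V newValue, V oldValue) {
            this.newValue = newValue;
            this.oldValue = oldValue;
        }
    }

    // An aggregator can subtract the old value and add the new one,
    // which is why old values are necessary for aggregate functions.
    static int applyChange(int aggregate, Change<Integer> change) {
        if (change.oldValue != null) aggregate -= change.oldValue;
        if (change.newValue != null) aggregate += change.newValue;
        return aggregate;
    }

    public static void main(String[] args) {
        // Update a key's value from 3 to 5: the sum goes from 10 to 12.
        System.out.println(applyChange(10, new Change<>(5, 3))); // prints 12
    }
}
```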





[jira] [Commented] (KAFKA-4819) Expose states of active tasks to public API

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154319#comment-16154319
 ] 

ASF GitHub Bot commented on KAFKA-4819:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2612


> Expose states of active tasks to public API
> ---
>
> Key: KAFKA-4819
> URL: https://issues.apache.org/jira/browse/KAFKA-4819
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Florian Hussonnois
>Assignee: Florian Hussonnois
>Priority: Minor
>  Labels: kip
> Fix For: 1.0.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+130%3A+Expose+states+of+active+tasks+to+KafkaStreams+public+API





[jira] [Updated] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2017-09-05 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2729:
---
Affects Version/s: 0.9.0.0
   0.10.0.0
   0.10.1.0
   0.11.0.0

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.10.1.0, 0.11.0.0
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This happened for all of the topics on the affected brokers. Both brokers only 
> recovered after a restart. Our own investigation yielded nothing; I was hoping 
> you could shed some light on this issue. Possibly it's related to: 
> https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.





[jira] [Commented] (KAFKA-5783) Implement KafkaPrincipalBuilder interface with support for SASL (KIP-189)

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154345#comment-16154345
 ] 

ASF GitHub Bot commented on KAFKA-5783:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3795

KAFKA-5783: Add KafkaPrincipalBuilder with support for SASL (KIP-189)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5783

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3795


commit 3c2f106802508bd3cf305d08b8031442cc868f8f
Author: Jason Gustafson 
Date:   2017-08-24T22:47:31Z

KAFKA-5783: Add KafkaPrincipalBuilder with support for SASL (KIP-189)




> Implement KafkaPrincipalBuilder interface with support for SASL (KIP-189)
> -
>
> Key: KAFKA-5783
> URL: https://issues.apache.org/jira/browse/KAFKA-5783
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> This issue covers the implementation of 
> [KIP-189|https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL].





[jira] [Updated] (KAFKA-5541) Streams should not re-throw if suspending/closing task fails

2017-09-05 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5541:
---
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-5156

> Streams should not re-throw if suspending/closing task fails
> 
>
> Key: KAFKA-5541
> URL: https://issues.apache.org/jira/browse/KAFKA-5541
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>
> Currently, if Streams suspends a task on rebalance or closes a suspended task 
> that got revoked, it re-throws any exception that might occur and the thread 
> dies. However, this is not really necessary, as the task was suspended/closed 
> anyway and we can just clean up the task and carry on with the rebalance.
> (cf comments https://github.com/apache/kafka/pull/3449#discussion_r124437816)





[jira] [Updated] (KAFKA-5833) Reset thread interrupt state in case of InterruptedException

2017-09-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5833:
-
Component/s: streams

> Reset thread interrupt state in case of InterruptedException
> 
>
> Key: KAFKA-5833
> URL: https://issues.apache.org/jira/browse/KAFKA-5833
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: newbie++
>
> There are some places where InterruptedException is caught but the thread's 
> interrupt state is not reset.
> e.g. from WorkerSourceTask#execute() :
> {code}
> } catch (InterruptedException e) {
> // Ignore and allow to exit.
> {code}
> The proper way to handle InterruptedException is to reset the thread's 
> interrupt state.
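A minimal sketch of the recommended pattern (illustrative only, not the actual WorkerSourceTask code):

```java
public class InterruptSketch {
    // Handles InterruptedException and reports whether the interrupt
    // status is still visible to callers afterwards.
    static boolean handleAndPreserveInterrupt() {
        Thread.currentThread().interrupt(); // simulate a pending interrupt
        try {
            Thread.sleep(10); // throws InterruptedException and CLEARS the flag
        } catch (InterruptedException e) {
            // Proper handling: restore the flag instead of swallowing it,
            // so code further up the stack can observe the interruption.
            Thread.currentThread().interrupt();
        }
        return Thread.currentThread().isInterrupted();
    }

    public static void main(String[] args) {
        System.out.println(handleAndPreserveInterrupt()); // prints "true"
    }
}
```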





[jira] [Updated] (KAFKA-5833) Reset thread interrupt state in case of InterruptedException

2017-09-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5833:
-
Labels: newbie++  (was: )

> Reset thread interrupt state in case of InterruptedException
> 
>
> Key: KAFKA-5833
> URL: https://issues.apache.org/jira/browse/KAFKA-5833
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: newbie++
>
> There are some places where InterruptedException is caught but the thread's 
> interrupt state is not reset.
> e.g. from WorkerSourceTask#execute() :
> {code}
> } catch (InterruptedException e) {
> // Ignore and allow to exit.
> {code}
> The proper way to handle InterruptedException is to reset the thread's 
> interrupt state.





[jira] [Commented] (KAFKA-5603) Streams should not abort transaction when closing zombie task

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154514#comment-16154514
 ] 

ASF GitHub Bot commented on KAFKA-5603:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3719


> Streams should not abort transaction when closing zombie task
> -
>
> Key: KAFKA-5603
> URL: https://issues.apache.org/jira/browse/KAFKA-5603
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
> Fix For: 0.11.0.1
>
>
> The contract of the transactional producer API is to not call any 
> transactional method after a {{ProducerFenced}} exception was thrown.
> Streams, however, makes an unconditional call within {{StreamTask#close()}} to 
> {{abortTransaction()}} in case of unclean shutdown. We need to distinguish 
> between a {{ProducerFenced}} exception and other unclean shutdown cases.
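The decision described above can be sketched as follows (a self-contained illustration; the nested exception class stands in for Kafka's real ProducerFencedException, and the method is hypothetical, not the actual StreamTask code):

```java
public class FencedCloseSketch {
    // Stand-in for Kafka's ProducerFencedException (name reused for illustration).
    static class ProducerFencedException extends RuntimeException {}

    // Decide how to shut down after a failure: a fenced producer must not
    // call abortTransaction() (or any other transactional method) again,
    // while any other unclean shutdown may still legally abort.
    static String shutdownAction(RuntimeException failure) {
        if (failure instanceof ProducerFencedException) {
            return "close-only";       // fenced: no further transactional calls
        }
        return "abort-then-close";     // other unclean shutdown: abort is legal
    }

    public static void main(String[] args) {
        System.out.println(shutdownAction(new ProducerFencedException())); // close-only
        System.out.println(shutdownAction(new RuntimeException("disk error"))); // abort-then-close
    }
}
```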





[jira] [Updated] (KAFKA-5833) Reset thread interrupt state in case of InterruptedException

2017-09-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-5833:
-
Component/s: (was: streams)

> Reset thread interrupt state in case of InterruptedException
> 
>
> Key: KAFKA-5833
> URL: https://issues.apache.org/jira/browse/KAFKA-5833
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: newbie++
>
> There are some places where InterruptedException is caught but the thread's 
> interrupt state is not reset.
> e.g. from WorkerSourceTask#execute() :
> {code}
> } catch (InterruptedException e) {
> // Ignore and allow to exit.
> {code}
> The proper way to handle InterruptedException is to reset the thread's 
> interrupt state.





[jira] [Updated] (KAFKA-5603) Streams should not abort transaction when closing zombie task

2017-09-05 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5603:
---
Fix Version/s: 1.0.0

> Streams should not abort transaction when closing zombie task
> -
>
> Key: KAFKA-5603
> URL: https://issues.apache.org/jira/browse/KAFKA-5603
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
> Fix For: 0.11.0.1, 1.0.0
>
>
> The contract of the transactional producer API is to not call any 
> transactional method after a {{ProducerFenced}} exception was thrown.
> Streams, however, makes an unconditional call within {{StreamTask#close()}} to 
> {{abortTransaction()}} in case of unclean shutdown. We need to distinguish 
> between a {{ProducerFenced}} exception and other unclean shutdown cases.





[jira] [Commented] (KAFKA-4860) Kafka batch files does not support path with spaces

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154558#comment-16154558
 ] 

ASF GitHub Bot commented on KAFKA-4860:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2649


> Kafka batch files does not support path with spaces
> ---
>
> Key: KAFKA-4860
> URL: https://issues.apache.org/jira/browse/KAFKA-4860
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
> Environment: windows
>Reporter: Vladimír Kleštinec
>Priority: Minor
> Fix For: 1.0.0
>
>
> When we install Kafka on Windows to a path that contains spaces, e.g. C:\Program 
> Files\ApacheKafka, the batch files located in bin/windows don't work.
> Workaround: install on a path without spaces





[jira] [Resolved] (KAFKA-5837) ReassignPartitionsCommand fails if default throttle/timeout used

2017-09-05 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-5837.
---
Resolution: Fixed

Issue resolved by pull request 3792
[https://github.com/apache/kafka/pull/3792]

> ReassignPartitionsCommand fails if default throttle/timeout used
> 
>
> Key: KAFKA-5837
> URL: https://issues.apache.org/jira/browse/KAFKA-5837
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> System ReassignPartitionsTest.test_reassign_partitions failed with:
> {quote}
> Partitions reassignment failed due to java.lang.String cannot be cast to 
> java.lang.Long
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Long
>   at scala.runtime.BoxesRunTime.unboxToLong(BoxesRunTime.java:105)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:188)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:61)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> {quote}
> This is because the default throttle is set as a String rather than a Long. The 
> default throttle was never set properly in the command opts, but it didn't 
> matter earlier because the code used to set it explicitly:
> {quote}
> val throttle = if (opts.options.has(opts.throttleOpt)) 
> opts.options.valueOf(opts.throttleOpt) else -1
> {quote}
> But the commit 
> https://github.com/apache/kafka/commit/adefc8ea076354e07839f0319fee1fba52343b91
>  started using the default directly (and also added a timeout with default 
> set in the same way).





[jira] [Commented] (KAFKA-5837) ReassignPartitionsCommand fails if default throttle/timeout used

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154569#comment-16154569
 ] 

ASF GitHub Bot commented on KAFKA-5837:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3792


> ReassignPartitionsCommand fails if default throttle/timeout used
> 
>
> Key: KAFKA-5837
> URL: https://issues.apache.org/jira/browse/KAFKA-5837
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> System ReassignPartitionsTest.test_reassign_partitions failed with:
> {quote}
> Partitions reassignment failed due to java.lang.String cannot be cast to 
> java.lang.Long
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Long
>   at scala.runtime.BoxesRunTime.unboxToLong(BoxesRunTime.java:105)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:188)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:61)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> {quote}
> This is because the default throttle is set as a String rather than a Long. The 
> default throttle was never set properly in the command opts, but it didn't 
> matter earlier because the code used to set it explicitly:
> {quote}
> val throttle = if (opts.options.has(opts.throttleOpt)) 
> opts.options.valueOf(opts.throttleOpt) else -1
> {quote}
> But the commit 
> https://github.com/apache/kafka/commit/adefc8ea076354e07839f0319fee1fba52343b91
>  started using the default directly (and also added a timeout with default 
> set in the same way).





[jira] [Created] (KAFKA-5839) Upgrade Guide doc changes for KIP-130

2017-09-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-5839:


 Summary: Upgrade Guide doc changes for KIP-130
 Key: KAFKA-5839
 URL: https://issues.apache.org/jira/browse/KAFKA-5839
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Florian Hussonnois


Related web docs:

1. developer guide.
2. upgrade guide.





[jira] [Created] (KAFKA-5840) TransactionsTest#testBasicTransactions hangs

2017-09-05 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5840:
-

 Summary: TransactionsTest#testBasicTransactions hangs
 Key: KAFKA-5840
 URL: https://issues.apache.org/jira/browse/KAFKA-5840
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu


Here is part of the stack trace:
{code}
"Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting on 
condition [0x7feb05f8c000]
   java.lang.Thread.State: WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x81272ec0> (a 
java.util.concurrent.CountDownLatch$Sync)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
  at 
org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
  at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
  at 
org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
  at kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
{code}
{code}
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T19:39:06Z)
Maven home: /apache-maven-3.5.0
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /jdk1.8.0_131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", family: 
"unix"
{code}





[jira] [Updated] (KAFKA-5840) TransactionsTest#testBasicTransactions hangs

2017-09-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5840:
--
Attachment: 5840.stack

> TransactionsTest#testBasicTransactions hangs
> 
>
> Key: KAFKA-5840
> URL: https://issues.apache.org/jira/browse/KAFKA-5840
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 5840.stack
>
>
> Here is part of the stack trace:
> {code}
> "Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting 
> on condition [0x7feb05f8c000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x81272ec0> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
>   at 
> kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
> {code}
> {code}
> Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
> 2017-04-03T19:39:06Z)
> Maven home: /apache-maven-3.5.0
> Java version: 1.8.0_131, vendor: Oracle Corporation
> Java home: /jdk1.8.0_131/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", 
> family: "unix"
> {code}





[jira] [Updated] (KAFKA-5840) TransactionsTest#testBasicTransactions hangs

2017-09-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5840:
--
Description: 
While testing 0.11.0.1 RC0 , I found TransactionsTest hanging.

Here is part of the stack trace:
{code}
"Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting on 
condition [0x7feb05f8c000]
   java.lang.Thread.State: WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x81272ec0> (a 
java.util.concurrent.CountDownLatch$Sync)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
  at 
org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
  at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
  at 
org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
  at kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
{code}
{code}
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T19:39:06Z)
Maven home: /apache-maven-3.5.0
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /jdk1.8.0_131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", family: 
"unix"
{code}

  was:
While testing 0.11.0.1 RC0 , I found 
Here is part of the stack trace:
{code}
"Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting on 
condition [0x7feb05f8c000]
   java.lang.Thread.State: WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x81272ec0> (a 
java.util.concurrent.CountDownLatch$Sync)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
  at 
org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
  at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
  at 
org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
  at kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
{code}
{code}
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T19:39:06Z)
Maven home: /apache-maven-3.5.0
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /jdk1.8.0_131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", family: 
"unix"
{code}


> TransactionsTest#testBasicTransactions hangs
> 
>
> Key: KAFKA-5840
> URL: https://issues.apache.org/jira/browse/KAFKA-5840
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 5840.stack
>
>
> While testing 0.11.0.1 RC0 , I found TransactionsTest hanging.
> Here is part of the stack trace:
> {code}
> "Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting 
> on condition [0x7feb05f8c000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x81272ec0> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
>   at 
> org.

[jira] [Updated] (KAFKA-5840) TransactionsTest#testBasicTransactions hangs

2017-09-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5840:
--
Description: 
While testing 0.11.0.1 RC0 , I found 
Here is part of the stack trace:
{code}
"Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting on 
condition [0x7feb05f8c000]
   java.lang.Thread.State: WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x81272ec0> (a 
java.util.concurrent.CountDownLatch$Sync)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
  at 
org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
  at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
  at 
org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
  at kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
{code}
{code}
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T19:39:06Z)
Maven home: /apache-maven-3.5.0
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /jdk1.8.0_131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", family: 
"unix"
{code}

  was:
Here is part of the stack trace:
{code}
"Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting on 
condition [0x7feb05f8c000]
   java.lang.Thread.State: WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x81272ec0> (a 
java.util.concurrent.CountDownLatch$Sync)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
  at 
org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
  at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
  at 
org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
  at kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
{code}
{code}
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T19:39:06Z)
Maven home: /apache-maven-3.5.0
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /jdk1.8.0_131/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", family: 
"unix"
{code}


> TransactionsTest#testBasicTransactions hangs
> 
>
> Key: KAFKA-5840
> URL: https://issues.apache.org/jira/browse/KAFKA-5840
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 5840.stack
>
>
> While testing 0.11.0.1 RC0 , I found 
> Here is part of the stack trace:
> {code}
> "Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting 
> on condition [0x7feb05f8c000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x81272ec0> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
>   at 
> kafka

[jira] [Commented] (KAFKA-5823) Update Docs

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154590#comment-16154590
 ] 

ASF GitHub Bot commented on KAFKA-5823:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3787


> Update Docs
> ---
>
> Key: KAFKA-5823
> URL: https://issues.apache.org/jira/browse/KAFKA-5823
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5840) TransactionsTest#testBasicTransactions sometimes hangs

2017-09-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5840:
--
Summary: TransactionsTest#testBasicTransactions sometimes hangs  (was: 
TransactionsTest#testBasicTransactions hangs)

> TransactionsTest#testBasicTransactions sometimes hangs
> --
>
> Key: KAFKA-5840
> URL: https://issues.apache.org/jira/browse/KAFKA-5840
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 5840.stack
>
>
> While testing 0.11.0.1 RC0, I found TransactionsTest hanging.
> Here is part of the stack trace:
> {code}
> "Test worker" #20 prio=5 os_prio=0 tid=0x7feb449fc000 nid=0x5f69 waiting 
> on condition [0x7feb05f8c000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x81272ec0> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:573)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:948)
>   at 
> kafka.api.TransactionsTest.testBasicTransactions(TransactionsTest.scala:93)
> {code}
> {code}
> Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
> 2017-04-03T19:39:06Z)
> Maven home: /apache-maven-3.5.0
> Java version: 1.8.0_131, vendor: Oracle Corporation
> Java home: /jdk1.8.0_131/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-327.28.3.el7.x86_64", arch: "amd64", 
> family: "unix"
> {code}
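The hang sits in {{CountDownLatch.await()}}, which parks the test thread indefinitely when the {{ProduceRequestResult}} never completes. A minimal standalone sketch (plain JDK, no Kafka dependency) of why a bounded await avoids this kind of hang:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AwaitTimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a ProduceRequestResult that is never completed.
        CountDownLatch done = new CountDownLatch(1);

        // await() with no timeout would park forever, exactly as in the
        // stack trace above; the timed overload returns false instead.
        boolean completed = done.await(200, TimeUnit.MILLISECONDS);
        System.out.println(completed); // prints "false"
    }
}
```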





[jira] [Created] (KAFKA-5841) Open old index files with read-only permission

2017-09-05 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5841:
--

 Summary: Open old index files with read-only permission
 Key: KAFKA-5841
 URL: https://issues.apache.org/jira/browse/KAFKA-5841
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


Since old index files do not change, we may as well drop the write permission 
needed when opening them. From the doc comments in {{OffsetIndex}}, it sounds 
like we may have had this implemented at one point:

{code}
 * Index files can be opened in two ways: either as an empty, mutable index 
that allows appends or
 * an immutable read-only index file that has previously been populated. The 
makeReadOnly method will turn a mutable file into an 
 * immutable one and truncate off any extra bytes. This is done when the index 
file is rolled over.
{code}

So we should either support this or (if there is good reason not to) update the 
comment.
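As a rough illustration (plain NIO, not Kafka's actual {{AbstractIndex}} code), mapping a rolled-over index file with {{MapMode.READ_ONLY}} yields a buffer that rejects writes, while the active segment keeps a read-write mapping; the helper name below is hypothetical:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReadOnlyIndexExample {
    // Hypothetical helper: old (immutable) segments get a read-only mapping,
    // the active segment keeps a read-write one.
    static MappedByteBuffer mapIndex(Path file, boolean writable) throws IOException {
        StandardOpenOption[] opts = writable
                ? new StandardOpenOption[]{StandardOpenOption.READ, StandardOpenOption.WRITE}
                : new StandardOpenOption[]{StandardOpenOption.READ};
        try (FileChannel ch = FileChannel.open(file, opts)) {
            return ch.map(writable ? FileChannel.MapMode.READ_WRITE
                                   : FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("00000000000000000000", ".index");
        Files.write(file, new byte[]{0, 0, 0, 1});
        MappedByteBuffer old = mapIndex(file, false);
        System.out.println(old.isReadOnly()); // prints "true"
    }
}
```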





[jira] [Resolved] (KAFKA-3856) Cleanup Kafka Streams builder API

2017-09-05 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-3856.

Resolution: Fixed

> Cleanup Kafka Streams builder API
> -
>
> Key: KAFKA-3856
> URL: https://issues.apache.org/jira/browse/KAFKA-3856
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: api, kip
> Fix For: 1.0.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-120%3A+Cleanup+Kafka+Streams+builder+API





[jira] [Commented] (KAFKA-4366) KafkaStreams.close() blocks indefinitely

2017-09-05 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154625#comment-16154625
 ] 

Matthias J. Sax commented on KAFKA-4366:


- You will not lose messages.
- {{KafkaStreams}} waits until all {{StreamThread}}s have shut down -- if this 
does not happen within the timeout, {{close()}} just returns (cf. 
https://github.com/apache/kafka/blob/0.11.0.0/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java#L520)
- If {{close(0)}} is called, _no_ timeout is applied and {{KafkaStreams}} waits 
forever -- thus, the timeout must be at least 1ms to be effective (as described 
in the JavaDocs): {{A timeout of 0 means to wait forever.}}
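The same semantics apply to plain {{Thread.join}}, which {{close()}} uses on its threads: a timeout of 0 means wait forever. A standalone sketch (no Kafka dependency) of the difference:

```java
public class JoinTimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000); // simulates a StreamThread that never finishes
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        // join(0) -- like close(0) -- would block forever; a positive timeout
        // returns control even though the thread is still alive.
        worker.join(100);
        System.out.println(worker.isAlive()); // prints "true"
        worker.interrupt(); // unblock the worker so the JVM can exit
    }
}
```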

> KafkaStreams.close() blocks indefinitely
> 
>
> Key: KAFKA-4366
> URL: https://issues.apache.org/jira/browse/KAFKA-4366
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1, 0.10.1.0
>Reporter: Michal Borowiecki
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
> Attachments: threadDump.log
>
>
> KafkaStreams.close() method calls join on all its threads without a timeout, 
> meaning indefinitely, which makes it prone to deadlocks and unfit to be used 
> in shutdown hooks.
> (KafkaStreams::close is used in numerous examples by confluent: 
> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0.1/kafka-streams/src/main/java/io/confluent/examples/streams
>  and 
> https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
>  so we assumed it to be recommended practice)
> A deadlock happens, for instance, if System.exit() is called from within the 
> uncaughtExceptionHandler. (We need to call System.exit() from the 
> uncaughtExceptionHandler because KAFKA-4355 issue shuts down the StreamThread 
> and to recover we want the process to exit, as our infrastructure will then 
> start it up again.)
> The System.exit call (from the uncaughtExceptionHandler, which runs in the 
> StreamThread) will execute the shutdown hook in a new thread and wait for 
> that thread to join. If the shutdown hook calls KafkaStreams.close, it will 
> in turn block waiting for the StreamThread to join, hence the deadlock.
> Runtime.addShutdownHook javadocs state:
> {quote}
> Shutdown hooks run at a delicate time in the life cycle of a virtual machine 
> and should therefore be coded defensively. They should, in particular, be 
> written to be thread-safe and to avoid deadlocks insofar as possible
> {quote}
> and
> {quote}
> Shutdown hooks should also finish their work quickly.
> {quote}
> Therefore the current implementation of KafkaStreams.close() which waits 
> forever for threads to join is completely unsuitable for use in a shutdown 
> hook. 





[jira] [Commented] (KAFKA-5597) Autogenerate Producer sender metrics

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154639#comment-16154639
 ] 

ASF GitHub Bot commented on KAFKA-5597:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3535


> Autogenerate Producer sender metrics
> 
>
> Key: KAFKA-5597
> URL: https://issues.apache.org/jira/browse/KAFKA-5597
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: James Cheng
>Assignee: James Cheng
> Fix For: 1.0.0
>
>






[jira] [Commented] (KAFKA-5825) Streams not processing when exactly once is set

2017-09-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154644#comment-16154644
 ] 

Guozhang Wang commented on KAFKA-5825:
--

These DEBUG logs are normal; they do not indicate any issues, at least from the 
broker side.

> Streams not processing when exactly once is set
> ---
>
> Key: KAFKA-5825
> URL: https://issues.apache.org/jira/browse/KAFKA-5825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
> Environment: EmbeddedKafka running on Windows.  Relevant files 
> attached.
>Reporter: Ryan Worsley
> Attachments: build.sbt, log4j.properties, Tests.scala
>
>
> +Set-up+
> I'm using [EmbeddedKafka|https://github.com/manub/scalatest-embedded-kafka/] 
> for ScalaTest.
> This spins up a single broker internally on a random port.
> I've written two tests - the first without transactions, the second with.  
> They're nearly identical apart from the config and the transactional 
> semantics.  I've written the transactional version based on Neha's 
> [blog|https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/]
>  which is the closest thing I could find to instructions.
> The tests wait until a single message is processed by the streams topology; 
> they use this message to complete a promise that the test is waiting on.  
> Once the promise completes, the test verifies that the value of the promise 
> is the expected value of the message.
> +Observed behaviour+
> The first test passes fine, the second test times out, the stream processor 
> never seems to read the transactional message.
> +Notes+
> I've attached my build.sbt, log4j.properties and my Tests.scala file in order 
> to make it as easy as possible for someone to re-create.  I'm running on 
> Windows and using Scala as this reflects my workplace.  I completely expect 
> there to be some configuration issue that's causing this, but am unable to 
> proceed at this time.
> Related information: 
> https://github.com/manub/scalatest-embedded-kafka/issues/82
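One thing worth checking in a single-broker embedded setup (an assumption, not confirmed from the logs here): the transaction state topic defaults to a replication factor of 3, so a lone broker may never make transactions usable. A minimal sketch of the relevant client and broker properties; the bootstrap address is a placeholder:

```java
import java.util.Properties;

public class EosConfigSketch {
    public static void main(String[] args) {
        // Streams side: the only difference between the two tests.
        Properties streamsProps = new Properties();
        streamsProps.put("application.id", "eos-test");
        streamsProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        streamsProps.put("processing.guarantee", "exactly_once");

        // Broker side: the defaults assume 3 brokers; a single embedded broker
        // needs these lowered for the transaction state topic to be created.
        Properties brokerProps = new Properties();
        brokerProps.put("transaction.state.log.replication.factor", "1");
        brokerProps.put("transaction.state.log.min.isr", "1");

        System.out.println(streamsProps.getProperty("processing.guarantee"));
    }
}
```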





[jira] [Commented] (KAFKA-5825) Streams not processing when exactly once is set

2017-09-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154645#comment-16154645
 ] 

Guozhang Wang commented on KAFKA-5825:
--

[~ryanworsley] At the moment I cannot think of any causes off the top of my head 
for your scenario; you would need to look at the client-side (not broker-side) 
Streams logs to see what it gets blocked on.

> Streams not processing when exactly once is set
> ---
>
> Key: KAFKA-5825
> URL: https://issues.apache.org/jira/browse/KAFKA-5825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
> Environment: EmbeddedKafka running on Windows.  Relevant files 
> attached.
>Reporter: Ryan Worsley
> Attachments: build.sbt, log4j.properties, Tests.scala
>
>





[jira] [Commented] (KAFKA-5523) ReplayLogProducer not using the new Kafka consumer

2017-09-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154647#comment-16154647
 ] 

Guozhang Wang commented on KAFKA-5523:
--

Sounds good to me.

> ReplayLogProducer not using the new Kafka consumer
> --
>
> Key: KAFKA-5523
> URL: https://issues.apache.org/jira/browse/KAFKA-5523
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the ReplayLogProducer is using the latest Kafka producer but not the latest 
> Kafka consumer. Is this tool deprecated today? I see that something similar 
> could be done using MirrorMaker. [~ijuma] Does it make sense to update the 
> ReplayLogProducer to the latest Kafka consumer?
> Thanks,
> Paolo





[jira] [Created] (KAFKA-5842) QueryableStateIntegrationTest may fail with JDK 7

2017-09-05 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5842:
-

 Summary: QueryableStateIntegrationTest may fail with JDK 7
 Key: KAFKA-5842
 URL: https://issues.apache.org/jira/browse/KAFKA-5842
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


Found the following when running the test suite for 0.11.0.1 RC0:
{code}
org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses FAILED
java.lang.AssertionError: Key not found one
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyGreaterOrEqual(QueryableStateIntegrationTest.java:893)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.concurrentAccesses(QueryableStateIntegrationTest.java:399)
{code}





[jira] [Commented] (KAFKA-5358) Consumer perf tool should count rebalance time separately

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154731#comment-16154731
 ] 

ASF GitHub Bot commented on KAFKA-5358:
---

GitHub user huxihx reopened a pull request:

https://github.com/apache/kafka/pull/3188

KAFKA-5358: Consumer perf tool should count rebalance time.

Added 'join.group.ms' for new consumer to calculate the time of joining 
group. 

@hachikuji  Please review the PR. Thanks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAKFA-5358

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3188


commit be43bf3a1257ca5f058e38af3e185ba775749614
Author: amethystic 
Date:   2017-06-01T09:15:15Z

KAFKA-5358: Consumer perf tool should count rebalance time.

Added 'join.group.ms' for new consumer to calculate the time of joining 
group.

commit cbdf6c10d12e1b0528dc45c078e61bb6ee1b0d2f
Author: amethystic 
Date:   2017-06-02T07:11:03Z

1. Refined the name to `total.rebalance.time`
2. Added `total.fetch.time`
3. Add support to count total time for multiple rebalances

commit 7600e6b994c251422444c905f5d4c718bf6f9935
Author: huxihx 
Date:   2017-06-06T00:42:29Z

Refined time counting for both fetcher and rebalance as per hachikuji's 
comments.

commit 08ff452fcb41136b54cfc2850540b18d737e909d
Author: huxihx 
Date:   2017-06-07T06:27:00Z

Correct the counting for total fetch time.

commit 8c80f1376dba7ac715fa0887e128293690a7014c
Author: huxihx 
Date:   2017-06-08T01:31:49Z

1. Split `MB.sec` into two parts: `total.MB.sec` and `fetch.MB.sec`
2. Ditto for `nMsg.sec`
3. Refined output format

commit edf1d0888728bc5c39b6ee59d23d3eed108243c6
Author: huxihx 
Date:   2017-06-21T01:24:28Z

returned back to the original output format for new consumer

commit 722e16df1b96275a04bd2e6a5e3deec476b028a6
Author: huxihx 
Date:   2017-06-27T02:03:55Z

As per hackikuji's comments, refined code to print out rebalance time even 
when  is set.

commit 57bd0e44723b681a191a12135e3cc60188c18e9d
Author: huxihx 
Date:   2017-08-04T02:27:51Z

resovled conflicts with trunk

commit 16cc2cab52c394ed4688fe377496cd98d1e2eb4b
Author: huxi 
Date:   2017-08-04T03:42:17Z

Merge branch 'trunk' into KAKFA-5358

commit 37982726a6fd09ef202eb428a2dd364aded1b929
Author: huxihx 
Date:   2017-08-04T04:06:25Z

KAFKA-5358: Did not show newly-created headers if `--show-detailed-stats` 
is set since rebalance time does not change during most of the consuming rounds.

commit bbba9b00a6538e69bc4261ed456a31993e42650e
Author: huxihx 
Date:   2017-08-09T02:15:56Z

correct printHeader invoking to have testHeaderMatchBody passed

commit 3a1ce0f8897577608182ffe1a8ce2f9bfbe41f55
Author: huxihx 
Date:   2017-08-17T02:44:23Z

Added newly-created fields for detailed views.

commit c51781586646664bb9514d8ab03aa974f9e6a942
Author: huxihx 
Date:   2017-08-21T01:18:24Z

1. Added parameter name when invoking testDetailedHeaderMatchBody; 2. 
Removed useless initialization for

commit 9adda74004c531894a31b7653529b32336682316
Author: huxihx 
Date:   2017-09-06T02:41:42Z

added a field that tracks periodic join time.




> Consumer perf tool should count rebalance time separately
> -
>
> Key: KAFKA-5358
> URL: https://issues.apache.org/jira/browse/KAFKA-5358
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>
> It would be helpful to measure rebalance time separately in the performance 
> tool so that throughput between different versions can be compared more 
> easily in spite of improvements such as 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-134%3A+Delay+initial+consumer+group+rebalance.
>  At the moment, running the perf tool on 0.11.0 or trunk for a short amount 
> of time will present a severely skewed picture since the overall time will be 
> dominated by the join group delay.
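The arithmetic behind the proposed split, sketched with made-up numbers (the metric names mirror the PR's `total.MB.sec`/`fetch.MB.sec`):

```java
import java.util.Locale;

public class PerfSplitSketch {
    public static void main(String[] args) {
        long totalMs = 10_000;      // wall-clock run time (made-up)
        long rebalanceMs = 3_000;   // time spent in join-group/rebalance (made-up)
        long bytes = 50L * 1024 * 1024;

        double mb = bytes / (1024.0 * 1024.0);
        double totalMbSec = mb / (totalMs / 1000.0);                  // skewed by the join delay
        double fetchMbSec = mb / ((totalMs - rebalanceMs) / 1000.0);  // fetch-only throughput

        System.out.println(String.format(Locale.ROOT,
                "total.MB.sec=%.2f fetch.MB.sec=%.2f", totalMbSec, fetchMbSec));
    }
}
```

For a short run dominated by the join delay, the two numbers diverge sharply, which is exactly the skew the issue describes.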





[jira] [Commented] (KAFKA-5358) Consumer perf tool should count rebalance time separately

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154732#comment-16154732
 ] 

ASF GitHub Bot commented on KAFKA-5358:
---

Github user huxihx closed the pull request at:

https://github.com/apache/kafka/pull/3723


> Consumer perf tool should count rebalance time separately
> -
>
> Key: KAFKA-5358
> URL: https://issues.apache.org/jira/browse/KAFKA-5358
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>





[jira] [Created] (KAFKA-5843) Mx4jLoader.maybeLoad should only be executed if kafka_mx4jenable is set to true

2017-09-05 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-5843:
---

 Summary: Mx4jLoader.maybeLoad should only be executed if 
kafka_mx4jenable is set to true
 Key: KAFKA-5843
 URL: https://issues.apache.org/jira/browse/KAFKA-5843
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
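A sketch of the proposed gating (the property name comes from the issue title; the helper itself is hypothetical): only attempt the MX4J classpath probe when the user has explicitly opted in.

```java
public class Mx4jGateSketch {
    // Hypothetical guard: skip loading MX4J entirely unless the user
    // explicitly opts in via -Dkafka_mx4jenable=true.
    static boolean shouldLoadMx4j() {
        return Boolean.parseBoolean(System.getProperty("kafka_mx4jenable", "false"));
    }

    public static void main(String[] args) {
        System.out.println(shouldLoadMx4j()); // "false" unless the property is set
        System.setProperty("kafka_mx4jenable", "true");
        System.out.println(shouldLoadMx4j()); // "true"
    }
}
```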








[jira] [Commented] (KAFKA-5843) Mx4jLoader.maybeLoad should only be executed if kafka_mx4jenable is set to true

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154820#comment-16154820
 ] 

ASF GitHub Bot commented on KAFKA-5843:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/3797

KAFKA-5843; Mx4jLoader.maybeLoad should only be executed if 
kafka_mx4jenable is set to true



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-5843

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3797


commit 5c1bd1c09d975dfde7f1831bf5cd091b4440bfa1
Author: Dong Lin 
Date:   2017-09-06T05:11:46Z

KAFKA-5843; Mx4jLoader.maybeLoad should only be executed if 
kafka_mx4jenable is set to true




> Mx4jLoader.maybeLoad should only be executed if kafka_mx4jenable is set to 
> true
> ---
>
> Key: KAFKA-5843
> URL: https://issues.apache.org/jira/browse/KAFKA-5843
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>






[jira] [Commented] (KAFKA-5825) Streams not processing when exactly once is set

2017-09-05 Thread Ryan Worsley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154827#comment-16154827
 ] 

Ryan Worsley commented on KAFKA-5825:
-

Thanks [~guozhang] I don't get any obvious streams issues logged.

The code is attached to the ticket - it's the simplest example I could come up 
with.

Would appreciate someone glancing at it and maybe running it?

> Streams not processing when exactly once is set
> ---
>
> Key: KAFKA-5825
> URL: https://issues.apache.org/jira/browse/KAFKA-5825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
> Environment: EmbeddedKafka running on Windows.  Relevant files 
> attached.
>Reporter: Ryan Worsley
> Attachments: build.sbt, log4j.properties, Tests.scala
>
>





[jira] [Updated] (KAFKA-4116) Specifying 0.0.0.0 in "listeners" doesn't work

2017-09-05 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4116:
-
Fix Version/s: 1.0.0

> Specifying 0.0.0.0 in "listeners" doesn't work
> --
>
> Key: KAFKA-4116
> URL: https://issues.apache.org/jira/browse/KAFKA-4116
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Yuto Kawamura
>Assignee: Yuto Kawamura
> Fix For: 1.0.0
>
>
> The document of {{listeners}} says:
> "Specify hostname as 0.0.0.0 to bind to all interfaces."
> However, when I give a config such as the one below, the started Kafka broker 
> can't join the cluster due to the invalid address advertised on ZK.
> {code}
> listeners=PLAINTEXT://0.0.0.0:9092
> # advertised.listeners=
> {code}
> This is because of:
> - {{advertised.listeners}} which is used as an address to publish on zk 
> defaults to {{listeners}}
> - KafkaHealthcheck#register isn't considering the host "0.0.0.0" as a special 
> case : 
> https://github.com/apache/kafka/blob/8f3462552fa4d6a6d70a837c2ef7439bba512657/core/src/main/scala/kafka/server/KafkaHealthcheck.scala#L60-L61
> h3. Proof
> Test environment:
> - kafka-broker version 0.10.1.0-SNAPSHOT(build from trunk)
> - Brokers HOST-A, HOST-B, HOST-C
> - Controller: HOST-A
> - topic-A has 3 replicas, 3 partitions
> Update HOST-B's server.properties with updating listeners to below and 
> restart the broker.
> {code}
> listeners=PLAINTEXT://0.0.0.0:9092
> {code}
> Then HOST-B registers its broker info to ZK path {{/brokers/ids/2}}, but 
> "0.0.0.0" is used as its host:
> {code}
> [zk: ZKHOST1:2181,ZKHOST2:2181,ZKHOST3:2181/kafka-test(CONNECTED) 8] get 
> /brokers/ids/2
> {"jmx_port":12345,"timestamp":"1472796372181","endpoints":["PLAINTEXT://0.0.0.0:9092"],"host":"0.0.0.0","version":3,"port":9092}
> {code}
> The controller tries to send a request to the above address, but of course it 
> will never reach HOST-B.
> controller.log:
> {code}
> [2016-09-02 15:06:12,206] INFO [Controller-1-to-broker-2-send-thread], 
> Controller 1 connected to 0.0.0.0:9092 (id: 2 rack: null) for sending state 
> change requests (kafka.controller.RequestSendThread)
> {code}
> I'm guessing the controller may be sending the request to itself (the Kafka 
> broker running on the same instance), as calling connect("0.0.0.0") results 
> in a connection to localhost, which sounds scary but I haven't dug into it.
> So the ISR is not recovered even though the broker starts up.
> {code}
> ./kafka-topics.sh ... --describe --topic topic-A
> Topic:topic-A   PartitionCount:3ReplicationFactor:3 
> Configs:retention.ms=8640,min.insync.replicas=2
> Topic: topic-A  Partition: 0Leader: 3   Replicas: 3,2,1 Isr: 
> 1,3
> Topic: topic-A  Partition: 1Leader: 1   Replicas: 1,3,2 Isr: 
> 1,3
> Topic: topic-A  Partition: 2Leader: 1   Replicas: 2,1,3 Isr: 
> 1,3
> {code}
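One possible shape for the special-case (an illustrative sketch, not the actual KafkaHealthcheck fix; the helper is hypothetical): resolve the wildcard to the machine's host name before publishing to ZK.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AdvertisedHostSketch {
    // Hypothetical helper: never publish the bind-all wildcard to ZooKeeper;
    // fall back to the local canonical host name instead.
    static String advertisedHost(String listenerHost) throws UnknownHostException {
        if (listenerHost == null || listenerHost.isEmpty() || "0.0.0.0".equals(listenerHost))
            return InetAddress.getLocalHost().getCanonicalHostName();
        return listenerHost;
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(advertisedHost("broker-b.example.com")); // unchanged
        System.out.println(advertisedHost("0.0.0.0")); // resolved local host name
    }
}
```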





[jira] [Assigned] (KAFKA-5841) Open old index files with read-only permission

2017-09-05 Thread huxihx (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-5841:
-

Assignee: huxihx

> Open old index files with read-only permission
> --
>
> Key: KAFKA-5841
> URL: https://issues.apache.org/jira/browse/KAFKA-5841
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>





[jira] [Commented] (KAFKA-5841) Open old index files with read-only permission

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154895#comment-16154895
 ] 

ASF GitHub Bot commented on KAFKA-5841:
---

GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/3798

KAFKA-5841: AbstractIndex should offer `makeReadOnly` method

AbstractIndex should offer a `makeReadOnly` method that changes the 
underlying MappedByteBuffer to read-only.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-5841

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3798


commit a2d97ff6c814368ac7e7eadc63569de36d3965af
Author: huxihx 
Date:   2017-09-06T06:48:54Z

KAFKA-5841: AbstractIndex should offer `makeReadOnly` method as mentioned 
in comments




> Open old index files with read-only permission
> --
>
> Key: KAFKA-5841
> URL: https://issues.apache.org/jira/browse/KAFKA-5841
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>


