[jira] [Created] (KAFKA-5824) Cannot write to key value store provided by ProcessorTopologyTestDriver

2017-09-01 Thread Dmitry Minkovsky (JIRA)
Dmitry Minkovsky created KAFKA-5824:
---

 Summary: Cannot write to key value store provided by 
ProcessorTopologyTestDriver
 Key: KAFKA-5824
 URL: https://issues.apache.org/jira/browse/KAFKA-5824
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Dmitry Minkovsky


I am trying to `put()` to a KeyValueStore that I got from 
ProcessorTopologyTestDriver#getKeyValueStore() as part of setup for a test. The 
JavaDoc endorses this use-case:

 * This is often useful in test cases to pre-populate the store before the 
test case instructs the topology to
 * {@link #process(String, byte[], byte[]) process an input message}, 
and/or to check the store afterward.

However, the `put()` results in the following error: 

{code}
java.lang.IllegalStateException: This should not happen as offset() should only be called while a record is processed

    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.offset(AbstractProcessorContext.java:139)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:193)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.put(CachingKeyValueStore.java:188)
    at pony.UserEntityTopologySupplierTest.confirm-settings-requests(UserEntityTopologySupplierTest.groovy:81)
{code}

This error seems straightforward: I am not doing the `put` within the context 
of stream processing. How do I reconcile this with the fact that I am trying to 
populate the store for a test, which the JavaDoc endorses?
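
For reference, a minimal sketch of the failing setup (the config, builder, store name, and key/value here are hypothetical):

{code}
// Pre-populate the store before processing any input, as the JavaDoc suggests.
ProcessorTopologyTestDriver driver = new ProcessorTopologyTestDriver(config, builder);
KeyValueStore<String, String> store = driver.getKeyValueStore("user-store");
store.put("user-1", "initial-settings");  // fails with the IllegalStateException above
{code}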

Thank you,
Dmitry





[jira] [Created] (KAFKA-5823) Update Docs

2017-09-01 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5823:
--

 Summary: Update Docs
 Key: KAFKA-5823
 URL: https://issues.apache.org/jira/browse/KAFKA-5823
 Project: Kafka
  Issue Type: Sub-task
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax








[jira] [Commented] (KAFKA-5822) Consistent logging of topic partitions

2017-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151348#comment-16151348
 ] 

ASF GitHub Bot commented on KAFKA-5822:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3778

KAFKA-5822: Consistent log formatting of topic partitions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5822

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3778.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3778


commit a40373c229510fcadd6156253a59f6d7badf5d47
Author: Jason Gustafson 
Date:   2017-09-01T23:35:05Z

KAFKA-5822: Consistent log formatting of topic partitions




> Consistent logging of topic partitions
> --
>
> Key: KAFKA-5822
> URL: https://issues.apache.org/jira/browse/KAFKA-5822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>  Labels: newbie
>
> In some cases partitions are logged as "[topic,partition]" and in others as 
> "topic-partition." It would be nice to standardize to make searching easier.





[jira] [Commented] (KAFKA-5822) Consistent logging of topic partitions

2017-09-01 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151271#comment-16151271
 ] 

Richard Yu commented on KAFKA-5822:
---

Is it fine if I contribute to this case?

Thanks.

> Consistent logging of topic partitions
> --
>
> Key: KAFKA-5822
> URL: https://issues.apache.org/jira/browse/KAFKA-5822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>  Labels: newbie
>
> In some cases partitions are logged as "[topic,partition]" and in others as 
> "topic-partition." It would be nice to standardize to make searching easier.





[jira] [Assigned] (KAFKA-5822) Consistent logging of topic partitions

2017-09-01 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-5822:
--

Assignee: Jason Gustafson

> Consistent logging of topic partitions
> --
>
> Key: KAFKA-5822
> URL: https://issues.apache.org/jira/browse/KAFKA-5822
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>  Labels: newbie
>
> In some cases partitions are logged as "[topic,partition]" and in others as 
> "topic-partition." It would be nice to standardize to make searching easier.





[jira] [Commented] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-09-01 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151237#comment-16151237
 ] 

Richard Yu commented on KAFKA-4468:
---

[~bbejeck] [~guozhang] This commit seems to be ready for merging; please check 
whether it is suitable.

Have a nice long weekend!

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set its 
> end timestamp correctly but just read it as an {{UnlimitedWindow}}. We should 
> fix this by calculating its end timestamp as mentioned above.
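
In other words, the end bound can be recomputed at deserialization time. A minimal sketch of that calculation (the method below is an illustration, not the actual {{WindowedDeserializer}} code):

{code}
// Hedged sketch: recover the window end from the persisted start timestamp,
// instead of deserializing into an UnlimitedWindow.
static long windowEnd(long startTimestamp, long windowSizeMs) {
    return startTimestamp + windowSizeMs;  // end = start + window_length
}
{code}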





[jira] [Updated] (KAFKA-5659) Fix error handling, efficiency issue in AdminClient#describeConfigs

2017-09-01 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-5659:
---
Summary: Fix error handling, efficiency issue in 
AdminClient#describeConfigs  (was: AdminClient#describeConfigs should handle 
broker future failures correctly)

> Fix error handling, efficiency issue in AdminClient#describeConfigs
> ---
>
> Key: KAFKA-5659
> URL: https://issues.apache.org/jira/browse/KAFKA-5659
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1
>
>
> It currently fails the "unified futures" instead of the "broker future". It 
> also makes an extra empty request when only broker info is requested.
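
For illustration, the call path in question looks roughly like this (a hedged sketch; class names follow the AdminClient API, the broker id "0" is hypothetical, and exception handling is elided):

{code}
// Describe a single broker's configs.
ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
DescribeConfigsResult result = adminClient.describeConfigs(Collections.singleton(broker));
// Per the issue, a broker-side failure surfaced on the unified all() future
// rather than on the per-resource future in result.values().
Config brokerConfig = result.values().get(broker).get();
{code}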





[jira] [Created] (KAFKA-5822) Consistent logging of topic partitions

2017-09-01 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5822:
--

 Summary: Consistent logging of topic partitions
 Key: KAFKA-5822
 URL: https://issues.apache.org/jira/browse/KAFKA-5822
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


In some cases partitions are logged as "[topic,partition]" and in others as 
"topic-partition." It would be nice to standardize to make searching easier.





[jira] [Commented] (KAFKA-5815) Add Printed class and KStream#print(Printed)

2017-09-01 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16151043#comment-16151043
 ] 

Matthias J. Sax commented on KAFKA-5815:


PR: https://github.com/apache/kafka/pull/3768

> Add Printed class and KStream#print(Printed)
> 
>
> Key: KAFKA-5815
> URL: https://issues.apache.org/jira/browse/KAFKA-5815
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> Add Printed class and KStream#print(Printed).
> Deprecate all other print and writeAsText methods.
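
Per KIP-182, the new call shape looks roughly like this (a hedged sketch; the topic name, types, and label are assumptions):

{code}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Printed;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, Long> counts = builder.stream("counts-topic");
// Replaces the deprecated print()/writeAsText() variants.
counts.print(Printed.<String, Long>toSysOut().withLabel("counts"));
{code}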





[jira] [Updated] (KAFKA-4860) Kafka batch files does not support path with spaces

2017-09-01 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4860:
---
Fix Version/s: 1.0.0

> Kafka batch files does not support path with spaces
> ---
>
> Key: KAFKA-4860
> URL: https://issues.apache.org/jira/browse/KAFKA-4860
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
> Environment: windows
>Reporter: Vladimír Kleštinec
>Priority: Minor
> Fix For: 1.0.0
>
>
> When we install Kafka on Windows to a path that contains spaces, e.g. C:\Program 
> Files\ApacheKafka, the batch files located in bin/windows don't work.
> Workaround: install on a path without spaces.





[jira] [Created] (KAFKA-5821) Intermittent test failure in SaslPlainSslEndToEndAuthorizationTest.testAcls

2017-09-01 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5821:
-

 Summary: Intermittent test failure in 
SaslPlainSslEndToEndAuthorizationTest.testAcls
 Key: KAFKA-5821
 URL: https://issues.apache.org/jira/browse/KAFKA-5821
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From 
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/7245/testReport/junit/kafka.api/SaslPlainSslEndToEndAuthorizationTest/testAcls/ :
{code}
java.lang.SecurityException: zookeeper.set.acl is true, but the verification of the JAAS login file failed.
    at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:192)
    at kafka.utils.TestUtils$.createServer(TestUtils.scala:134)
    at kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:94)
    at kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:93)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:93)
    at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:66)
    at kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:158)
    at kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:48)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
    at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
    at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
    at com.sun.proxy.$Proxy1.processTestClass(Unknown Source)
    at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
    at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.gradle.internal.dispatch.Reflection

[jira] [Commented] (KAFKA-5759) Allow user to specify relative path as log directory

2017-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150805#comment-16150805
 ] 

ASF GitHub Bot commented on KAFKA-5759:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3709


> Allow user to specify relative path as log directory
> 
>
> Key: KAFKA-5759
> URL: https://issues.apache.org/jira/browse/KAFKA-5759
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>Priority: Critical
> Fix For: 1.0.0
>
>






[jira] [Commented] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-09-01 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150802#comment-16150802
 ] 

Matthias J. Sax commented on KAFKA-4468:


Seems like Bill is pretty responsive on the PR. :) I can also have a look, but 
maybe that's not required and a committer can just do a final review and merge. 
Let's see.

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set its 
> end timestamp correctly but just read it as an {{UnlimitedWindow}}. We should 
> fix this by calculating its end timestamp as mentioned above.





[jira] [Updated] (KAFKA-5785) Always close connection if KafkaChannel.setSend throws exception

2017-09-01 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5785:
---
Fix Version/s: (was: 1.0.0)

> Always close connection if KafkaChannel.setSend throws exception
> 
>
> Key: KAFKA-5785
> URL: https://issues.apache.org/jira/browse/KAFKA-5785
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> The code is currently:
> {code}
> try {
> channel.setSend(send);
> } catch (CancelledKeyException e) {
> this.failedSends.add(connectionId);
> close(channel, false);
> }
> {code}
> This is generally OK, but if another exception is thrown (typically due to a 
> bug), we leak the connection.
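
A hedged sketch of the hardening this implies (the actual fix landed via KAFKA-5607, so this is illustrative only):

{code}
try {
    channel.setSend(send);
} catch (RuntimeException e) {
    // Close on any failure, not only CancelledKeyException,
    // so the connection is never leaked.
    this.failedSends.add(connectionId);
    close(channel, false);
    throw e;  // still surface the underlying bug
}
{code}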





[jira] [Resolved] (KAFKA-5785) Always close connection if KafkaChannel.setSend throws exception

2017-09-01 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5785.

Resolution: Duplicate

KAFKA-5607 includes a fix for this.

> Always close connection if KafkaChannel.setSend throws exception
> 
>
> Key: KAFKA-5785
> URL: https://issues.apache.org/jira/browse/KAFKA-5785
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> The code is currently:
> {code}
> try {
> channel.setSend(send);
> } catch (CancelledKeyException e) {
> this.failedSends.add(connectionId);
> close(channel, false);
> }
> {code}
> This is generally OK, but if another exception is thrown (typically due to a 
> bug), we leak the connection.





[jira] [Commented] (KAFKA-5820) Remove unneeded synchronized keyword in StreamThread

2017-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150729#comment-16150729
 ] 

ASF GitHub Bot commented on KAFKA-5820:
---

GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/3777

KAFKA-5820 Remove unneeded synchronized keyword in StreamThread

I removed the synchronized keyword from 3 methods.

I ran the change through the streams module, where the test suite passed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3777.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3777


commit ef79c50e5e20236c589bbea2588fa7e11a892953
Author: tedyu 
Date:   2017-09-01T15:28:14Z

KAFKA-5820 Remove unneeded synchronized keyword in StreamThread




> Remove unneeded synchronized keyword in StreamThread
> 
>
> Key: KAFKA-5820
> URL: https://issues.apache.org/jira/browse/KAFKA-5820
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> There are three methods in StreamThread which have an unnecessary synchronized 
> keyword, since the variable accessed, state, is volatile:
> isRunningAndNotRebalancing
> isRunning
> shutdown
> The synchronized keyword can be dropped for these methods.





[jira] [Created] (KAFKA-5820) Remove unneeded synchronized keyword in StreamThread

2017-09-01 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5820:
-

 Summary: Remove unneeded synchronized keyword in StreamThread
 Key: KAFKA-5820
 URL: https://issues.apache.org/jira/browse/KAFKA-5820
 Project: Kafka
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


There are three methods in StreamThread which have an unnecessary synchronized 
keyword, since the variable accessed, state, is volatile:

isRunningAndNotRebalancing
isRunning
shutdown

The synchronized keyword can be dropped for these methods.
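
A minimal sketch of the pattern (names mirror StreamThread but are assumptions here):

{code}
enum State { RUNNING, PENDING_SHUTDOWN; boolean isRunning() { return this == RUNNING; } }

private volatile State state = State.RUNNING;

// Reads of a volatile field are already safe across threads, so the
// method-level synchronized keyword adds nothing here.
public boolean isRunning() {       // 'synchronized' dropped
    return state.isRunning();      // single volatile read
}
{code}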





[jira] [Commented] (KAFKA-5726) KafkaConsumer.subscribe() overload that takes just Pattern without ConsumerRebalanceListener

2017-09-01 Thread Attila Kreiner (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150708#comment-16150708
 ] 

Attila Kreiner commented on KAFKA-5726:
---

The 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-191%3A+KafkaConsumer.subscribe%28%29+overload+that+takes+just+Pattern]
 has been accepted.

> KafkaConsumer.subscribe() overload that takes just Pattern without 
> ConsumerRebalanceListener
> 
>
> Key: KAFKA-5726
> URL: https://issues.apache.org/jira/browse/KAFKA-5726
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: Yeva Byzek
>Assignee: Attila Kreiner
>Priority: Minor
>  Labels: needs-kip, newbie, usability
>
> Request: provide a {{subscribe(Pattern pattern)}} overload, similar to 
> {{subscribe(Collection<String> topics)}}.
> Today, for a consumer to subscribe to topics based on a regular expression 
> (i.e. {{Pattern}}), the only method option also requires passing in a 
> {{ConsumerRebalanceListener}}. Requiring this second argument is not 
> user-friendly; it seems {{new NoOpConsumerRebalanceListener()}} has to be 
> used.
> Use case: multi datacenter, allowing easier subscription to multiple topics 
> prefixed with datacenter names, just by using a pattern subscription.
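
For illustration, the requested call shape (the consumer instance and topic prefix are assumptions):

{code}
import java.util.regex.Pattern;

// Requested overload: subscribe by pattern without a rebalance listener.
// Today this requires subscribe(pattern, new NoOpConsumerRebalanceListener()).
consumer.subscribe(Pattern.compile("dc1-.*"));  // hypothetical datacenter prefix
{code}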





[jira] [Updated] (KAFKA-5659) AdminClient#describeConfigs should handle broker future failures correctly

2017-09-01 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5659:
---
Description: It currently fails the "unified futures" instead of the 
"broker future". It also makes an extra empty request when only broker info is 
requested.  (was: AdminClient#describeConfigs makes an extra empty request when 
only broker info is requested)

> AdminClient#describeConfigs should handle broker future failures correctly
> --
>
> Key: KAFKA-5659
> URL: https://issues.apache.org/jira/browse/KAFKA-5659
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1
>
>
> It currently fails the "unified futures" instead of the "broker future". It 
> also makes an extra empty request when only broker info is requested.





[jira] [Updated] (KAFKA-5659) AdminClient#describeConfigs should handle broker future failures correctly

2017-09-01 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5659:
---
Summary: AdminClient#describeConfigs should handle broker future failures 
correctly  (was: AdminClient#describeConfigs makes an extra empty request when 
only broker info is requested)

> AdminClient#describeConfigs should handle broker future failures correctly
> --
>
> Key: KAFKA-5659
> URL: https://issues.apache.org/jira/browse/KAFKA-5659
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1
>
>
> AdminClient#describeConfigs makes an extra empty request when only broker 
> info is requested





[jira] [Updated] (KAFKA-5659) AdminClient#describeConfigs makes an extra empty request when only broker info is requested

2017-09-01 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5659:
---
Fix Version/s: 0.11.0.1

> AdminClient#describeConfigs makes an extra empty request when only broker 
> info is requested
> ---
>
> Key: KAFKA-5659
> URL: https://issues.apache.org/jira/browse/KAFKA-5659
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1
>
>
> AdminClient#describeConfigs makes an extra empty request when only broker 
> info is requested





[jira] [Created] (KAFKA-5819) Add Joined class and relevant KStream join overloads

2017-09-01 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5819:
-

 Summary: Add Joined class and relevant KStream join overloads
 Key: KAFKA-5819
 URL: https://issues.apache.org/jira/browse/KAFKA-5819
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 1.0.0


Add the {{Joined}} class as defined in KIP-182 and the following overloads to 
{{KStream}}
{code}
<VO, VR> KStream<K, VR> join(final KStream<K, VO> other, final ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final Joined<K, V, VO> options);

<VT, VR> KStream<K, VR> join(final KTable<K, VT> other, final ValueJoiner<? super V, ? super VT, ? extends VR> joiner, final Joined<K, V, VT> options);

<VO, VR> KStream<K, VR> leftJoin(final KStream<K, VO> other, final ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final Joined<K, V, VO> options);

<VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other, final ValueJoiner<? super V, ? super VT, ? extends VR> joiner, final Joined<K, V, VT> options);

<VO, VR> KStream<K, VR> outerJoin(final KStream<K, VO> other, final ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final Joined<K, V, VO> options);
{code}
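
For illustration, a hedged usage sketch of the windowed stream-stream overload (topic names, types, and serdes are assumptions):

{code}
import java.util.concurrent.TimeUnit;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, Long> left = builder.stream("left-topic");
KStream<String, String> right = builder.stream("right-topic");

KStream<String, String> joined = left.join(
    right,
    (lv, rv) -> lv + "/" + rv,                     // ValueJoiner
    JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),  // 5-minute join window
    Joined.with(Serdes.String(), Serdes.Long(), Serdes.String()));
{code}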





[jira] [Commented] (KAFKA-5819) Add Joined class and relevant KStream join overloads

2017-09-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150295#comment-16150295
 ] 

ASF GitHub Bot commented on KAFKA-5819:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3776

KAFKA-5819: Add Joined class and relevant KStream join overloads

Add the `Joined` class and the overloads to `KStream` that use it.
Deprecate existing methods that have `Serde` params

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kip-182-stream-join

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3776.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3776


commit 5be7a49c245f762479ec02c7e1625a1da882bde9
Author: Damian Guy 
Date:   2017-09-01T08:44:16Z

add Joined class and overloads for KStream#join

commit d3991b13e5fd6f9c926c1d6cf3d4e15e77eea7ba
Author: Damian Guy 
Date:   2017-09-01T09:05:39Z

KStream#leftJoin(KStream...)

commit d8b007304e462b7d0e59e3fb4d4f2fa10a78d05b
Author: Damian Guy 
Date:   2017-09-01T09:25:13Z

stream table leftJoin

commit eeed855e153df766f1341469132f45fff62a13ce
Author: Damian Guy 
Date:   2017-09-01T09:40:01Z

outerJoin




> Add Joined class and relevant KStream join overloads
> 
>
> Key: KAFKA-5819
> URL: https://issues.apache.org/jira/browse/KAFKA-5819
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> Add the {{Joined}} class as defined in KIP-182 and the following overloads to 
> {{KStream}}
> {code}
> <VO, VR> KStream<K, VR> join(final KStream<K, VO> other, final ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final Joined<K, V, VO> options);
> 
> <VT, VR> KStream<K, VR> join(final KTable<K, VT> other, final ValueJoiner<? super V, ? super VT, ? extends VR> joiner, final Joined<K, V, VT> options);
> 
> <VO, VR> KStream<K, VR> leftJoin(final KStream<K, VO> other, final ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final Joined<K, V, VO> options);
> 
> <VT, VR> KStream<K, VR> leftJoin(final KTable<K, VT> other, final ValueJoiner<? super V, ? super VT, ? extends VR> joiner, final Joined<K, V, VT> options);
> 
> <VO, VR> KStream<K, VR> outerJoin(final KStream<K, VO> other, final ValueJoiner<? super V, ? super VO, ? extends VR> joiner, final JoinWindows windows, final Joined<K, V, VO> options);
> {code}





[jira] [Commented] (KAFKA-4327) Move Reset Tool from core to streams

2017-09-01 Thread Jorge Quilcate (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150259#comment-16150259
 ] 

Jorge Quilcate commented on KAFKA-4327:
---

Thanks [~ijuma], I will move this tool to the `streams` module.

Based on 
[KIP-171](https://cwiki.apache.org/confluence/display/KAFKA/KIP-171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+Application),
to implement changes on this tool we will now need to call 
`ConsumerGroupCommand` to reset offsets. As far as I understand, this would 
require a dependency on `core_main`. Would this be an issue?

I have a naive implementation of KIP-171 here: 
https://github.com/jeqo/kafka/commits/features/kip-171

> Move Reset Tool from core to streams
> 
>
> Key: KAFKA-4327
> URL: https://issues.apache.org/jira/browse/KAFKA-4327
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Jorge Quilcate
>Priority: Minor
>
> This is a follow up on https://issues.apache.org/jira/browse/KAFKA-4008
> Currently, Kafka Streams Application Reset Tool is part of {{core}} module 
> due to ZK dependency. After KIP-4 got merged, this dependency can be dropped 
> and the Reset Tool can be moved to {{streams}} module.
> This should also update {{InternalTopicManager#filterExistingTopics}}, which 
> refers to the ResetTool in an exception message:
> {{"Use 'kafka.tools.StreamsResetter' tool"}}
> -> {{"Use '" + kafka.tools.StreamsResetter.getClass().getName() + "' tool"}}
> Doing this JIRA also requires updating the docs with regard to broker 
> backward compatibility -- not all brokers support "topic delete request" and 
> thus the reset tool will not be backward compatible with all broker versions.





[jira] [Comment Edited] (KAFKA-5060) Offset not found while broker is rebuilding its index after an index corruption

2017-09-01 Thread Spiros Ioannou (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150200#comment-16150200
 ] 

Spiros Ioannou edited comment on KAFKA-5060 at 9/1/17 8:31 AM:
---

Well, it seems we found the issue: we had systemd stop kafka, and the default 
stop timeout is 90 seconds. After 90 seconds systemd kills the process with 
SIGKILL. Raising the stop timeout to 400 seconds stopped the production of such 
errors. It seems kafka takes 3 minutes to shut down after the initial SIGTERM, 
mostly removing fetchers from partitions. (We have 3 kafka nodes, replication 
2, 1000 partitions * 4 topics.)
Here's the working systemd unit file for reference:


{noformat}
[Unit]
Description=Kafka
After=network.target

[Service]
Type=simple
Environment="KAFKA_OPTS=-XX:ParallelGCThreads=4"
Environment="JAVA_HOME=/opt/jdk8"

#Override environment:
EnvironmentFile=/etc/sysconfig/kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh 
/opt/kafka-config-0.11/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
TimeoutStopSec=400
PIDFile=/run/kafka.pid
Restart=on-failure
RestartSec=10
LimitNOFILE=30

[Install]
WantedBy=multi-user.target
{noformat}



was (Author: sivann):
Well, it seems we found the issue: we had systemd stop kafka, and the default 
stop timeout is 90 seconds. After 90 seconds systemd kills the process with 
SIGKILL. Raising the stop timeout to 400 seconds stopped the production of such 
errors. It seems kafka takes 3 minutes to shut down after the initial SIGTERM, 
mostly removing fetchers from partitions. (We have 3 kafka nodes, replication 
2, 1000 partitions * 4 topics.)
Here's the working systemd unit for reference:


{noformat}
[Unit]
Description=Kafka
After=network.target

[Service]
Type=simple
Environment="KAFKA_OPTS=-XX:ParallelGCThreads=4"
Environment="JAVA_HOME=/opt/jdk8"

#Override environment:
EnvironmentFile=/etc/sysconfig/kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh 
/opt/kafka-config-0.11/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
TimeoutStopSec=400
PIDFile=/run/kafka.pid
Restart=on-failure
RestartSec=10
LimitNOFILE=30

[Install]
WantedBy=multi-user.target
{noformat}


> Offset not found while broker is rebuilding its index after an index 
> corruption
> ---
>
> Key: KAFKA-5060
> URL: https://issues.apache.org/jira/browse/KAFKA-5060
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Romaric Parmentier
>Priority: Critical
>
> After rebooting our kafka servers to change a configuration, one of my 
> consumers running the old consumer failed to find a new leader for a period of 
> 15 minutes. The topic has a replication factor of 2.
> When the spare server was finally found and elected leader, the previously 
> consumed offset could not be found because the broker was rebuilding its 
> index.
> So my consumer decided to follow the configuration auto.offset.reset, which 
> is pretty bad because the offset would exist 2 minutes later:
> 2017-04-12 14:59:08,568] WARN Found a corrupted index file due to requirement 
> failed: Corrupt index found, index file 
> (/var/lib/kafka/my_topic-6/130248110337.index) has non-zero size but 
> the last offset is 130248110337 which is no larger than the base offset 
> 130248110337.}. deleting 
> /var/lib/kafka/my_topic-6/130248110337.timeindex, 
> /var/lib/kafka/my_topic-6/130248110337.index and rebuilding index... 
> (kafka.log.Log)
> [2017-04-12 15:01:41,490] INFO Completed load of log my_topic-6 with 6146 log 
> segments and log end offset 130251895436 in 169696 ms (kafka.log.Log)
> Maybe it is handled by the new consumer, or there is some configuration to 
> handle this case, but I didn't find anything.





[jira] [Comment Edited] (KAFKA-5060) Offset not found while broker is rebuilding its index after an index corruption

2017-09-01 Thread Spiros Ioannou (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150200#comment-16150200
 ] 

Spiros Ioannou edited comment on KAFKA-5060 at 9/1/17 8:31 AM:
---

Well, it seems we found the issue: we had systemd stop kafka, and the default 
stop timeout is 90 seconds. After 90 seconds systemd kills the process with 
SIGKILL. Raising the stop timeout to 400 seconds stopped the production of such 
errors. It seems kafka takes 3 minutes to shut down after the initial SIGTERM, 
mostly removing fetchers from partitions. (We have 3 kafka nodes, replication 
2, 1000 partitions * 4 topics.)
Here's the working systemd unit for reference:


{noformat}
[Unit]
Description=Kafka
After=network.target

[Service]
Type=simple
Environment="KAFKA_OPTS=-XX:ParallelGCThreads=4"
Environment="JAVA_HOME=/opt/jdk8"

#Override environment:
EnvironmentFile=/etc/sysconfig/kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh 
/opt/kafka-config-0.11/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
TimeoutStopSec=400
PIDFile=/run/kafka.pid
Restart=on-failure
RestartSec=10
LimitNOFILE=30

[Install]
WantedBy=multi-user.target
{noformat}



was (Author: sivann):
Well, it seems we found the issue: we had systemd stop kafka, and the default 
stop timeout is 90 seconds. After 90 seconds systemd kills the process with 
SIGKILL. Raising the stop timeout to 400 seconds stopped the production of such 
errors. It seems kafka takes 3 minutes to shut down after the initial SIGTERM, 
mostly removing fetchers from partitions. (We have 3 kafka nodes, replication 
2, 1000 partitions * 4 topics.)

> Offset not found while broker is rebuilding its index after an index 
> corruption
> ---
>
> Key: KAFKA-5060
> URL: https://issues.apache.org/jira/browse/KAFKA-5060
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Romaric Parmentier
>Priority: Critical
>
> After rebooting our kafka servers to change a configuration, one of my 
> consumers running the old consumer failed to find a new leader for a period of 
> 15 minutes. The topic has a replication factor of 2.
> When the spare server was finally found and elected leader, the previously 
> consumed offset could not be found because the broker was rebuilding its 
> index.
> So my consumer decided to follow the configuration auto.offset.reset, which 
> is pretty bad because the offset would exist 2 minutes later:
> 2017-04-12 14:59:08,568] WARN Found a corrupted index file due to requirement 
> failed: Corrupt index found, index file 
> (/var/lib/kafka/my_topic-6/130248110337.index) has non-zero size but 
> the last offset is 130248110337 which is no larger than the base offset 
> 130248110337.}. deleting 
> /var/lib/kafka/my_topic-6/130248110337.timeindex, 
> /var/lib/kafka/my_topic-6/130248110337.index and rebuilding index... 
> (kafka.log.Log)
> [2017-04-12 15:01:41,490] INFO Completed load of log my_topic-6 with 6146 log 
> segments and log end offset 130251895436 in 169696 ms (kafka.log.Log)
> Maybe it is handled by the new consumer, or there is some configuration to 
> handle this case, but I didn't find anything.





[jira] [Commented] (KAFKA-5060) Offset not found while broker is rebuilding its index after an index corruption

2017-09-01 Thread Romaric Parmentier (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150201#comment-16150201
 ] 

Romaric Parmentier commented on KAFKA-5060:
---

Thank you for your email. I’m out of the office and will be back on September 
11. During this period I will have limited access to my email.
For immediate assistance please contact Yohan Sanchez (ysanc...@freewheel.tv) 
or Antoine Bonavita (abonav...@freewheel.tv)

Best Regards,

Romaric


> Offset not found while broker is rebuilding its index after an index 
> corruption
> ---
>
> Key: KAFKA-5060
> URL: https://issues.apache.org/jira/browse/KAFKA-5060
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Romaric Parmentier
>Priority: Critical
>
> After rebooting our kafka servers to change a configuration, one of my 
> consumers running the old consumer failed to find a new leader for a period of 
> 15 minutes. The topic has a replication factor of 2.
> When the spare server was finally found and elected leader, the previously 
> consumed offset could not be found because the broker was rebuilding its 
> index.
> So my consumer decided to follow the configuration auto.offset.reset, which 
> is pretty bad because the offset would exist 2 minutes later:
> 2017-04-12 14:59:08,568] WARN Found a corrupted index file due to requirement 
> failed: Corrupt index found, index file 
> (/var/lib/kafka/my_topic-6/130248110337.index) has non-zero size but 
> the last offset is 130248110337 which is no larger than the base offset 
> 130248110337.}. deleting 
> /var/lib/kafka/my_topic-6/130248110337.timeindex, 
> /var/lib/kafka/my_topic-6/130248110337.index and rebuilding index... 
> (kafka.log.Log)
> [2017-04-12 15:01:41,490] INFO Completed load of log my_topic-6 with 6146 log 
> segments and log end offset 130251895436 in 169696 ms (kafka.log.Log)
> Maybe it is handled by the new consumer, or there is some configuration to 
> handle this case, but I didn't find anything.





[jira] [Commented] (KAFKA-5060) Offset not found while broker is rebuilding its index after an index corruption

2017-09-01 Thread Spiros Ioannou (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150200#comment-16150200
 ] 

Spiros Ioannou commented on KAFKA-5060:
---

Well, it seems we found the issue: we had systemd stop kafka, and the default 
stop timeout is 90 seconds. After 90 seconds systemd kills the process with 
SIGKILL. Raising the stop timeout to 400 seconds stopped the production of such 
errors. It seems kafka takes 3 minutes to shut down after the initial SIGTERM, 
mostly removing fetchers from partitions. (We have 3 kafka nodes, replication 
2, 1000 partitions * 4 topics.)

> Offset not found while broker is rebuilding its index after an index 
> corruption
> ---
>
> Key: KAFKA-5060
> URL: https://issues.apache.org/jira/browse/KAFKA-5060
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Romaric Parmentier
>Priority: Critical
>
> After rebooting our kafka servers to change a configuration, one of my 
> consumers running the old consumer failed to find a new leader for a period of 
> 15 minutes. The topic has a replication factor of 2.
> When the spare server was finally found and elected leader, the previously 
> consumed offset could not be found because the broker was rebuilding its 
> index.
> So my consumer decided to follow the configuration auto.offset.reset, which 
> is pretty bad because the offset would exist 2 minutes later:
> 2017-04-12 14:59:08,568] WARN Found a corrupted index file due to requirement 
> failed: Corrupt index found, index file 
> (/var/lib/kafka/my_topic-6/130248110337.index) has non-zero size but 
> the last offset is 130248110337 which is no larger than the base offset 
> 130248110337.}. deleting 
> /var/lib/kafka/my_topic-6/130248110337.timeindex, 
> /var/lib/kafka/my_topic-6/130248110337.index and rebuilding index... 
> (kafka.log.Log)
> [2017-04-12 15:01:41,490] INFO Completed load of log my_topic-6 with 6146 log 
> segments and log end offset 130251895436 in 169696 ms (kafka.log.Log)
> Maybe it is handled by the new consumer, or there is some configuration to 
> handle this case, but I didn't find anything.





[jira] [Resolved] (KAFKA-4264) kafka-server-stop.sh fails if Kafka launched via kafka-server-start.sh

2017-09-01 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4264.
--
Resolution: Duplicate

This is related to KAFKA-4931. A PR is available for KAFKA-4931.

> kafka-server-stop.sh fails if Kafka launched via kafka-server-start.sh
> --
>
> Key: KAFKA-4264
> URL: https://issues.apache.org/jira/browse/KAFKA-4264
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.10.0.1
> Environment: Tested in Debian Jessy
>Reporter: Alex Schmitz
>Priority: Trivial
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> kafka-server-stop.sh greps for the process ID to kill with the following: 
> bq. PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $1}')
> However, if Kafka is launched via the kafka-server-start.sh script, the 
> process doesn't include kafka.Kafka, the grep fails to find the process, and 
> it returns the failure message, No Kafka server to stop. 





[jira] [Commented] (KAFKA-4763) Handle disk failure for JBOD (KIP-112)

2017-09-01 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150165#comment-16150165
 ] 

Genmao Yu commented on KAFKA-4763:
--

Please feel free to give feedback whenever you have time. What confuses me is 
why you think it is wrong to re-create those lost replicas on a good log 
directory. :)

> Handle disk failure for JBOD (KIP-112)
> --
>
> Key: KAFKA-4763
> URL: https://issues.apache.org/jira/browse/KAFKA-4763
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.0
>
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>  for motivation and design.





[jira] [Commented] (KAFKA-4763) Handle disk failure for JBOD (KIP-112)

2017-09-01 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150161#comment-16150161
 ] 

Dong Lin commented on KAFKA-4763:
-

And BTW, the current design already does what you suggested -- if you have 
removed the failed log directory from the log.dirs, the replica will be 
re-created on the good disks.

> Handle disk failure for JBOD (KIP-112)
> --
>
> Key: KAFKA-4763
> URL: https://issues.apache.org/jira/browse/KAFKA-4763
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.0
>
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>  for motivation and design.





[jira] [Commented] (KAFKA-5766) Very high CPU-load of consumer when broker is down

2017-09-01 Thread Sebastian Bernauer (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150159#comment-16150159
 ] 

Sebastian Bernauer commented on KAFKA-5766:
---

I've managed to get our application running with spring-kafka at version 
2.0.0.M3.
This version uses the kafka-clients at version 0.11.0.0, where our problem was 
fixed.

Thanks for your support!

> Very high CPU-load of consumer when broker is down
> --
>
> Key: KAFKA-5766
> URL: https://issues.apache.org/jira/browse/KAFKA-5766
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Sebastian Bernauer
>
> Hi,
> I have a single broker instance at localhost.
> I set up a Consumer with the following code:
> {code:java}
> Map<String, Object> configs = new HashMap<>();
> configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> configs.put(ConsumerConfig.GROUP_ID_CONFIG, "gh399");
> configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
> configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
> KafkaConsumer<String, String> consumer = new KafkaConsumer<>(configs);
> consumer.assign(Collections.singletonList(new TopicPartition("foo", 0)));
> while (true) {
>     ConsumerRecords<String, String> records = consumer.poll(1000);
>     System.out.println(records.count());
> }
> {code}
> This all works fine, until I shut down the broker.
> If I do so, it causes 100% CPU load in my application.
> After starting the broker again the usage decreases back to a normal level. 
> It would be very nice if you could help me!
> Thanks,
> Sebastian
> Spring-Kafka: 2.0.0.M3
> Kafka: 0.10.2.0
> JDK: 1.8.0_121





[jira] [Commented] (KAFKA-4763) Handle disk failure for JBOD (KIP-112)

2017-09-01 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150157#comment-16150157
 ] 

Dong Lin commented on KAFKA-4763:
-

[~uncleGen] Sorry, I am not ready to start a long discussion here because I am 
currently working on something. If you think it is worth doing, please feel 
free to open a KIP and we can discuss there. Thanks!

> Handle disk failure for JBOD (KIP-112)
> --
>
> Key: KAFKA-4763
> URL: https://issues.apache.org/jira/browse/KAFKA-4763
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.0
>
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>  for motivation and design.





[jira] [Resolved] (KAFKA-4337) Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the specified zookeeper)

2017-09-01 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4337.
--
Resolution: Won't Fix

> Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA 
> API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the 
> specified zookeeper)
> ---
>
> Key: KAFKA-4337
> URL: https://issues.apache.org/jira/browse/KAFKA-4337
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Bharat Patel
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  I want to use the below code snippet to create a topic in multiple ZooKeepers 
> with the Kafka Java APIs. When I specify 2 ZooKeeper IPs in the zookeeperConnect 
> variable, it only creates the topic in one of them. The two ZooKeepers belong to 
> 2 different kafka clusters.
>  String zookeeperConnect = zookeeperIPs; // Multiple zookeeper IPs
>int sessionTimeoutMs = 10 * 1000;
>int connectionTimeoutMs = 8 * 1000;
>try {
> ZkClient zkClient = new ZkClient(
> zookeeperConnect,
> sessionTimeoutMs,
> connectionTimeoutMs,
> ZKStringSerializer$.MODULE$);
> boolean isSecureKafkaCluster = false;
> ZkUtils zkUtils = new ZkUtils(zkClient, 
> new ZkConnection(zookeeperConnect), isSecureKafkaCluster);
>  String topic1 = "nameofTopictobeCreated";
>  int partitions = 1;
>  int replication = 1;
>  Properties topicConfig = new Properties(); // add per-topic 
> configurations settings here
>  AdminUtils.createTopic(zkUtils, topic1, partitions, replication, 
> topicConfig);





[jira] [Commented] (KAFKA-4337) Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the specified zookeeper)

2017-09-01 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150144#comment-16150144
 ] 

Manikumar commented on KAFKA-4337:
--

As mentioned by [~ijuma], the create command creates a topic using the ZK 
cluster of a particular Kafka cluster, so we should pass host:port pairs of the 
same ZK cluster.

> Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA 
> API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the 
> specified zookeeper)
> ---
>
> Key: KAFKA-4337
> URL: https://issues.apache.org/jira/browse/KAFKA-4337
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Bharat Patel
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  I want to use the below code snippet to create a topic in multiple ZooKeepers 
> with the Kafka Java APIs. When I specify 2 ZooKeeper IPs in the zookeeperConnect 
> variable, it only creates the topic in one of them. The two ZooKeepers belong to 
> 2 different kafka clusters.
>  String zookeeperConnect = zookeeperIPs; // Multiple zookeeper IPs
>int sessionTimeoutMs = 10 * 1000;
>int connectionTimeoutMs = 8 * 1000;
>try {
> ZkClient zkClient = new ZkClient(
> zookeeperConnect,
> sessionTimeoutMs,
> connectionTimeoutMs,
> ZKStringSerializer$.MODULE$);
> boolean isSecureKafkaCluster = false;
> ZkUtils zkUtils = new ZkUtils(zkClient, 
> new ZkConnection(zookeeperConnect), isSecureKafkaCluster);
>  String topic1 = "nameofTopictobeCreated";
>  int partitions = 1;
>  int replication = 1;
>  Properties topicConfig = new Properties(); // add per-topic 
> configurations settings here
>  AdminUtils.createTopic(zkUtils, topic1, partitions, replication, 
> topicConfig);





[jira] [Commented] (KAFKA-4763) Handle disk failure for JBOD (KIP-112)

2017-09-01 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150143#comment-16150143
 ] 

Genmao Yu commented on KAFKA-4763:
--

Let me be clear: "remaining disk" means "remaining usable disks", i.e. I have 
more than one disk. If one disk breaks down, we can exclude it from "log.dirs" 
in the broker config and then restart the broker. So, I think it is reasonable 
to recover the lost replicas (assuming the remaining disks are enough to cover 
them). What is your opinion?

> Handle disk failure for JBOD (KIP-112)
> --
>
> Key: KAFKA-4763
> URL: https://issues.apache.org/jira/browse/KAFKA-4763
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.0
>
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>  for motivation and design.


