[jira] [Resolved] (KAFKA-5123) Refactor ZkUtils readData* methods

2017-12-14 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar resolved KAFKA-5123.
--
Resolution: Won't Fix

> Refactor ZkUtils readData* methods 
> ---
>
> Key: KAFKA-5123
> URL: https://issues.apache.org/jira/browse/KAFKA-5123
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
> Fix For: 1.1.0
>
>
> Usually only the data value is required, but every readData method in 
> ZkUtils returns a tuple with the data and the stat.
> https://github.com/apache/kafka/pull/2888
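
As a rough illustration of the annoyance described above (the signature below is a simplified stand-in, not the actual ZkUtils API):

```scala
// Illustrative sketch: readData hands back a (data, stat) tuple, so
// callers that only need the data end up destructuring or calling ._1
// at every call site.
case class Stat(version: Int)

def readData(path: String): (String, Stat) = ("payload", Stat(0))

// The caller wants just the data but must still discard the stat:
val data = readData("/brokers/ids/0")._1
```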



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5102) Refactor core components to use zkUtils methods instead of zkUtils.zkClient

2017-08-04 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar resolved KAFKA-5102.
--
Resolution: Fixed

> Refactor core components to use zkUtils methods instead of zkUtils.zkClient
> ---
>
> Key: KAFKA-5102
> URL: https://issues.apache.org/jira/browse/KAFKA-5102
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor every create*, update*, and getSequenceId method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5123) Refactor ZkUtils readData* methods

2017-06-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5123:
-
Fix Version/s: 0.11.1.0

> Refactor ZkUtils readData* methods 
> ---
>
> Key: KAFKA-5123
> URL: https://issues.apache.org/jira/browse/KAFKA-5123
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> Usually only the data value is required, but every readData method in 
> ZkUtils returns a tuple with the data and the stat.
> https://github.com/apache/kafka/pull/2888



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5388) Replace zkClient.subscribe*Changes method with an equivalent zkUtils method

2017-06-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5388:
-
Fix Version/s: 0.11.1.0

> Replace zkClient.subscribe*Changes method with an equivalent zkUtils method
> ---
>
> Key: KAFKA-5388
> URL: https://issues.apache.org/jira/browse/KAFKA-5388
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
> Fix For: 0.11.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5127) Replace pattern matching with foreach where the case None is unused

2017-06-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5127:
-
Fix Version/s: 0.11.1.0

> Replace pattern matching with foreach where the case None is unused 
> 
>
> Key: KAFKA-5127
> URL: https://issues.apache.org/jira/browse/KAFKA-5127
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> There are various places where pattern matching handles only one case and 
> ignores the None case; these matches can be replaced with foreach.
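
A minimal Scala sketch of the change this issue proposes (variable names are hypothetical):

```scala
// Before: a pattern match whose None arm does nothing.
val maybeTopic: Option[String] = Some("events")
var seen = List.empty[String]

maybeTopic match {
  case Some(topic) => seen = topic :: seen
  case None => // unused arm
}

// After: foreach runs the body only when a value is present,
// expressing the same intent without the empty None case.
maybeTopic.foreach(topic => seen = topic :: seen)
```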



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5134) Replace zkClient.getChildren method with zkUtils.getChildren

2017-06-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5134:
-
Fix Version/s: 0.11.1.0

> Replace zkClient.getChildren method with zkUtils.getChildren
> 
>
> Key: KAFKA-5134
> URL: https://issues.apache.org/jira/browse/KAFKA-5134
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
> Fix For: 0.11.1.0
>
>
> Refactor related classes to use the zkUtils.getChildren method instead of 
> the zkClient variant.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2017-06-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-4307:
-
Fix Version/s: 0.11.1.0

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.1.0
>
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5123) Refactor ZkUtils readData* methods

2017-06-13 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047649#comment-16047649
 ] 

Balint Molnar commented on KAFKA-5123:
--

[~ijuma] what do you think about this?

> Refactor ZkUtils readData* methods 
> ---
>
> Key: KAFKA-5123
> URL: https://issues.apache.org/jira/browse/KAFKA-5123
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> Usually only the data value is required, but every readData method in 
> ZkUtils returns a tuple with the data and the stat.
> https://github.com/apache/kafka/pull/2888



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5388) Replace zkClient.subscribe*Changes method with an equivalent zkUtils method

2017-06-09 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5388:
-
Status: Patch Available  (was: In Progress)

> Replace zkClient.subscribe*Changes method with an equivalent zkUtils method
> ---
>
> Key: KAFKA-5388
> URL: https://issues.apache.org/jira/browse/KAFKA-5388
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5388) Replace zkClient.subscribe*Changes method with an equivalent zkUtils method

2017-06-07 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5388 started by Balint Molnar.

> Replace zkClient.subscribe*Changes method with an equivalent zkUtils method
> ---
>
> Key: KAFKA-5388
> URL: https://issues.apache.org/jira/browse/KAFKA-5388
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5391) Replace zkClient.delete* method with an equivalent zkUtils method

2017-06-06 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5391:
-
Status: Patch Available  (was: Open)

> Replace zkClient.delete* method with an equivalent zkUtils method
> -
>
> Key: KAFKA-5391
> URL: https://issues.apache.org/jira/browse/KAFKA-5391
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5389) Replace zkClient.exists method with zkUtils.pathExists

2017-06-06 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5389:
-
Status: Patch Available  (was: In Progress)

> Replace zkClient.exists method with zkUtils.pathExists
> --
>
> Key: KAFKA-5389
> URL: https://issues.apache.org/jira/browse/KAFKA-5389
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5389) Replace zkClient.exists method with zkUtils.pathExists

2017-06-06 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5389 started by Balint Molnar.

> Replace zkClient.exists method with zkUtils.pathExists
> --
>
> Key: KAFKA-5389
> URL: https://issues.apache.org/jira/browse/KAFKA-5389
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5391) Replace zkClient.delete* method with an equivalent zkUtils method

2017-06-06 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5391:


 Summary: Replace zkClient.delete* method with an equivalent 
zkUtils method
 Key: KAFKA-5391
 URL: https://issues.apache.org/jira/browse/KAFKA-5391
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Balint Molnar
Assignee: Balint Molnar






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5389) Replace zkClient.exists method with zkUtils.pathExists

2017-06-06 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5389:


 Summary: Replace zkClient.exists method with zkUtils.pathExists
 Key: KAFKA-5389
 URL: https://issues.apache.org/jira/browse/KAFKA-5389
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Balint Molnar
Assignee: Balint Molnar






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5388) Replace zkClient.subscribe*Changes method with an equivalent zkUtils method

2017-06-06 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5388:


 Summary: Replace zkClient.subscribe*Changes method with an 
equivalent zkUtils method
 Key: KAFKA-5388
 URL: https://issues.apache.org/jira/browse/KAFKA-5388
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Balint Molnar
Assignee: Balint Molnar






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5184) Transient failure: MultipleListenersWithAdditionalJaasContextTest.testProduceConsume

2017-06-06 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar resolved KAFKA-5184.
--
Resolution: Fixed

Reopen if needed.

> Transient failure: 
> MultipleListenersWithAdditionalJaasContextTest.testProduceConsume
> 
>
> Key: KAFKA-5184
> URL: https://issues.apache.org/jira/browse/KAFKA-5184
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Balint Molnar
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3574/testReport/junit/kafka.server/MultipleListenersWithAdditionalJaasContextTest/testProduceConsume/
> {code}
> Error Message
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
> Stacktrace
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:311)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:811)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:857)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:254)
>   at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:253)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:156)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:253)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3(MultipleListenersWithSameSecurityProtocolBaseTest.scala:109)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3$adapted(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.setUp(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

[jira] [Commented] (KAFKA-5123) Refactor ZkUtils readData* methods

2017-06-02 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034575#comment-16034575
 ] 

Balint Molnar commented on KAFKA-5123:
--

How to refactor the zkUtils.readData method:
* Move all ZkException handling inside ZkUtils.
** Class ConsumerGroupCommand:
*** 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L342
*** This one is easy because we can match for None and then call printError(…).
** Class ConsumerOffsetChecker:
*** 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L172
*** Maybe we can throw a different exception here, or introduce a new one that 
is not related to ZooKeeper.
** Class ZkUtils:
*** 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala#L496
*** We can handle this with None.
* This method should return only the data, without the stat; introduce a new 
method named readDataAndStat that returns both the data and the stat, so we no 
longer need to call the annoying ._1 every time.
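
A hypothetical Scala sketch of the proposed split. The method names come from the comment above, but the in-memory store and signatures are simplified stand-ins for the real ZooKeeper-backed ZkUtils:

```scala
// Sketch only: an in-memory map stands in for ZooKeeper.
case class Stat(version: Int)

object ZkStore {
  private val store = Map("/config/topics/events" -> ("retention=7d", Stat(3)))

  // Keeps the (data, stat) tuple for the few callers that need the stat.
  def readDataAndStat(path: String): (String, Stat) = store(path)

  // The common case: just the data, no ._1 at the call site.
  def readData(path: String): String = readDataAndStat(path)._1
}
```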

How to refactor the zkUtils.readDataMaybeNull method:
* Do not return Some(null); convert Some(null) into None by calling Option() 
instead of Some(). Also rename this method to readData; I do not see why we 
need two separate methods for these two things.
** This method is a little bit tricky because, due to the Some(), no caller 
handles the Some(null) case.
** For example:
*** Class ConsumerGroupCommand:
**** 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L272
**** We call substring on a null string.
*** Class ZookeeperConsumerConnector:
**** 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala#L422
**** We call toLong on a null string.
* This method should also return only the data, without the stat, with a new 
readDataAndStat method introduced alongside it.
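
A small Scala demonstration of why Option(…) is safer than Some(…) for possibly-null data (illustrative values, not Kafka code):

```scala
// Some(null) is a "present" value wrapping null, so later calls such as
// substring or toLong throw a NullPointerException, whereas Option(null)
// normalizes to None and short-circuits safely.
val raw: String = null

val unsafe: Option[String] = Some(raw)   // Some(null): isDefined is true
val safe: Option[String]   = Option(raw) // None

// unsafe.map(_.trim) would throw a NullPointerException;
// safe.map(_.trim) is simply None.
```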

How to refactor the zkUtils.readDataAndVersionMaybeNull method:
* I think we can remove the MaybeNull from its name; otherwise this method 
looks OK to me.

[~ijuma] what do you think? If you agree, I will start to implement this.

> Refactor ZkUtils readData* methods 
> ---
>
> Key: KAFKA-5123
> URL: https://issues.apache.org/jira/browse/KAFKA-5123
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> Usually only the data value is required, but every readData method in 
> ZkUtils returns a tuple with the data and the stat.
> https://github.com/apache/kafka/pull/2888



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-3151) kafka-consumer-groups.sh fail with sasl enabled

2017-05-30 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar resolved KAFKA-3151.
--
Resolution: Not A Problem

> kafka-consumer-groups.sh fail with sasl enabled 
> 
>
> Key: KAFKA-3151
> URL: https://issues.apache.org/jira/browse/KAFKA-3151
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-consumer-groups.sh --new-consumer  --bootstrap-server 
> slave1.otocyon.com:9092 --list
> Error while executing consumer group command Request METADATA failed on 
> brokers List(Node(-1, slave1.otocyon.com, 9092))
> java.lang.RuntimeException: Request METADATA failed on brokers List(Node(-1, 
> slave1.otocyon.com, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:93)
> at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:101)
> at 
> kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:122)
> at 
> kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:126)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:310)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:61)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> same error for:
> bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand  --bootstrap-server 
> slave16:9092,app:9092 --describe --group test-consumer-group  --new-consumer
> Error while executing consumer group command Request GROUP_COORDINATOR failed 
> on brokers List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> java.lang.RuntimeException: Request GROUP_COORDINATOR failed on brokers 
> List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findCoordinator(AdminClient.scala:78)
> at kafka.admin.AdminClient.describeGroup(AdminClient.scala:130)
> at 
> kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:152)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:314)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:84)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describe(ConsumerGroupCommand.scala:302)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:63)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5134) Replace zkClient.getChildren method with zkUtils.getChildren

2017-05-22 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5134:
-
Status: Patch Available  (was: In Progress)

> Replace zkClient.getChildren method with zkUtils.getChildren
> 
>
> Key: KAFKA-5134
> URL: https://issues.apache.org/jira/browse/KAFKA-5134
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor related classes to use the zkUtils.getChildren method instead of 
> the zkClient variant.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5184) Transient failure: MultipleListenersWithAdditionalJaasContextTest.testProduceConsume

2017-05-08 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-5184:


Assignee: Balint Molnar

> Transient failure: 
> MultipleListenersWithAdditionalJaasContextTest.testProduceConsume
> 
>
> Key: KAFKA-5184
> URL: https://issues.apache.org/jira/browse/KAFKA-5184
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Balint Molnar
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3574/testReport/junit/kafka.server/MultipleListenersWithAdditionalJaasContextTest/testProduceConsume/
> {code}
> Error Message
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
> Stacktrace
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:311)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:811)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:857)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:254)
>   at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:253)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:156)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:253)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3(MultipleListenersWithSameSecurityProtocolBaseTest.scala:109)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3$adapted(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.setUp(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

[jira] [Comment Edited] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-05 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15998175#comment-15998175
 ] 

Balint Molnar edited comment on KAFKA-5173 at 5/5/17 11:23 AM:
---

[~rsivaram][~ijuma] maybe we can log the jaasContext before writing it to file, 
in method {code}SaslSetup#writeJaasConfigurationToFile{code}


was (Author: baluchicken):
[~rsivaram][~ijuma] maybe we can log the jaasContext before writing it to file, 
in method {code}SaslSetup#writeJaasConfigurationToFile{code}.

> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-05 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15998175#comment-15998175
 ] 

Balint Molnar commented on KAFKA-5173:
--

[~rsivaram][~ijuma] maybe we can log the jaasContext before writing it to file, 
in method {code}SaslSetup#writeJaasConfigurationToFile{code}.

> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-04 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997134#comment-15997134
 ] 

Balint Molnar commented on KAFKA-5173:
--

I reran this test 50 times on my computer and no failure occurred. I also 
checked my code, but so far I have no idea whether this one is related to 
KAFKA-4703.

> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.<init>(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5127) Replace pattern matching with foreach where the case None is unused

2017-05-02 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5127:
-
Status: Patch Available  (was: In Progress)

> Replace pattern matching with foreach where the case None is unused 
> 
>
> Key: KAFKA-5127
> URL: https://issues.apache.org/jira/browse/KAFKA-5127
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> There are various places where pattern matching is used to match only one 
> case while ignoring the None case; these can be replaced with foreach.
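To illustrate the refactor this issue describes, a minimal Scala sketch (the Option value and the handler are hypothetical, not actual Kafka call sites):

```scala
// Hypothetical stand-in for a value looked up somewhere in core.
val brokerIdOpt: Option[Int] = Some(3)

// Before: pattern matching where the None case exists only to be ignored.
brokerIdOpt match {
  case Some(id) => println(s"broker $id")
  case None     => // unused branch
}

// After: foreach expresses the same intent without the dead branch.
brokerIdOpt.foreach(id => println(s"broker $id"))
```

Both forms run the handler only when the Option is non-empty; foreach simply drops the boilerplate None arm.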



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2017-05-02 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-4307:
-
Status: Patch Available  (was: In Progress)

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Balint Molnar
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4703) Test with two SASL_SSL listeners with different JAAS contexts

2017-05-02 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-4703:
-
Status: Patch Available  (was: In Progress)

> Test with two SASL_SSL listeners with different JAAS contexts
> -
>
> Key: KAFKA-4703
> URL: https://issues.apache.org/jira/browse/KAFKA-4703
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Balint Molnar
>  Labels: newbie
>
> [~rsivaram] suggested the following in 
> https://github.com/apache/kafka/pull/2406
> {quote}
> I think this feature allows two SASL_SSL listeners, one for external and one 
> for internal and the two can use different mechanisms and different JAAS 
> contexts. That makes the multi-mechanism configuration neater. I think it 
> will be useful to have an integration test for this, perhaps change 
> SaslMultiMechanismConsumerTest.
> {quote}
> And my reply:
> {quote}
> I think it's a bit tricky to support multiple listeners in 
> KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
> MultipleListenersWithSameSecurityProtocolTest.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5134) Replace zkClient.getChildren method with zkUtils.getChildren

2017-04-27 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5134 started by Balint Molnar.

> Replace zkClient.getChildren method with zkUtils.getChildren
> 
>
> Key: KAFKA-5134
> URL: https://issues.apache.org/jira/browse/KAFKA-5134
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor related classes to use the zkUtils.getChildren method instead of the 
> zkClient variant.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5134) Replace zkClient.getChildren method with zkUtils.getChildren

2017-04-27 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5134:


 Summary: Replace zkClient.getChildren method with 
zkUtils.getChildren
 Key: KAFKA-5134
 URL: https://issues.apache.org/jira/browse/KAFKA-5134
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Balint Molnar
Assignee: Balint Molnar


Refactor related classes to use the zkUtils.getChildren method instead of the 
zkClient variant.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5127) Replace pattern matching with foreach where the case None is unused

2017-04-26 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5127:


 Summary: Replace pattern matching with foreach where the case None 
is unused 
 Key: KAFKA-5127
 URL: https://issues.apache.org/jira/browse/KAFKA-5127
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Balint Molnar
Assignee: Balint Molnar
Priority: Minor


There are various places where pattern matching is used to match only one 
case while ignoring the None case; these can be replaced with foreach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5127) Replace pattern matching with foreach where the case None is unused

2017-04-26 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5127 started by Balint Molnar.

> Replace pattern matching with foreach where the case None is unused 
> 
>
> Key: KAFKA-5127
> URL: https://issues.apache.org/jira/browse/KAFKA-5127
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> There are various places where pattern matching is used to match only one 
> case while ignoring the None case; these can be replaced with foreach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5123) Refactor ZkUtils readData* methods

2017-04-25 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5123 started by Balint Molnar.

> Refactor ZkUtils readData* methods 
> ---
>
> Key: KAFKA-5123
> URL: https://issues.apache.org/jira/browse/KAFKA-5123
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> Usually only the data value is required but every readData method in the 
> ZkUtils returns a Tuple with the data and the stat.
> https://github.com/apache/kafka/pull/2888
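A sketch of the shape of the refactor (readData here is a hypothetical stand-in; the real ZkUtils.readData talks to ZooKeeper and returns a (data, Stat) tuple):

```scala
// Hypothetical stand-in for ZkUtils.readData: returns both the data and the stat.
def readData(path: String): (String, Int) = ("payload", 0)

// Today, call sites that only need the data still unpack the tuple:
val (data, _) = readData("/brokers/ids/0")

// A value-only variant, as this issue proposes, removes that boilerplate:
def readDataValue(path: String): String = readData(path)._1
```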



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5123) Refactor ZkUtils readData* methods

2017-04-25 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5123:


 Summary: Refactor ZkUtils readData* methods 
 Key: KAFKA-5123
 URL: https://issues.apache.org/jira/browse/KAFKA-5123
 Project: Kafka
  Issue Type: Bug
Reporter: Balint Molnar
Assignee: Balint Molnar
Priority: Minor


Usually only the data value is required but every readData method in the 
ZkUtils returns a Tuple with the data and the stat.

https://github.com/apache/kafka/pull/2888



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5103) Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5103:


 Summary: Refactor AdminUtils to use zkUtils methods instead of 
zkUtils.zkClient
 Key: KAFKA-5103
 URL: https://issues.apache.org/jira/browse/KAFKA-5103
 Project: Kafka
  Issue Type: Sub-task
  Components: admin
Reporter: Balint Molnar
Assignee: Balint Molnar


Replace zkUtils.zkClient.createPersistentSequential(seqNode, content) with 
zkUtils.createSequentialPersistentPath(seqNode, content).
The zkClient variant does not respect the ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5103) Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5103 started by Balint Molnar.

> Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient
> -
>
> Key: KAFKA-5103
> URL: https://issues.apache.org/jira/browse/KAFKA-5103
> Project: Kafka
>  Issue Type: Sub-task
>  Components: admin
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Replace zkUtils.zkClient.createPersistentSequential(seqNode, content) with 
> zkUtils.createSequentialPersistentPath(seqNode, content).
> The zkClient variant does not respect the ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work stopped] (KAFKA-5102) Refactor core components to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5102 stopped by Balint Molnar.

> Refactor core components to use zkUtils methods instead of zkUtils.zkClient
> ---
>
> Key: KAFKA-5102
> URL: https://issues.apache.org/jira/browse/KAFKA-5102
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor every create*, update*, getSequenceId methods.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5102) Refactor core components to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5102:
-
Component/s: (was: admin)
 core

> Refactor core components to use zkUtils methods instead of zkUtils.zkClient
> ---
>
> Key: KAFKA-5102
> URL: https://issues.apache.org/jira/browse/KAFKA-5102
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor every create*, update*, getSequenceId methods.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5102) Refactor core components to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-5102:
-
Summary: Refactor core components to use zkUtils methods instead of 
zkUtils.zkClient  (was: Refactor AdminUtils to use zkUtils methods instead of 
zkUtils.zkClient)

> Refactor core components to use zkUtils methods instead of zkUtils.zkClient
> ---
>
> Key: KAFKA-5102
> URL: https://issues.apache.org/jira/browse/KAFKA-5102
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor every create*, update*, getSequenceId methods.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5102) Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5102:


 Summary: Refactor AdminUtils to use zkUtils methods instead of 
zkUtils.zkClient
 Key: KAFKA-5102
 URL: https://issues.apache.org/jira/browse/KAFKA-5102
 Project: Kafka
  Issue Type: Bug
  Components: admin
Reporter: Balint Molnar
Assignee: Balint Molnar


Refactor every create*, update*, getSequenceId methods.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5102) Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient

2017-04-21 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5102 started by Balint Molnar.

> Refactor AdminUtils to use zkUtils methods instead of zkUtils.zkClient
> --
>
> Key: KAFKA-5102
> URL: https://issues.apache.org/jira/browse/KAFKA-5102
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>
> Refactor every create*, update*, getSequenceId methods.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5101) Remove KafkaController's incrementControllerEpoch method parameter

2017-04-21 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5101 started by Balint Molnar.

> Remove KafkaController's incrementControllerEpoch method parameter 
> ---
>
> Key: KAFKA-5101
> URL: https://issues.apache.org/jira/browse/KAFKA-5101
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Trivial
>
> KAFKA-4814 replaced the zkClient.createPersistent method with 
> zkUtils.createPersistentPath so the zkClient parameter is no longer required.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5101) Remove KafkaController's incrementControllerEpoch method parameter

2017-04-21 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-5101:


 Summary: Remove KafkaController's incrementControllerEpoch method 
parameter 
 Key: KAFKA-5101
 URL: https://issues.apache.org/jira/browse/KAFKA-5101
 Project: Kafka
  Issue Type: Bug
  Components: controller
Reporter: Balint Molnar
Assignee: Balint Molnar
Priority: Trivial


KAFKA-4814 replaced the zkClient.createPersistent method with 
zkUtils.createPersistentPath so the zkClient parameter is no longer required.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-20 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976432#comment-15976432
 ] 

Balint Molnar commented on KAFKA-4814:
--

Thanks [~rsivaram] for helping me out with this one :)

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Rajini Sivaram
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> By [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and [broker 
> configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not be setting any ACLs on ZooKeeper nodes, even 
> when JAAS config file is provisioned to broker. 
> Problem is that there is broker side logic, like one in 
> {{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when there's just JAAS config file provisioned to Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in {{org.apache.kafka.common.security}} package 
> of {{kafka-clients}} module, while {{zookeeper.set.acl}} is broker side only 
> configuration property.
> To make it possible without downtime to enable ZooKeeper authentication on 
> existing cluster, it should be possible to have all Kafka brokers in cluster 
> first authenticate to ZooKeeper cluster, without ACLs being set. Only once 
> all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> ZooKeeper cluster then ACLs can be started being set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5028) convert kafka controller to a single-threaded event queue model

2017-04-13 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967653#comment-15967653
 ] 

Balint Molnar commented on KAFKA-5028:
--

[~onurkaraman] I am so excited about this. Is there anything I can help with? :)

> convert kafka controller to a single-threaded event queue model
> ---
>
> Key: KAFKA-5028
> URL: https://issues.apache.org/jira/browse/KAFKA-5028
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>
> The goal of this ticket is to improve controller maintainability by 
> simplifying the controller's concurrency semantics. The controller code has a 
> lot of shared state between several threads using several concurrency 
> primitives. This makes the code hard to reason about.
> This ticket proposes we convert the controller to a single-threaded event 
> queue model. We add a new controller thread which processes events held in an 
> event queue. Note that this does not mean we get rid of all threads used by 
> the controller. We merely delegate all work that interacts with controller 
> local state to this single thread. With only a single thread accessing and 
> modifying the controller local state, we no longer need to worry about 
> concurrent access, which means we can get rid of the various concurrency 
> primitives used throughout the controller.
> Performance is expected to match existing behavior since the bulk of the 
> existing controller work today already happens sequentially in the ZkClient’s 
> single ZkEventThread.
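A minimal sketch of the single-threaded event queue model the ticket proposes (illustrative only; the event types and processing logic here are invented, not the actual controller code):

```scala
import java.util.concurrent.LinkedBlockingQueue

// Events carry their own processing logic; only the event thread runs it.
sealed trait ControllerEvent { def process(): Unit }

case class BrokerChange(brokerIds: Seq[Int]) extends ControllerEvent {
  // In the real design this would mutate controller-local state; since a
  // single thread calls process(), no concurrency primitives are needed.
  def process(): Unit = println(s"handling broker change: $brokerIds")
}

case object ShutdownEventThread extends ControllerEvent {
  def process(): Unit = ()
}

// The single controller thread: drains the queue and processes events in order.
class ControllerEventThread extends Thread {
  val queue = new LinkedBlockingQueue[ControllerEvent]()
  override def run(): Unit = {
    var event = queue.take()
    while (event != ShutdownEventThread) {
      event.process() // all controller work is serialized through this loop
      event = queue.take()
    }
  }
}
```

Producers (ZooKeeper watchers, request handlers) would only enqueue events, never touch controller state directly.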



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-12 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965664#comment-15965664
 ] 

Balint Molnar commented on KAFKA-4814:
--

[~rsivaram] I think changing the JaasUtils.isZkSecurityEnabled call to 
controllerContext.zkUtils.isSecure does the trick: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala#L81.
 But I am not 100% sure about that. On the other hand, maybe it is a good idea 
to wait until KAFKA-5028 is merged.

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Rajini Sivaram
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> By [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and [broker 
> configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not be setting any ACLs on ZooKeeper nodes, even 
> when JAAS config file is provisioned to broker. 
> Problem is that there is broker side logic, like one in 
> {{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when there's just JAAS config file provisioned to Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in {{org.apache.kafka.common.security}} package 
> of {{kafka-clients}} module, while {{zookeeper.set.acl}} is broker side only 
> configuration property.
> To make it possible without downtime to enable ZooKeeper authentication on 
> existing cluster, it should be possible to have all Kafka brokers in cluster 
> first authenticate to ZooKeeper cluster, without ACLs being set. Only once 
> all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> ZooKeeper cluster then ACLs can be started being set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-06 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-4814:


Assignee: Rajini Sivaram  (was: Balint Molnar)

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Rajini Sivaram
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> By [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and [broker 
> configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not be setting any ACLs on ZooKeeper nodes, even 
> when JAAS config file is provisioned to broker. 
> Problem is that there is broker side logic, like one in 
> {{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when there's just JAAS config file provisioned to Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in {{org.apache.kafka.common.security}} package 
> of {{kafka-clients}} module, while {{zookeeper.set.acl}} is broker side only 
> configuration property.
> To make it possible without downtime to enable ZooKeeper authentication on 
> existing cluster, it should be possible to have all Kafka brokers in cluster 
> first authenticate to ZooKeeper cluster, without ACLs being set. Only once 
> all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> ZooKeeper cluster then ACLs can be started being set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-06 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958759#comment-15958759
 ] 

Balint Molnar commented on KAFKA-4814:
--

[~rsivaram] Please help me with this. Sadly, I don't have time to deal with 
it. I started working on it, but I do not have a solution yet.

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> By [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and [broker 
> configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not be setting any ACLs on ZooKeeper nodes, even 
> when JAAS config file is provisioned to broker. 
> Problem is that there is broker side logic, like one in 
> {{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when there's just JAAS config file provisioned to Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in {{org.apache.kafka.common.security}} package 
> of {{kafka-clients}} module, while {{zookeeper.set.acl}} is broker side only 
> configuration property.
> To make it possible without downtime to enable ZooKeeper authentication on 
> existing cluster, it should be possible to have all Kafka brokers in cluster 
> first authenticate to ZooKeeper cluster, without ACLs being set. Only once 
> all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> ZooKeeper cluster then ACLs can be started being set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-03 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953384#comment-15953384
 ] 

Balint Molnar commented on KAFKA-4814:
--

[~ijuma] Something odd is happening here, or I am misunderstanding something. 
There is a ZkUtils constructor with an isZkSecurityEnabled parameter 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala#L80
 . We assign a value to this parameter from two different sources. First we 
use the zookeeper.set.acl value, for example in 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaServer.scala#L325
and we also use the JaasUtils.isZkSecurityEnabled method, for example in 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConfigCommand.scala#L61

I think these are separate things which we need to handle differently. Or am I 
missing something here?

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> By [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and [broker 
> configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not be setting any ACLs on ZooKeeper nodes, even 
> when JAAS config file is provisioned to broker. 
> Problem is that there is broker side logic, like one in 
> {{ZookeeperLeaderElector}} making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when there's just JAAS config file provisioned to Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in {{org.apache.kafka.common.security}} package 
> of {{kafka-clients}} module, while {{zookeeper.set.acl}} is broker side only 
> configuration property.
> To make it possible without downtime to enable ZooKeeper authentication on 
> existing cluster, it should be possible to have all Kafka brokers in cluster 
> first authenticate to ZooKeeper cluster, without ACLs being set. Only once 
> all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> ZooKeeper cluster then ACLs can be started being set.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work stopped] (KAFKA-4938) Creating a connector with missing name parameter throws a NullPointerException

2017-04-03 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4938 stopped by Balint Molnar.

> Creating a connector with missing name parameter throws a NullPointerException
> --
>
> Key: KAFKA-4938
> URL: https://issues.apache.org/jira/browse/KAFKA-4938
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Sönke Liebau
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
>
> Creating a connector via the REST API runs into a NullPointerException when 
> the name parameter is omitted from the request.
> {code}
> POST 127.0.0.1:8083/connectors
> {
>   "config": {
> "connector.class": "org.apache.kafka.connect.tools.MockSourceConnector",
> "tasks.max": "1",
> "topics": "test-topic"
>   }
> }
> {code}
> This results in a 500 return code, due to a NullPointerException thrown when 
> checking the name for slashes 
> [here|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L91]. 
> I believe this was introduced by the fix for 
> [KAFKA-4372|https://issues.apache.org/jira/browse/KAFKA-4372].
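A minimal sketch of the kind of guard the fix likely needs (hypothetical names; the real code lives in {{ConnectorsResource}}): reject a missing name explicitly before checking it for slashes, so the caller gets a clear validation error instead of an NPE.

```java
// Hypothetical sketch: validate the connector name before the slash check.
// checkName is an invented helper; the real fix belongs in ConnectorsResource.
final class ConnectorNameValidator {
    static String checkName(String name) {
        if (name == null || name.trim().isEmpty()) {
            // Previously this case fell through to name.contains("/") and threw an NPE.
            throw new IllegalArgumentException("Connector name is required");
        }
        if (name.contains("/")) {
            throw new IllegalArgumentException("Connector name must not contain '/'");
        }
        return name;
    }
}
```

An IllegalArgumentException here can then be mapped to a 4xx response instead of the 500 the NPE currently produces.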





[jira] [Work started] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-03-29 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4814 started by Balint Molnar.

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> According to the [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and the 
> [broker configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for the 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> {{false}} Kafka brokers should not set any ACLs on ZooKeeper nodes, even 
> when a JAAS config file is provisioned to the broker.
> The problem is that there is broker-side logic, such as in 
> {{ZookeeperLeaderElector}}, that uses {{JaasUtils#isZkSecurityEnabled}} 
> and does not respect this configuration property, resulting in ACLs being 
> set whenever a JAAS config file is provisioned to a Kafka broker, even while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in the {{org.apache.kafka.common.security}} 
> package of the {{kafka-clients}} module, while {{zookeeper.set.acl}} is a 
> broker-side-only configuration property.
> To make it possible to enable ZooKeeper authentication on an existing 
> cluster without downtime, all Kafka brokers in the cluster must be able to 
> authenticate to the ZooKeeper cluster first, without ACLs being set. Only 
> once all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> the ZooKeeper cluster can ACLs start being set.





[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-03-29 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946736#comment-15946736
 ] 

Balint Molnar commented on KAFKA-4814:
--

[~ijuma] Not yet, but I am going to start working on this today and I hope I 
can finish by the weekend. If this is too late for the release, feel free to 
reassign :)

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> According to the [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and the 
> [broker configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for the 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> {{false}} Kafka brokers should not set any ACLs on ZooKeeper nodes, even 
> when a JAAS config file is provisioned to the broker.
> The problem is that there is broker-side logic, such as in 
> {{ZookeeperLeaderElector}}, that uses {{JaasUtils#isZkSecurityEnabled}} 
> and does not respect this configuration property, resulting in ACLs being 
> set whenever a JAAS config file is provisioned to a Kafka broker, even while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in the {{org.apache.kafka.common.security}} 
> package of the {{kafka-clients}} module, while {{zookeeper.set.acl}} is a 
> broker-side-only configuration property.
> To make it possible to enable ZooKeeper authentication on an existing 
> cluster without downtime, all Kafka brokers in the cluster must be able to 
> authenticate to the ZooKeeper cluster first, without ACLs being set. Only 
> once all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> the ZooKeeper cluster can ACLs start being set.





[jira] [Commented] (KAFKA-4938) Creating a connector with missing name parameter throws a NullPointerException

2017-03-27 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15942770#comment-15942770
 ] 

Balint Molnar commented on KAFKA-4938:
--

[~sliebau]: Sure, I have only put a little effort into this one so far; I am 
not very familiar with JAX-RS, so it is mainly searching and learning.

> Creating a connector with missing name parameter throws a NullPointerException
> --
>
> Key: KAFKA-4938
> URL: https://issues.apache.org/jira/browse/KAFKA-4938
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Sönke Liebau
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
>
> Creating a connector via the REST API runs into a NullPointerException when 
> the name parameter is omitted from the request.
> {code}
> POST 127.0.0.1:8083/connectors
> {
>   "config": {
> "connector.class": "org.apache.kafka.connect.tools.MockSourceConnector",
> "tasks.max": "1",
> "topics": "test-topic"
>   }
> }
> {code}
> This results in a 500 return code, due to a NullPointerException thrown when 
> checking the name for slashes 
> [here|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L91]. 
> I believe this was introduced by the fix for 
> [KAFKA-4372|https://issues.apache.org/jira/browse/KAFKA-4372].





[jira] [Assigned] (KAFKA-4938) Creating a connector with missing name parameter throws a NullPointerException

2017-03-24 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-4938:


Assignee: Balint Molnar

> Creating a connector with missing name parameter throws a NullPointerException
> --
>
> Key: KAFKA-4938
> URL: https://issues.apache.org/jira/browse/KAFKA-4938
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Sönke Liebau
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
>
> Creating a connector via the REST API runs into a NullPointerException when 
> the name parameter is omitted from the request.
> {code}
> POST 127.0.0.1:8083/connectors
> {
>   "config": {
> "connector.class": "org.apache.kafka.connect.tools.MockSourceConnector",
> "tasks.max": "1",
> "topics": "test-topic"
>   }
> }
> {code}
> This results in a 500 return code, due to a NullPointerException thrown when 
> checking the name for slashes 
> [here|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L91]. 
> I believe this was introduced by the fix for 
> [KAFKA-4372|https://issues.apache.org/jira/browse/KAFKA-4372].





[jira] [Work started] (KAFKA-4938) Creating a connector with missing name parameter throws a NullPointerException

2017-03-24 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4938 started by Balint Molnar.

> Creating a connector with missing name parameter throws a NullPointerException
> --
>
> Key: KAFKA-4938
> URL: https://issues.apache.org/jira/browse/KAFKA-4938
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Sönke Liebau
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
>
> Creating a connector via the REST API runs into a NullPointerException when 
> the name parameter is omitted from the request.
> {code}
> POST 127.0.0.1:8083/connectors
> {
>   "config": {
> "connector.class": "org.apache.kafka.connect.tools.MockSourceConnector",
> "tasks.max": "1",
> "topics": "test-topic"
>   }
> }
> {code}
> This results in a 500 return code, due to a NullPointerException thrown when 
> checking the name for slashes 
> [here|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L91]. 
> I believe this was introduced by the fix for 
> [KAFKA-4372|https://issues.apache.org/jira/browse/KAFKA-4372].





[jira] [Assigned] (KAFKA-4855) Struct SchemaBuilder should not allow duplicate fields.

2017-03-24 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-4855:


Assignee: Balint Molnar

> Struct SchemaBuilder should not allow duplicate fields.
> ---
>
> Key: KAFKA-4855
> URL: https://issues.apache.org/jira/browse/KAFKA-4855
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Jeremy Custenborder
>Assignee: Balint Molnar
>  Labels: newbie
>
> I would expect this to fail at build() on the schema. It actually makes it 
> all the way to Struct.validate() and throws a cryptic error message. .field() 
> should throw an exception if a field name is already used.
> Repro:
> {code}
>   @Test
>   public void duplicateFields() {
> final Schema schema = SchemaBuilder.struct()
> .name("testing")
> .field("id", SchemaBuilder.string().doc("").build())
> .field("id", SchemaBuilder.string().doc("").build())
> .build();
> final Struct struct = new Struct(schema)
> .put("id", "testing");
> struct.validate();
>   }
> {code}
> {code}
> org.apache.kafka.connect.errors.DataException: Invalid value: null used for 
> required field at 
> org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:212)
>   at org.apache.kafka.connect.data.Struct.validate(Struct.java:232)
>   at 
> io.confluent.kafka.connect.jms.RecordConverterTest.duplicateFieldRepro(RecordConverterTest.java:289)
> {code}
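A sketch of the fail-fast check the builder could perform (FieldCollector is an invented stand-in for {{SchemaBuilder}}; the real fix would go into its field() method): detect the duplicate at definition time rather than surfacing the cryptic null-value error from validate() later.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Invented stand-in for SchemaBuilder: field() rejects duplicate names eagerly
// with a message that names the offending field.
final class FieldCollector {
    private final Map<String, String> fields = new LinkedHashMap<>();

    FieldCollector field(String name, String schemaType) {
        if (fields.containsKey(name)) {
            // Fail here, at schema-definition time, instead of deep inside validate().
            throw new IllegalArgumentException("Duplicate field: " + name);
        }
        fields.put(name, schemaType);
        return this;
    }

    int size() { return fields.size(); }
}
```

With such a check, the repro above would fail on the second .field("id", ...) call with an explicit "Duplicate field: id" message.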





[jira] [Work started] (KAFKA-4855) Struct SchemaBuilder should not allow duplicate fields.

2017-03-24 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4855 started by Balint Molnar.

> Struct SchemaBuilder should not allow duplicate fields.
> ---
>
> Key: KAFKA-4855
> URL: https://issues.apache.org/jira/browse/KAFKA-4855
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Jeremy Custenborder
>Assignee: Balint Molnar
>  Labels: newbie
>
> I would expect this to fail at build() on the schema. It actually makes it 
> all the way to Struct.validate() and throws a cryptic error message. .field() 
> should throw an exception if a field name is already used.
> Repro:
> {code}
>   @Test
>   public void duplicateFields() {
> final Schema schema = SchemaBuilder.struct()
> .name("testing")
> .field("id", SchemaBuilder.string().doc("").build())
> .field("id", SchemaBuilder.string().doc("").build())
> .build();
> final Struct struct = new Struct(schema)
> .put("id", "testing");
> struct.validate();
>   }
> {code}
> {code}
> org.apache.kafka.connect.errors.DataException: Invalid value: null used for 
> required field at 
> org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:212)
>   at org.apache.kafka.connect.data.Struct.validate(Struct.java:232)
>   at 
> io.confluent.kafka.connect.jms.RecordConverterTest.duplicateFieldRepro(RecordConverterTest.java:289)
> {code}





[jira] [Work stopped] (KAFKA-1954) Speed Up The Unit Tests

2017-03-01 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1954 stopped by Balint Molnar.

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Balint Molnar
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow Scala compile times, this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately, tests often don't use the right harness; they use a 
> lower-level harness than they should and manually create things. Usually the 
> cause is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure, I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are going to discard the brokers after shutdown anyway).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.
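Suggestion 3 above (starting servers and clients in parallel) could be sketched as follows, with Runnable standing in for whatever starts a single broker. All names here are invented; this is not the actual harness code.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Invented sketch: submit every broker's startup to a thread pool instead of
// starting brokers one after another, then wait for all of them to come up.
final class ParallelStartup {
    static void startAll(List<Runnable> brokerStarters) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(brokerStarters.size());
        brokerStarters.forEach(pool::submit);
        pool.shutdown();
        if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Brokers did not start in time");
        }
    }
}
```

For a 3-node cluster with ~600ms startup per broker, parallel startup bounds the wall-clock cost by the slowest broker rather than the sum of all three.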





[jira] [Work stopped] (KAFKA-1548) Refactor the "replica_id" in requests

2017-03-01 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1548 stopped by Balint Molnar.

> Refactor the "replica_id" in requests
> -
>
> Key: KAFKA-1548
> URL: https://issues.apache.org/jira/browse/KAFKA-1548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
> Fix For: 0.10.3.0
>
>
> Today many requests like fetch and offset have an integer replica_id 
> field. If the request is from a follower, it is the broker id of that 
> follower replica; if it is from a regular consumer, it can be one of two 
> values: "-1" for an ordinary consumer or "-2" for a debugging consumer. 
> Hence this replica_id field is used in two ways:
> 1) Logging for troubleshooting in request logs, which is helpful only 
> when the request is from a follower replica.
> 2) Deciding whether it is from a consumer or a replica, to handle the 
> request in logically different ways. For this purpose we do not really 
> care about the actual id value.
> We would probably like to make the following improvements:
> 1) Rename "replica_id" to something less confusing.
> 2) Change the request.toString() output based on the replica_id: whether 
> it is a non-negative integer (meaning a broker replica fetcher) or -1/-2 
> (meaning a regular consumer).
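Improvement 2 could amount to a small formatting helper along these lines (a sketch with invented names; the real change would live in the request classes' toString() methods):

```java
// Invented sketch of the -1/-2/broker-id convention described above.
final class ReplicaIdFormat {
    static String describe(int replicaId) {
        if (replicaId >= 0) return "follower broker " + replicaId;  // replica fetcher
        if (replicaId == -1) return "ordinary consumer";
        if (replicaId == -2) return "debugging consumer";
        return "unknown replica_id " + replicaId;
    }
}
```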





[jira] [Assigned] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-03-01 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-4814:


Assignee: Balint Molnar

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.10.3.0, 0.10.2.1
>
>
> According to the [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and the 
> [broker configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for the 
> {{zookeeper.set.acl}} configuration property, when this property is set to 
> {{false}} Kafka brokers should not set any ACLs on ZooKeeper nodes, even 
> when a JAAS config file is provisioned to the broker.
> The problem is that there is broker-side logic, such as in 
> {{ZookeeperLeaderElector}}, that uses {{JaasUtils#isZkSecurityEnabled}} 
> and does not respect this configuration property, resulting in ACLs being 
> set whenever a JAAS config file is provisioned to a Kafka broker, even while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in the {{org.apache.kafka.common.security}} 
> package of the {{kafka-clients}} module, while {{zookeeper.set.acl}} is a 
> broker-side-only configuration property.
> To make it possible to enable ZooKeeper authentication on an existing 
> cluster without downtime, all Kafka brokers in the cluster must be able to 
> authenticate to the ZooKeeper cluster first, without ACLs being set. Only 
> once all ZooKeeper clients (Kafka brokers and others) are authenticating to 
> the ZooKeeper cluster can ACLs start being set.





[jira] [Updated] (KAFKA-4703) Test with two SASL_SSL listeners with different JAAS contexts

2017-02-22 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar updated KAFKA-4703:
-
Description: 
[~rsivaram] suggested the following in https://github.com/apache/kafka/pull/2406

{quote}
I think this feature allows two SASL_SSL listeners, one for external and one 
for internal and the two can use different mechanisms and different JAAS 
contexts. That makes the multi-mechanism configuration neater. I think it will 
be useful to have an integration test for this, perhaps change 
SaslMultiMechanismConsumerTest.
{quote}

And my reply:

{quote}
I think it's a bit tricky to support multiple listeners in 
KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
MultipleListenersWithSameSecurityProtocolTest.
{quote}

  was:
[~rsivaram] suggested the following in 
https://github.com/apache/kafka/pull/2406:

{quote}
I think this feature allows two SASL_SSL listeners, one for external and one 
for internal and the two can use different mechanisms and different JAAS 
contexts. That makes the multi-mechanism configuration neater. I think it will 
be useful to have an integration test for this, perhaps change 
SaslMultiMechanismConsumerTest.
{quote}

And my reply:

{quote}
I think it's a bit tricky to support multiple listeners in 
KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
MultipleListenersWithSameSecurityProtocolTest.
{quote}


> Test with two SASL_SSL listeners with different JAAS contexts
> -
>
> Key: KAFKA-4703
> URL: https://issues.apache.org/jira/browse/KAFKA-4703
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Balint Molnar
>  Labels: newbie
>
> [~rsivaram] suggested the following in 
> https://github.com/apache/kafka/pull/2406
> {quote}
> I think this feature allows two SASL_SSL listeners, one for external and one 
> for internal and the two can use different mechanisms and different JAAS 
> contexts. That makes the multi-mechanism configuration neater. I think it 
> will be useful to have an integration test for this, perhaps change 
> SaslMultiMechanismConsumerTest.
> {quote}
> And my reply:
> {quote}
> I think it's a bit tricky to support multiple listeners in 
> KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
> MultipleListenersWithSameSecurityProtocolTest.
> {quote}





[jira] [Work started] (KAFKA-4703) Test with two SASL_SSL listeners with different JAAS contexts

2017-02-20 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4703 started by Balint Molnar.

> Test with two SASL_SSL listeners with different JAAS contexts
> -
>
> Key: KAFKA-4703
> URL: https://issues.apache.org/jira/browse/KAFKA-4703
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Balint Molnar
>  Labels: newbie
>
> [~rsivaram] suggested the following in 
> https://github.com/apache/kafka/pull/2406:
> {quote}
> I think this feature allows two SASL_SSL listeners, one for external and one 
> for internal and the two can use different mechanisms and different JAAS 
> contexts. That makes the multi-mechanism configuration neater. I think it 
> will be useful to have an integration test for this, perhaps change 
> SaslMultiMechanismConsumerTest.
> {quote}
> And my reply:
> {quote}
> I think it's a bit tricky to support multiple listeners in 
> KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
> MultipleListenersWithSameSecurityProtocolTest.
> {quote}





[jira] [Assigned] (KAFKA-4703) Test with two SASL_SSL listeners with different JAAS contexts

2017-01-27 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-4703:


Assignee: Balint Molnar

> Test with two SASL_SSL listeners with different JAAS contexts
> -
>
> Key: KAFKA-4703
> URL: https://issues.apache.org/jira/browse/KAFKA-4703
> Project: Kafka
>  Issue Type: Test
>Reporter: Ismael Juma
>Assignee: Balint Molnar
>  Labels: newbie
>
> [~rsivaram] suggested the following in 
> https://github.com/apache/kafka/pull/2406:
> {quote}
> I think this feature allows two SASL_SSL listeners, one for external and one 
> for internal and the two can use different mechanisms and different JAAS 
> contexts. That makes the multi-mechanism configuration neater. I think it 
> will be useful to have an integration test for this, perhaps change 
> SaslMultiMechanismConsumerTest.
> {quote}
> And my reply:
> {quote}
> I think it's a bit tricky to support multiple listeners in 
> KafkaServerTestHarness. Maybe it's easier to do the test you suggest in 
> MultipleListenersWithSameSecurityProtocolTest.
> {quote}





[jira] [Commented] (KAFKA-4679) Remove unstable markers from Connect APIs

2017-01-23 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15834402#comment-15834402
 ] 

Balint Molnar commented on KAFKA-4679:
--

[~ewencp] can I help with this?

> Remove unstable markers from Connect APIs
> -
>
> Key: KAFKA-4679
> URL: https://issues.apache.org/jira/browse/KAFKA-4679
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.10.2.0
>
>
> Connect has had a stable API for a while now and we are careful about 
> compatibility. It's safe to remove the unstable markers now.





[jira] [Work started] (KAFKA-4403) Update KafkaBasedLog to use new endOffsets consumer API

2016-11-27 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4403 started by Balint Molnar.

> Update KafkaBasedLog to use new endOffsets consumer API
> ---
>
> Key: KAFKA-4403
> URL: https://issues.apache.org/jira/browse/KAFKA-4403
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
>
> As of 0.10.1.0 and KIP-79 
> (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090) 
> KafkaConsumer can now fetch offset information about topic partitions. 
> Previously KafkaBasedLog had to use a seekToEnd + position approach to 
> determine end offsets. With the new APIs we can simplify this code.
> This isn't critical as the current code works fine, but it would be a nice 
> cleanup and simplification.
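The simplification can be illustrated against a tiny in-memory stand-in (FakeConsumer below is invented; the real API is {{KafkaConsumer}}, whose endOffsets call replaces the seekToEnd-then-position pattern). Partitions are modeled as plain strings to keep the sketch self-contained.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Invented in-memory stand-in contrasting the two patterns KafkaBasedLog could use.
final class FakeConsumer {
    private final Map<String, Long> logEndOffsets = new HashMap<>();
    private final Map<String, Long> positions = new HashMap<>();

    FakeConsumer(Map<String, Long> logEndOffsets) {
        this.logEndOffsets.putAll(logEndOffsets);
    }

    // Old pattern: mutate the consumer's position, then read it back per partition.
    void seekToEnd(Collection<String> tps) {
        tps.forEach(tp -> positions.put(tp, logEndOffsets.get(tp)));
    }

    long position(String tp) {
        return positions.get(tp);
    }

    // KIP-79-style pattern: one call, no position side effects.
    Map<String, Long> endOffsets(Collection<String> tps) {
        Map<String, Long> out = new HashMap<>();
        tps.forEach(tp -> out.put(tp, logEndOffsets.get(tp)));
        return out;
    }
}
```

The single endOffsets call is not only shorter but also avoids perturbing the consumer's fetch position, which is what makes the cleanup attractive.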





[jira] [Commented] (KAFKA-4403) Update KafkaBasedLog to use new endOffsets consumer API

2016-11-25 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15696141#comment-15696141
 ] 

Balint Molnar commented on KAFKA-4403:
--

Hi, can I work on this?

> Update KafkaBasedLog to use new endOffsets consumer API
> ---
>
> Key: KAFKA-4403
> URL: https://issues.apache.org/jira/browse/KAFKA-4403
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>  Labels: newbie
>
> As of 0.10.1.0 and KIP-79 
> (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090) 
> KafkaConsumer can now fetch offset information about topic partitions. 
> Previously KafkaBasedLog had to use a seekToEnd + position approach to 
> determine end offsets. With the new APIs we can simplify this code.
> This isn't critical as the current code works fine, but it would be a nice 
> cleanup and simplification.





[jira] [Assigned] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-23 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-4307:


Assignee: Balint Molnar  (was: Manasvi Gupta)

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Balint Molnar
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for consistency.





[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-21 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15683783#comment-15683783
 ] 

Balint Molnar commented on KAFKA-4307:
--

Hi [~manasvigupta], can I take this?

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for consistency.





[jira] [Work started] (KAFKA-1548) Refactor the "replica_id" in requests

2016-11-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1548 started by Balint Molnar.

> Refactor the "replica_id" in requests
> -
>
> Key: KAFKA-1548
> URL: https://issues.apache.org/jira/browse/KAFKA-1548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> Today many requests like fetch and offset have an integer replica_id 
> field. If the request is from a follower, it is the broker id of that 
> follower replica; if it is from a regular consumer, it can be one of two 
> values: "-1" for an ordinary consumer or "-2" for a debugging consumer. 
> Hence this replica_id field is used in two ways:
> 1) Logging for troubleshooting in request logs, which is helpful only 
> when the request is from a follower replica.
> 2) Deciding whether it is from a consumer or a replica, to handle the 
> request in logically different ways. For this purpose we do not really 
> care about the actual id value.
> We would probably like to make the following improvements:
> 1) Rename "replica_id" to something less confusing.
> 2) Change the request.toString() output based on the replica_id: whether 
> it is a non-negative integer (meaning a broker replica fetcher) or -1/-2 
> (meaning a regular consumer).





[jira] [Assigned] (KAFKA-1548) Refactor the "replica_id" in requests

2016-11-15 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-1548:


Assignee: Balint Molnar  (was: Gwen Shapira)

> Refactor the "replica_id" in requests
> -
>
> Key: KAFKA-1548
> URL: https://issues.apache.org/jira/browse/KAFKA-1548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> Today in many requests like fetch and offset we have a integer replica_id 
> field, if the request is from a follower consumer it is the broker id from 
> that follower replica, if it is from a regular consumer it could be one of 
> the two values: "-1" for ordinary consumer, or "-2" for debugging consumer. 
> Hence this replica_id field is used in two folds:
> 1) Logging for trouble shooting in request logs, which can be helpful only 
> when this is from a follower replica, 
> 2) Deciding if it is from the consumer or a replica to logically handle the 
> request in different ways. For this purpose we do not really care about the 
> actually id value.
> We probably would like to do the following improvements:
> 1) Rename "replica_id" to sth. less confusing?
> 2) Change the request.toString() function based on the replica_id, whether it 
> is a positive integer (meaning from a broker replica fetcher) or -1/-2 
> (meaning from a regular consumer).
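The two-fold use of replica_id described above can be sketched as follows. This is an illustrative Python sketch of the logic only, not Kafka's actual code; the function and constant names are made up, and only the -1/-2/non-negative convention comes from the issue description:

```python
# Sentinel values for the replica_id field, per the convention above.
ORDINARY_CONSUMER_ID = -1
DEBUGGING_CONSUMER_ID = -2

def is_from_follower(replica_id: int) -> bool:
    """A non-negative replica_id means the request came from a broker's
    replica fetcher; -1/-2 mean it came from a client consumer."""
    return replica_id >= 0

def describe_requester(replica_id: int) -> str:
    """Human-readable label for request logs, mirroring improvement 2) above."""
    if replica_id == ORDINARY_CONSUMER_ID:
        return "consumer"
    if replica_id == DEBUGGING_CONSUMER_ID:
        return "debugging consumer"
    return f"replica fetcher of broker {replica_id}"
```

This also illustrates why a rename makes sense: only `is_from_follower` actually cares about the value being a broker id.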



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3151) kafka-consumer-groups.sh fail with sasl enabled

2016-11-04 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15636247#comment-15636247
 ] 

Balint Molnar commented on KAFKA-3151:
--

[~linbao111] please create a properties file, for example groupprop.properties, 
and put the following line into it:
{code}
security.protocol=SASL_PLAINTEXT
{code} 
Use the following command:
{code}
./bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server 
slave1.otocyon.com:9092 --list --command-config groupprop.properties
{code}
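If the cluster's SASL setup uses Kerberos, the command-config file usually needs more than the protocol line. A hypothetical fuller example (property names are standard Kafka client configs; {{sasl.mechanism}} is only available from 0.10 onward, and values are placeholders for your environment):
{code}
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
{code}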

> kafka-consumer-groups.sh fail with sasl enabled 
> 
>
> Key: KAFKA-3151
> URL: https://issues.apache.org/jira/browse/KAFKA-3151
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-consumer-groups.sh --new-consumer  --bootstrap-server 
> slave1.otocyon.com:9092 --list
> Error while executing consumer group command Request METADATA failed on 
> brokers List(Node(-1, slave1.otocyon.com, 9092))
> java.lang.RuntimeException: Request METADATA failed on brokers List(Node(-1, 
> slave1.otocyon.com, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:93)
> at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:101)
> at 
> kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:122)
> at 
> kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:126)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:310)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:61)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> same error for:
> bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand  --bootstrap-server 
> slave16:9092,app:9092 --describe --group test-consumer-group  --new-consumer
> Error while executing consumer group command Request GROUP_COORDINATOR failed 
> on brokers List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> java.lang.RuntimeException: Request GROUP_COORDINATOR failed on brokers 
> List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findCoordinator(AdminClient.scala:78)
> at kafka.admin.AdminClient.describeGroup(AdminClient.scala:130)
> at 
> kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:152)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:314)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:84)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describe(ConsumerGroupCommand.scala:302)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:63)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4318) Migrate ProducerSendTest to the new consumer

2016-10-19 Thread Balint Molnar (JIRA)
Balint Molnar created KAFKA-4318:


 Summary: Migrate ProducerSendTest to the new consumer
 Key: KAFKA-4318
 URL: https://issues.apache.org/jira/browse/KAFKA-4318
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Balint Molnar
Assignee: Balint Molnar
Priority: Minor


BaseProducerSendTest contains a 
TODO: "we need to migrate to new consumers when 0.9 is final"




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4318) Migrate ProducerSendTest to the new consumer

2016-10-19 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4318 started by Balint Molnar.

> Migrate ProducerSendTest to the new consumer
> 
>
> Key: KAFKA-4318
> URL: https://issues.apache.org/jira/browse/KAFKA-4318
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> BaseProducerSendTest contains a 
> TODO: "we need to migrate to new consumers when 0.9 is final"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1954) Speed Up The Unit Tests

2016-09-28 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-1954:


Assignee: Balint Molnar  (was: Sriharsha Chintalapani)

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Balint Molnar
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately, tests often don't use the right harness: they use a 
> lower-level harness than they should and create things manually. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure; I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are going to discard the brokers after shutdown anyway).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.
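The ~34 second figure follows directly from the timings quoted above; a quick check (assuming ZooKeeper, like the brokers, is started fresh for each test, and ignoring shutdown cost for ZooKeeper since none is given):

```python
# Per-component costs from the issue description, in milliseconds.
ZK_STARTUP_MS = 100
BROKER_STARTUP_MS = 600
BROKER_SHUTDOWN_MS = 500

def harness_overhead_ms(num_tests: int, num_brokers: int) -> int:
    """Fixed start/stop cost when each test boots its own cluster."""
    per_test = ZK_STARTUP_MS + num_brokers * (BROKER_STARTUP_MS + BROKER_SHUTDOWN_MS)
    return num_tests * per_test

# 10 tests, 3-broker cluster each: 10 * (100 + 3 * 1100) = 34,000 ms
print(harness_overhead_ms(10, 3) / 1000, "seconds")  # → 34.0 seconds
```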



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507561#comment-15507561
 ] 

Balint Molnar commented on KAFKA-1954:
--

Thanks, [~ijuma].
I realized nearly every test case recreates the server infra (kafka/zookeeper) 
before itself even if it's not needed, so first I would like to refactor the 
classes to restart the infra only the required times.

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately, tests often don't use the right harness: they use a 
> lower-level harness than they should and create things manually. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure; I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are going to discard the brokers after shutdown anyway).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507430#comment-15507430
 ] 

Balint Molnar commented on KAFKA-1954:
--

[~sriharsha] if you are not working on this, do you mind if I give it a try?

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately, tests often don't use the right harness: they use a 
> lower-level harness than they should and create things manually. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure; I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are going to discard the brokers after shutdown anyway).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2081) testUncleanLeaderElectionEnabledByTopicOverride transient failure

2016-05-31 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307492#comment-15307492
 ] 

Balint Molnar commented on KAFKA-2081:
--

[~junrao] I think this one is not happening any more; I cannot reproduce it 
on my local machine.

> testUncleanLeaderElectionEnabledByTopicOverride transient failure
> -
>
> Key: KAFKA-2081
> URL: https://issues.apache.org/jira/browse/KAFKA-2081
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>  Labels: transient-unit-test-failure
>
> Saw the following failure.
> kafka.integration.UncleanLeaderElectionTest > 
> testUncleanLeaderElectionEnabledByTopicOverride FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.integration.UncleanLeaderElectionTest.verifyUncleanLeaderElectionEnabled(UncleanLeaderElectionTest.scala:179)
> at 
> kafka.integration.UncleanLeaderElectionTest.testUncleanLeaderElectionEnabledByTopicOverride(UncleanLeaderElectionTest.scala:135)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1573) Transient test failures on LogTest.testCorruptLog

2016-05-31 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307481#comment-15307481
 ] 

Balint Molnar commented on KAFKA-1573:
--

[~gwenshap] I think this one is not failing any more; I tried to reproduce it 
on my local machine, but it always passed for me.

> Transient test failures on LogTest.testCorruptLog
> -
>
> Key: KAFKA-1573
> URL: https://issues.apache.org/jira/browse/KAFKA-1573
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> Here is an example of the test failure trace:
> junit.framework.AssertionFailedError: expected:<87> but was:<68>
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.failNotEquals(Assert.java:277)
>   at junit.framework.Assert.assertEquals(Assert.java:64)
>   at junit.framework.Assert.assertEquals(Assert.java:130)
>   at junit.framework.Assert.assertEquals(Assert.java:136)
>   at 
> kafka.log.LogTest$$anonfun$testCorruptLog$1.apply$mcVI$sp(LogTest.scala:615)
>   at 
> scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
>   at 
> scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
>   at kafka.log.LogTest.testCorruptLog(LogTest.scala:595)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
>   at 
> org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
>   at 
> org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
>   at 
> org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
>   at 
> org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
>   at 
> org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:71)
>   at 
> org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
>   at 
> org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
>   at 
> org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
>   at 
> org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at $Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
>   at 
> 

[jira] [Commented] (KAFKA-1534) transient unit test failure in testBasicPreferredReplicaElection

2016-05-31 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307435#comment-15307435
 ] 

Balint Molnar commented on KAFKA-1534:
--

[~nehanarkhede] I think this issue has already been fixed, because I cannot 
reproduce it on my machine.

> transient unit test failure in testBasicPreferredReplicaElection
> 
>
> Key: KAFKA-1534
> URL: https://issues.apache.org/jira/browse/KAFKA-1534
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Abhishek Sharma
>  Labels: newbie, transient-unit-test-failure
>
> Saw the following transient failure. 
> kafka.admin.AdminTest > testBasicPreferredReplicaElection FAILED
> junit.framework.AssertionFailedError: Timing out after 5000 ms since 
> leader is not elected or changed for partition [test,1]
> at junit.framework.Assert.fail(Assert.java:47)
> at 
> kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:542)
> at 
> kafka.admin.AdminTest.testBasicPreferredReplicaElection(AdminTest.scala:310)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)