Build failed in Jenkins: kafka-1.1-jdk7 #139

2018-06-01 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6760: Fix response logging in the Controller (#4834)

--
[...truncated 226.15 KB...]

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserQuotaTest > testThrottledRequest STARTED

kafka.api.UserQuotaTest > testThrottledRequest PASSED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic STARTED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero STARTED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #2692

2018-06-01 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6760: Fix response logging in the Controller (#4834)

--
[...truncated 463.54 KB...]
kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumVerifyOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumVerifyOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 

[jira] [Created] (KAFKA-6983) Error while deleting segments - The process cannot access the file because it is being used by another process

2018-06-01 Thread wade wu (JIRA)
wade wu created KAFKA-6983:
--

 Summary: Error while deleting segments - The process cannot access 
the file because it is being used by another process
 Key: KAFKA-6983
 URL: https://issues.apache.org/jira/browse/KAFKA-6983
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 1.1.0
 Environment: Windows 10
Reporter: wade wu


..

[2018-06-01 17:00:07,566] ERROR Error while deleting segments for test4-1 in dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: D:\data\Kafka\kafka-logs\test4-1\.log -> D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process cannot access the file because it is being used by another process.

 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
 at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1601)
 at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:1588)
 at kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
 at kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
 at kafka.log.Log.deleteSegments(Log.scala:1161)
 at kafka.log.Log.deleteOldSegments(Log.scala:1156)
 at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
 at kafka.log.Log.deleteOldSegments(Log.scala:1222)
 at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
 at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
 at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
 at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
 at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
 at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.FileSystemException: D:\data\Kafka\kafka-logs\test4-1\.log -> D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process cannot access the file because it is being used by another process.

 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
 ... 32 more

 

..
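
A likely mechanism behind this error (an assumption on the editor's part, not
stated in the report) is that a live memory mapping keeps an open handle on the
segment or index file, and Windows refuses to rename a mapped file. A
standalone sketch that reproduces the same FileSystemException on Windows
(file names are hypothetical):

{code:java}
// Sketch only: demonstrates that renaming a memory-mapped file on Windows
// fails with "The process cannot access the file because it is being used
// by another process", while the same rename succeeds on Linux.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MappedRenameRepro {
    public static void main(String[] args) throws Exception {
        Path log = Paths.get("segment.log");   // hypothetical segment file
        Files.write(log, new byte[4096]);
        try (RandomAccessFile raf = new RandomAccessFile(log.toFile(), "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer mmap = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            mmap.put(0, (byte) 1); // touch the mapping so it is live
            // On Windows this throws java.nio.file.FileSystemException while
            // the mapping is alive; on Linux the rename succeeds.
            Files.move(log, Paths.get("segment.log.deleted"),
                       StandardCopyOption.ATOMIC_MOVE);
        }
    }
}
{code}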



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-01 Thread wade wu (JIRA)
wade wu created KAFKA-6982:
--

 Summary: java.lang.ArithmeticException: / by zero
 Key: KAFKA-6982
 URL: https://issues.apache.org/jira/browse/KAFKA-6982
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 1.1.0
 Environment: Environment: Windows 10. 

Reporter: wade wu


The producer keeps sending messages to Kafka while Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due to Corrupt index found, index file (D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due to Corrupt index found, index file (D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has non-zero size but the last offset is 0 which is no greater than the base offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection (kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection (kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection (kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..
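
For reference, the trace points at the acceptor's round-robin selection of a
network processor; a modulo over an empty processor list yields exactly this
/ by zero. An illustrative sketch of the pattern and a guard (not Kafka's
actual code):

{code:java}
// Illustration only: round-robin dispatch like "processors(current % size)"
// divides by zero once the processor list becomes empty, e.g. after all
// processors have been shut down.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RoundRobinSketch {
    private int current = 0;

    String next(List<String> processors) {
        if (processors.isEmpty()) {
            // Guard: without this check the modulo below throws
            // java.lang.ArithmeticException: / by zero
            throw new IllegalStateException("no processors available");
        }
        String p = processors.get(current % processors.size());
        current++;
        return p;
    }

    public static void main(String[] args) {
        RoundRobinSketch rr = new RoundRobinSketch();
        System.out.println(rr.next(Arrays.asList("processor-0", "processor-1")));
        rr.next(Collections.emptyList()); // IllegalStateException, not / by zero
    }
}
{code}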



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6965) log4j:WARN log messages printed when running kafka-console-producer OOB

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6965.

Resolution: Not A Bug

> log4j:WARN log messages printed when running kafka-console-producer OOB
> ---
>
> Key: KAFKA-6965
> URL: https://issues.apache.org/jira/browse/KAFKA-6965
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: newbie
>
> This error message is presented when running `bin/kafka-console-producer` out 
> of the box.  
> {noformat}
> log4j:WARN No appenders could be found for logger 
> (kafka.utils.Log4jControllerRegistration$).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {noformat}
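
The warning only means that no log4j 1.2 configuration was found on the tool's
classpath; the shipped scripts normally point the tools at
config/tools-log4j.properties via the KAFKA_LOG4J_OPTS environment variable. A
minimal programmatic sketch that satisfies log4j and makes the warning
disappear (illustration only, not the project's recommended fix):

{code:java}
// Minimal log4j 1.2 bootstrap: once the root logger has an appender, the
// "log4j:WARN No appenders could be found" lines no longer appear.
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jBootstrap {
    public static void main(String[] args) {
        BasicConfigurator.configure();               // root logger -> console appender
        Logger.getRootLogger().setLevel(Level.WARN); // keep tool output quiet
        Logger.getLogger("kafka.utils.Log4jControllerRegistration$")
              .warn("log4j is now configured");      // routed to the console
    }
}
{code}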



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6760) responses not logged properly in controller

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6760.

   Resolution: Fixed
Fix Version/s: 1.1.1
   2.0.0

> responses not logged properly in controller
> ---
>
> Key: KAFKA-6760
> URL: https://issues.apache.org/jira/browse/KAFKA-6760
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Mickael Maison
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0, 1.1.1
>
>
> Saw the following logging in controller.log. We need to log the 
> StopReplicaResponse properly in KafkaController.
> [2018-04-05 14:38:41,878] DEBUG [Controller id=0] Delete topic callback 
> invoked for org.apache.kafka.common.requests.StopReplicaResponse@263d40c 
> (kafka.controller.K
> afkaController)
> It seems that the same issue exists for LeaderAndIsrResponse as well.
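
The StopReplicaResponse@263d40c in the log line above is the default
Object.toString output (class name plus hash code), which is what gets printed
when the raw response object is logged without a useful toString. A minimal
illustration of the pitfall (stand-in classes, not the controller's actual
code):

{code:java}
// Why the controller log shows "StopReplicaResponse@263d40c": logging an
// object whose class does not override toString() prints ClassName@hexHash.
public class ToStringPitfall {
    static class StopReplicaResponseLike { }  // stand-in, no toString() override

    static class FixedResponseLike {
        @Override
        public String toString() {
            return "FixedResponseLike(errorCount=0, partitions=[topic-0])";
        }
    }

    public static void main(String[] args) {
        System.out.println("Delete topic callback invoked for "
                + new StopReplicaResponseLike());
        // -> ...invoked for ToStringPitfall$StopReplicaResponseLike@<hash>
        System.out.println("Delete topic callback invoked for "
                + new FixedResponseLike());
        // -> ...invoked for FixedResponseLike(errorCount=0, partitions=[topic-0])
    }
}
{code}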



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6981) Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect Clusters

2018-06-01 Thread Arjun Satish (JIRA)
Arjun Satish created KAFKA-6981:
---

 Summary: Missing Connector Config 
(errors.deadletterqueue.topic.name) kills Connect Clusters
 Key: KAFKA-6981
 URL: https://issues.apache.org/jira/browse/KAFKA-6981
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Arjun Satish
Assignee: Arjun Satish
 Fix For: 2.0.0


The trunk version of AK currently (and incorrectly) tries to read the property 
(errors.deadletterqueue.topic.name) when starting a sink connector. This fails 
no matter what the contents of the connector config are: ConnectorConfig does 
not define this property, so any call to getString for it throws a 
ConfigException. 
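
The failure is easy to demonstrate in isolation: AbstractConfig.getString
throws a ConfigException for any key that its ConfigDef does not define,
regardless of what the user supplied. A sketch mirroring the situation (not
the Connect code itself):

{code:java}
// Sketch: asking an AbstractConfig for a key its ConfigDef never defined
// throws ConfigException, no matter what the user's connector config says.
import java.util.Collections;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class UndefinedKeyDemo {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef();  // stand-in for a ConfigDef missing the DLQ property
        AbstractConfig config = new AbstractConfig(def, Collections.emptyMap());
        try {
            config.getString("errors.deadletterqueue.topic.name");
        } catch (ConfigException e) {
            System.out.println("getString on an undefined key throws: " + e.getMessage());
        }
    }
}
{code}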



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-1.0-jdk7 #194

2018-06-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6925: fix parentSensors memory leak (#5108) (#5119)

--
[...truncated 373.13 KB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

[DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2018-06-01 Thread Arseniy Tashoyan
Hi,

I have just created KIP that proposes enhancements to GetOffsetShell
command line tool.
KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-308%3A+GetOffsetShell%3A+new+KafkaConsumer+API%2C+support+for+multiple+topics%2C+minimize+the+number+of+requests+to+server

PR: https://github.com/apache/kafka/pull/3051

Suggestions are welcome.

Regards,
Arseniy


Someone to review KAFKA-6919, one line change for faulty documentation

2018-06-01 Thread Koen De Groote
Greetings,

Poking for someone to have a quick look at this; it's a one-line change. I
noticed the documentation about trogdor was pointing to a non-existing
folder.

Ticket: https://issues.apache.org/jira/browse/KAFKA-6919

PR: https://github.com/apache/kafka/pull/5040

Thanks.


Re: [DISCUSS] KIP-280: Enhanced log compaction

2018-06-01 Thread Guozhang Wang
Hello Luis,

Please feel free to continue with the voting process, as there seem to be no
further comments on this thread (I have synced with Jun and Ismael
separately offline and they agree with the approach of adding the
fields to the offset map for all cases).

We can continue reviewing the PR while voting on the thread so
that it can get into trunk earlier for the next release.



Guozhang


On Mon, May 28, 2018 at 11:04 AM, Matthias J. Sax wrote:

> Luis,
>
> this week is feature freeze for the upcoming 2.0 release and most people
> focus on getting their PR merged. Thus, this and the next week (until
> code freeze) KIPs for 2.1 are not a high priority for most people.
>
> Please bear with us. Thanks for your understanding.
>
>
> -Matthias
>
> On 5/28/18 5:21 AM, Luís Cabral wrote:
> >  Hi Guozhang,
> >
> > It doesn't look like there will be much feedback here.
> > Is it alright if I just update the spec back to a standardized behaviour
> and move this along?
> >
> > Cheers, Luis
> > On Thursday, May 24, 2018, 11:20:01 AM GMT+2, Luis Cabral <luis_cab...@yahoo.com> wrote:
> >
> >  Hi Jun / Ismael,
> >
> > Any chance to get your opinion on this?
> > Thanks in advance!
> >
> > Regards,
> > Luís
> >
> >> On 22 May 2018, at 17:30, Guozhang Wang  wrote:
> >>
> >> Hello Luís,
> >>
> >> While reviewing your PR I realized my previous calculation on the memory
> >> usage was incorrect: in fact, in the current implementation, each entry in
> >> the memory-bounded cache is 16 (default MD5 hash digest length) + 8 (long
> >> type) = 24 bytes, and if we add the long-typed version value it is 32
> >> bytes. I.e. each entry will be increased by 33%, not doubled.
> >>
> >> After redoing the math I'm a bit leaning towards just adding this entry
> >> for all cases rather than treating the timestamp differently from the
> >> others (sorry for being back and forth, but I just want to make sure
> >> we've got a good balance between efficiency and semantic consistency).
> >> I've also chatted with Jun and Ismael about this (cc'ed), and maybe you
> >> guys can chime in here as well.
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Tue, May 22, 2018 at 6:45 AM, Luís Cabral wrote:
> >>
> >>> Hi Matthias / Guozhang,
> >>>
> >>> Were the questions clarified?
> >>> Please feel free to add more feedback, otherwise it would be nice to
> move
> >>> this topic onwards 
> >>>
> >>> Kind Regards,
> >>> Luís Cabral
> >>>
> >>> From: Guozhang Wang
> >>> Sent: 09 May 2018 20:00
> >>> To: dev@kafka.apache.org
> >>> Subject: Re: [DISCUSS] KIP-280: Enhanced log compaction
> >>>
> >>> I have thought about consistency in strategy vs. practical concerns
> >>> about storage convenience and its impact on compaction effectiveness.
> >>>
> >>> The difference between the timestamp and the header key-value pairs is
> >>> that for the latter, as I mentioned before, "it is arguably out of
> >>> Kafka's control, and indeed users may (mistakenly) generate many records
> >>> with the same key and the same header value." So giving up tie breakers
> >>> may result in very poor compaction effectiveness when it happens, while
> >>> for timestamps the likelihood of this is considered very small.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>>
> >>> On Sun, May 6, 2018 at 8:55 PM, Matthias J. Sax wrote:
> >>>
>  Thanks.
> 
>  To reverse the question: if this argument holds, why does it not apply
>  to the case when the header key is used as compaction attribute?
> 
>  I am not against keeping both records in case timestamps are equal,
> but
>  shouldn't we apply the same strategy for all cases and don't use
> offset
>  as tie-breaker at all?
> 
> 
>  -Matthias
> 
> > On 5/6/18 8:47 PM, Guozhang Wang wrote:
> > Hello Matthias,
> >
> > The related discussion was in the PR:
> > https://github.com/apache/kafka/pull/4822#discussion_r184588037
> >
> > The concern is that, to use the offset as tie breaker, we need to double
> > the size of each entry in the bounded compaction cache, and hence largely
> > reduce the effectiveness of the compaction itself. Since with millisecond
> > timestamps the scenario of ties with the same key is expected to be
> > small, I think it would be a reasonable tradeoff to make.
> >
> >
> > Guozhang
> >
> > On Sun, May 6, 2018 at 9:37 AM, Matthias J. Sax <matth...@confluent.io> wrote:
> >
> >> Hi,
> >>
> >> I just updated myself on this KIP. One question (maybe it was
> >>> discussed
> >> and I missed it). What is the motivation to not use the offset as
> tie
> >> breaker for the "timestamp" case? Isn't this inconsistent behavior?
> >>
> >>
> >> -Matthias
> >>
> >>
> >>> On 5/2/18 2:07 PM, Guozhang Wang wrote:
> >>> Hello Luís,
> >>>
> >>> Sorry for the late reply.
> >>>
> >>> My understanding is that 

[jira] [Resolved] (KAFKA-5424) KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in cluster

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5424.
--
Resolution: Fixed

This has been fixed via KAFKA-3396

> KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in 
> cluster
> -
>
> Key: KAFKA-5424
> URL: https://issues.apache.org/jira/browse/KAFKA-5424
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Mike Fagan
>Assignee: Mickael Maison
>Priority: Major
>
> KafkaConsumer.listTopics() internally calls 
> Fetcher.getAllTopicMetadata(timeout), and this method will throw a 
> TopicAuthorizationException when there exists an unauthorized topic in the 
> cluster. 
> This behavior runs counter to the API docs and makes listTopics() unusable 
> except in the case where the consumer is authorized for every single topic in 
> the cluster. 
> A potentially better approach is to have Fetcher implement a new method 
> getAuthorizedTopicMetadata(timeout) and have KafkaConsumer call this method 
> instead of getAllTopicMetadata(timeout) from within KafkaConsumer.listTopics()
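
Until such a fix, callers can only work around the behaviour defensively,
e.g. by catching the authorization error. An illustrative sketch (not the
KAFKA-3396 change itself):

{code:java}
// Defensive workaround sketch for the behaviour described above: listTopics()
// can surface TopicAuthorizationException when unauthorized topics exist.
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.TopicAuthorizationException;

public class ListTopicsWorkaround {
    static Map<String, List<PartitionInfo>> safeListTopics(KafkaConsumer<?, ?> consumer) {
        try {
            return consumer.listTopics();
        } catch (TopicAuthorizationException e) {
            // A single unauthorized topic in the cluster poisons the whole call.
            System.err.println("Not authorized for topics: " + e.unauthorizedTopics());
            return Collections.emptyMap();
        }
    }
}
{code}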



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-231: Improve the Required ACL of ListGroups API

2018-06-01 Thread Vahid S Hashemian
I'm bumping this vote thread up as the KIP requires only one binding +1 to 
pass.
The KIP is very similar in nature to the recently approved KIP-277 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-277+-+Fine+Grained+ACL+for+CreateTopics+API
) and proposes a small improvement to make APIs' minimum required 
permissions more consistent.

Thanks.
--Vahid




From:   Vahid S Hashemian/Silicon Valley/IBM
To: dev 
Date:   12/19/2017 11:30 AM
Subject: [VOTE] KIP-231: Improve the Required ACL of ListGroups API


I believe the concerns on this KIP have been addressed so far.
Therefore, I'd like to start a vote.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-231%3A+Improve+the+Required+ACL+of+ListGroups+API

Thanks.
--Vahid





[jira] [Resolved] (KAFKA-5304) Kafka Producer throwing infinite NullPointerExceptions

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5304.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Kafka Producer throwing infinite NullPointerExceptions
> --
>
> Key: KAFKA-5304
> URL: https://issues.apache.org/jira/browse/KAFKA-5304
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
> Environment: RedHat Enterprise Linux 6.8
>Reporter: Pranay Kumar Chaudhary
>Priority: Major
>
> 2017-05-22 11:38:56,918 LL="ERROR" TR="kafka-producer-network-thread | 
> application-name.hostname.com" LN="o.a.k.c.p.i.Sender"  Uncaught error in 
> kafka producer I/O thread:
> java.lang.NullPointerException: null
> Continuously getting this error in logs which is filling up the disk space. 
> Not able to get a stack trace to pinpoint the source of the error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4751) kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4751.
--
Resolution: Fixed

This is addressed by KIP-266. 

> kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.
> 
>
> Key: KAFKA-4751
> URL: https://issues.apache.org/jira/browse/KAFKA-4751
> Project: Kafka
>  Issue Type: Bug
> Environment: kafka-clients-0.9.0.2.4.2.11-1 java based client
>Reporter: Avinash Kumar Gaur
>Priority: Major
>
> While running a consumer with kafka-clients-0.9.0.2.4.2.11-1.jar and connecting 
> directly to the broker, the Kafka consumer does not throw any exception if the 
> broker is down.
> 1) Create a client with kafka-clients-0.9.0.2.4.2.11-1.jar.
> 2) Do not start the Kafka broker.
> 3) Start the Kafka consumer with the required properties.
> Observation - As you may see, the consumer does not throw any exception even if 
> the broker is down.
> Expected - It should throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4571) Consumer fails to retrieve messages if started before producer

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4571.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Consumer fails to retrieve messages if started before producer
> --
>
> Key: KAFKA-4571
> URL: https://issues.apache.org/jira/browse/KAFKA-4571
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.1
> Environment: Ubuntu Desktop 16.04 LTS, Oracle Java 8 1.8.0_101, Core 
> i7 4770K
>Reporter: Sergiu Hlihor
>Priority: Major
>
> In a configuration where the topic was never created before, starting the 
> consumer before the producer leads to no message being consumed 
> (KafkaConsumer.poll() always returns an instance of ConsumerRecords with 0 
> count). 
> Starting another consumer on the same group and same topic after messages were 
> produced still does not consume them. Starting another consumer with another 
> groupId appears to be working.
> In the consumer logs I see: WARN  NetworkClient - Error while fetching 
> metadata with correlation id 1 : {measurements021=LEADER_NOT_AVAILABLE} 
> Both producer and consumer were launched from inside same JVM. 
> The configuration used is the standard one found in Kafka distribution. If 
> this is a configuration issue, please suggest any change that I should do.
> Thank you



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3822.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while 
> connected
> --
>
> Key: KAFKA-3822
> URL: https://issues.apache.org/jira/browse/KAFKA-3822
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.0
> Environment: x86 Red Hat 6 (1 broker running zookeeper locally, 
> client running on a separate server)
>Reporter: Alexander Cook
>Assignee: Ashish Singh
>Priority: Major
>
> I am using the KafkaConsumer java client to consume messages. My application 
> shuts down smoothly if I am connected to a Kafka broker, or if I never 
> succeed at connecting to a Kafka broker, but if the broker is shut down while 
> my consumer is connected to it, consumer.close() hangs indefinitely. 
> Here is how I reproduce it: 
> 1. Start 0.9.0.1 Kafka Broker
> 2. Start consumer application and consume messages
> 3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
> 4. Try to stop application...hangs at consumer.close() indefinitely. 
> I also see this same behavior using 0.10 broker and client. 
> This is my first bug reported to Kafka, so please let me know if I should be 
> following a different format. Thanks! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3457) KafkaConsumer.committed(...) hangs forever if port number is wrong

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3457.
--
Resolution: Fixed

This is addressed by KIP-266. 

> KafkaConsumer.committed(...) hangs forever if port number is wrong
> --
>
> Key: KAFKA-3457
> URL: https://issues.apache.org/jira/browse/KAFKA-3457
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Harald Kirsch
>Assignee: Liquan Pei
>Priority: Major
>
> Create a KafkaConsumer with default settings but with a wrong host:port 
> setting for bootstrap.servers. Have it in some consumer group, do not 
> subscribe or assign partitions.
> Then call .committed(...) for a topic/partition combination a few times. It 
> will hang on the 2nd or 3rd call forever. In the debug log you will see 
> that it repeats connections all over again. I waited many minutes and it 
> never came back to throw an Exception.
> The connection problems should at least show up at the WARNING log level, 
> and should likely throw an exception eventually.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3011) Consumer.poll(0) blocks if Kafka not accessible

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3011.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Consumer.poll(0) blocks if Kafka not accessible
> ---
>
> Key: KAFKA-3011
> URL: https://issues.apache.org/jira/browse/KAFKA-3011
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: all
>Reporter: Eric Bowman
>Priority: Major
>
> Because of this loop in ConsumerNetworkClient:
> {code:java}
> public void awaitMetadataUpdate() {
> int version = this.metadata.requestUpdate();
> do {
> poll(Long.MAX_VALUE);
> } while (this.metadata.version() == version);
> }
> {code}
> ...if Kafka is not reachable (perhaps not running, or other network issues, 
> unclear), then KafkaConsumer.poll(0) will block until it's available.
> I suspect that better behavior would be an exception
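
KIP-266, referenced in the resolution above, added time-bounded overloads so
that such calls can no longer block forever. A sketch of the Kafka 2.0+
behaviour (broker address hypothetical):

{code:java}
// Sketch of the KIP-266 remedy: poll(Duration) (Kafka 2.0+) returns after the
// timeout even when no broker is reachable, instead of blocking indefinitely.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedPollDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "unreachable-host:9092"); // hypothetical
        props.put("group.id", "demo");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            // Returns an empty batch after ~5s; poll(0) under the old API
            // could spin in awaitMetadataUpdate() forever.
            System.out.println(consumer.poll(Duration.ofSeconds(5)).count());
        }
    }
}
{code}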



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3727) Consumer.poll() stuck in loop on non-existent topic manually assigned

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3727.
--
Resolution: Fixed

This is addressed by KIP-266.

> Consumer.poll() stuck in loop on non-existent topic manually assigned
> -
>
> Key: KAFKA-3727
> URL: https://issues.apache.org/jira/browse/KAFKA-3727
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>Priority: Critical
>
> The behavior of a consumer on poll() for a non-existing topic is surprisingly 
> different/inconsistent 
> between a consumer that subscribed to the topic and one that had the 
> topic-partition manually assigned.
> The "subscribed" consumer will return an empty collection
> The "assigned" consumer will *loop forever* - this feels like a bug to me.
> sample snippet to reproduce:
> {quote}
> KafkaConsumer assignKc = new KafkaConsumer<>(props1);
> KafkaConsumer subsKc = new KafkaConsumer<>(props2);
> List<TopicPartition> tps = new ArrayList<>();
> tps.add(new TopicPartition("topic-not-exists", 0));
> assignKc.assign(tps);
> subsKc.subscribe(Arrays.asList("topic-not-exists"));
> System.out.println("* subscribe k consumer ");
> ConsumerRecords crs2 = subsKc.poll(1000L); 
> print("subscribeKc", crs2); // returns empty
> System.out.println("* assign k consumer ");
> ConsumerRecords crs1 = assignKc.poll(1000L); 
>// will loop forever ! 
> print("assignKc", crs1);
> {quote}
> the logs for the "assigned" consumer show:
> [2016-05-18 17:33:09,907] DEBUG Updated cluster metadata version 8 to 
> Cluster(nodes = [192.168.10.18:9093 (id: 0 rack: null)], partitions = []) 
> (org.apache.kafka.clients.Metadata)
> [2016-05-18 17:33:09,908] DEBUG Partition topic-not-exists-0 is unknown for 
> fetching offset, wait for metadata refresh 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> [2016-05-18 17:33:10,010] DEBUG Sending metadata request 
> {topics=[topic-not-exists]} to node 0 (org.apache.kafka.clients.NetworkClient)
> [2016-05-18 17:33:10,011] WARN Error while fetching metadata with correlation 
> id 9 : {topic-not-exists=UNKNOWN_TOPIC_OR_PARTITION} 
> (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3899) Consumer.poll() stuck in loop if wrong credentials are supplied

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3899.
--
   Resolution: Fixed
 Assignee: (was: Edoardo Comar)
Fix Version/s: 2.0.0

This is addressed by KIP-266. 

> Consumer.poll() stuck in loop if wrong credentials are supplied
> ---
>
> Key: KAFKA-3899
> URL: https://issues.apache.org/jira/browse/KAFKA-3899
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.1.0
>Reporter: Edoardo Comar
>Priority: Major
> Fix For: 2.0.0
>
>
> With the broker configured to use SASL PLAIN,
> if the client is supplying wrong credentials, 
> a consumer calling poll()
> is stuck forever and only inspection of DEBUG-level logging can tell what is 
> wrong.
> [2016-06-24 12:15:16,455] DEBUG Connection with localhost/127.0.0.1 disconnected (org.apache.kafka.common.network.Selector)
> java.io.EOFException
>   at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
>   at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:239)
>   at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:182)
>   at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
>   at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:318)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>   at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:183)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:973)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3503) Throw exception on missing/non-existent partition

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3503.
--
Resolution: Duplicate

> Throw exception on missing/non-existent  partition 
> ---
>
> Key: KAFKA-3503
> URL: https://issues.apache.org/jira/browse/KAFKA-3503
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
> Environment: Java 1.8.0_60. 
> Linux  centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC
>Reporter: Navin Markandeya
>Priority: Minor
>
> I would expect some exception to be thrown when a consumer tries to access a 
> non-existent partition. I did not see anyone reporting it. If it is already 
> known, please link and close this.
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
> {code}
> Linux centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {code}
> {{Kafka release - kafka_2.11-0.9.0.1}}
> Created a topic with 3 partitions
> {code}
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
> Topic:mytopic PartitionCount:3ReplicationFactor:1 Configs:
>   Topic: mytopic  Partition: 0Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 1Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 2Leader: 0   Replicas: 0 Isr: 0
> {code}
> The consumer application does not terminate. An exception indicating that there 
> is no such {{mytopic-3}} partition would help it terminate gracefully.
> {code}
> 14:08:02.885 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - Fetching 
> committed offsets for partitions: [mytopic-3, mytopic-0, mytopic-1, mytopic-2]
> 14:08:02.887 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-sent
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-received
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.latency
> 14:08:02.888 [main] DEBUG o.apache.kafka.clients.NetworkClient - Completed 
> connection to node 2147483647
> 14:08:02.891 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - No committed 
> offset for partition mytopic-3
> 14:08:02.891 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting 
> offset for partition mytopic-3 to latest offset.
> 14:08:02.892 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:02.965 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804082965, sendTimeMs=0) to node 0
> 14:08:02.968 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 3 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:02.968 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:03.071 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=5,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804083071, sendTimeMs=0) to node 0
> 14:08:03.073 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 4 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:03.073 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3177.
--
Resolution: Fixed

This is addressed by KIP-266.

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 2.0.0
>
>
> This can be easily reproduced as following:
> {code}
> {
> ...
> consumer.assign(SomeNonExsitingTopicParition);
> consumer.position();
> ...
> }
> {code}
> It seems when position is called we will try to do the following:
> 1. Fetch committed offsets.
> 2. If there are no committed offsets, try to reset the offset using the reset 
> strategy. In sendListOffsetRequest(), if the consumer does not know the 
> TopicPartition, it will refresh its metadata and retry. In this case, because 
> the partition does not exist, we fall in to the infinite loop of refreshing 
> topic metadata.
> Another orthogonal issue is that if the topic in the above code piece does 
> not exist, the position() call will actually create the topic, due to the 
> fact that currently a topic metadata request can automatically create the 
> topic. 
> This is a known separate issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6581) ConsumerGroupCommand hangs if even one of the partition is unavailable

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6581.
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.0.2)
   2.0.0

This is addressed by KIP-266.

> ConsumerGroupCommand hangs if even one of the partition is unavailable
> --
>
> Key: KAFKA-6581
> URL: https://issues.apache.org/jira/browse/KAFKA-6581
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, tools
>Affects Versions: 0.10.0.0
>Reporter: Sahil Aggarwal
>Priority: Minor
> Fix For: 2.0.0
>
>
> ConsumerGroupCommand.scala uses consumer internally to get the position for 
> each partition but if the partition is unavailable the call 
> consumer.position(topicPartition) will block indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6973) setting invalid timestamp causes Kafka broker restart to fail

2018-06-01 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6973.

   Resolution: Fixed
Fix Version/s: 2.0.0

> setting invalid timestamp causes Kafka broker restart to fail
> -
>
> Key: KAFKA-6973
> URL: https://issues.apache.org/jira/browse/KAFKA-6973
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.1.0
>Reporter: Paul Brebner
>Assignee: huxihx
>Priority: Critical
> Fix For: 2.0.0
>
>
> Setting the timestamp type to an invalid value causes the Kafka broker to fail 
> upon startup. E.g.
> ./kafka-topics.sh --create --zookeeper localhost --topic duck3 --partitions 1 
> --replication-factor 1 --config message.timestamp.type=boom
>  
> Also note that the docs say the parameter name is 
> log.message.timestamp.type, but this is silently ignored.
> This works with no error for the invalid timestamp value. But next time you 
> restart Kafka:
>  
> [2018-05-29 13:09:05,806] FATAL [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: Invalid timestamp type boom
> at org.apache.kafka.common.record.TimestampType.forName(TimestampType.java:39)
> at kafka.log.LogConfig.<init>(LogConfig.scala:94)
> at kafka.log.LogConfig$.fromProps(LogConfig.scala:279)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:786)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:785)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at kafka.log.LogManager$.apply(LogManager.scala:785)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> [2018-05-29 13:09:05,811] INFO [KafkaServer id=0] shutting down
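
The offending call sits at the top of the trace and can be reproduced in
isolation:

{code:java}
// Reproduces the exception at the top of the trace above, in isolation.
import org.apache.kafka.common.record.TimestampType;

public class TimestampTypeRepro {
    public static void main(String[] args) {
        // Throws java.util.NoSuchElementException: Invalid timestamp type boom
        TimestampType.forName("boom");
    }
}
{code}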



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6936) Scala API Wrapper for Streams uses default serializer for table aggregate

2018-06-01 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6936.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> Scala API Wrapper for Streams uses default serializer for table aggregate
> -
>
> Key: KAFKA-6936
> URL: https://issues.apache.org/jira/browse/KAFKA-6936
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Daniel Heinrich
>Priority: Major
> Fix For: 2.0.0
>
>
> One of the goals of the Scala API is to not fall back on the configured 
> default serializer, but to let the compiler provide serdes through implicits.
> The aggregate method on KGroupedStream fails to achieve this goal.
> Compared to the Java API, this behavior is very surprising, because no other 
> stream operation falls back to the default serializer, and a developer assumes 
> that the compiler checks for the correct serializer type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6980) Recommended MaxDirectMemorySize for consumers

2018-06-01 Thread John Lu (JIRA)
John Lu created KAFKA-6980:
--

 Summary: Recommended MaxDirectMemorySize for consumers
 Key: KAFKA-6980
 URL: https://issues.apache.org/jira/browse/KAFKA-6980
 Project: Kafka
  Issue Type: Wish
  Components: consumer, documentation
Affects Versions: 0.10.2.0
 Environment: CloudFoundry
Reporter: John Lu


We are observing that when MaxDirectMemorySize is set too low, our Kafka 
consumer threads are failing and encountering the following exception:

{{java.lang.OutOfMemoryError: Direct buffer memory}}

Is there a way to estimate how much direct memory is required for optimal 
performance? In the documentation, it is suggested that the amount of memory 
required is [Number of Partitions * max.partition.fetch.bytes].

When we pick a value slightly above that, we no longer encounter the error, but 
if we double or triple the number, our throughput improves drastically. So we 
are wondering whether there is another setting or parameter to consider.
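As a back-of-the-envelope illustration of that documented lower bound (the
figures below are hypothetical, not a tuning recommendation):

```
// Hypothetical: 60 partitions, default max.partition.fetch.bytes (1 MiB).
val partitions = 60
val maxPartitionFetchBytes = 1 * 1024 * 1024
val lowerBoundBytes = partitions.toLong * maxPartitionFetchBytes  // 60 MiB

// Per the report above, setting -XX:MaxDirectMemorySize to 2-3x this
// bound (roughly 120-180 MiB here) improved throughput noticeably.
```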



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3743) kafka-server-start.sh: Unhelpful error message

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3743.
--
Resolution: Duplicate

> kafka-server-start.sh: Unhelpful error message
> --
>
> Key: KAFKA-3743
> URL: https://issues.apache.org/jira/browse/KAFKA-3743
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: Magnus Edenhill
>Priority: Minor
>
> When trying to start Kafka from an uncompiled source tarball rather than the 
> binary distribution, the kafka-server-start.sh command gives a mystical error 
> message:
> ```
> $ bin/kafka-server-start.sh config/server.properties 
> Error: Could not find or load main class config.server.properties
> ```
> This could probably be improved to say something closer to the truth.
> This is on the 0.10.0.0-rc6 tarball from github.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Apache Kafka 2.0.0 Release Progress

2018-06-01 Thread Rajini Sivaram
Hi all,

I have moved one accepted KIP without a PR to the next release. We still
have quite a few KIPs with PRs that are being reviewed, but haven't yet
been merged. I have left all of these in since I think these are minor
features that are not risky. Please ensure that all KIPs for 2.0.0 have
been merged by Tuesday the 5th of June. Any remaining KIPs will be moved to
the next release.

The KIPs still in progress are:

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-183+-+Change+PreferredReplicaLeaderElectionCommand+to+use+AdminClient

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-206%3A+Add+support+for+UUID+serialization+and+deserialization

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-277+-+Fine+Grained+ACL+for+CreateTopics+API

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-282%3A+Add+the+listener+name+to+the+authentication+context

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+wildcard+suffixed+ACLs

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-294+-+Enable+TLS+hostname+verification+by+default

   - https://cwiki.apache.org/confluence/display/KAFKA/KIP-306%3A+Configuration+for+Delaying+Response+to+Failed+Client+Authentication


Thanks,

Rajini

On Tue, May 29, 2018 at 11:19 AM, Rajini Sivaram wrote:

> Hi all,
>
> Since yesterday was a holiday in the US and the UK, we will move feature
> freeze to tomorrow, giving an additional day to merge major features.
> Please ensure that all major features have been merged and that any minor
> features have PRs by EOD tomorrow.
>
> Thank you,
>
> Rajini
>
> On Wed, May 23, 2018 at 4:11 PM, Rajini Sivaram wrote:
>
>> Hi all,
>>
>> Thanks to everyone for participating in KIP discussions and voting in
>> time for KIP-freeze. The list of KIPs that have passed vote are in the
>> release plan:
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>>
>>
>> Feature freeze is on the 29th of May, just a week away, so please ensure
>> that all major features have been merged and that any minor features
>> have PRs by this time.
>>
>> We currently have 58 issues in progress and 108 to do:
>>
>> https://issues.apache.org/jira/projects/KAFKA/versions/12341243
>>
>>
>> If issue owners can start moving non-critical issues out of 2.0.0 that
>> would be great, otherwise anything that is left to do will be moved to
>> the next release.
>>
>> Thank you,
>>
>> Rajini
>>
>
>


RE: [VOTE] KIP-235 Add DNS alias support for secured connection

2018-06-01 Thread Skrzypek, Jonathan
Hi,

I have updated the PR to leverage an enum to drive client dns lookup behaviour.

There are only 2 options for now, but this could be extended to support other 
behaviours (see the attached message from the KIP-302 thread).
The 2 current options:

resolve.canonical.bootstrap.servers.only : perform canonical name resolution on 
the items of bootstrap.servers
disabled : no lookup - this is the current default behaviour

As usual naming things is hard so happy to take suggestions.

https://github.com/apache/kafka/pull/4485
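For illustration, the canonical option corresponds to the standard JDK reverse
lookup; a sketch with hypothetical host names:

```
import java.net.InetAddress

// Resolve the bootstrap alias, then reverse-resolve it to the canonical
// host name (e.g. the name a Kerberos principal would be built from).
val addr = InetAddress.getByName("kafka-alias.example.com")
val canonical = addr.getCanonicalHostName  // e.g. "broker1.example.com"
```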


Jonathan Skrzypek

-Original Message-
From: Ismael Juma [mailto:ism...@juma.me.uk]
Sent: 23 May 2018 01:29
To: dev
Subject: Re: [VOTE] KIP-235 Add DNS alias support for secured connection

Thanks for the KIP. I think this is a good and low risk change. It would be
good to ensure that it works well with KIP-302 if we think that makes sense
too. In any case, +1 (binding).

Ismael

On Fri, Mar 23, 2018 at 12:05 PM Skrzypek, Jonathan <jonathan.skrzy...@gs.com> wrote:

> Hi,
>
> I would like to start a vote for KIP-235
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection
>
> This is a proposition to add an option for reverse dns lookup of
> bootstrap.servers hosts, allowing the use of dns aliases on clusters using
> SASL authentication.
>
>
>
>



--- Begin Message ---
Hi,

As Rajini suggested in the thread for KIP-235 (attached), we could try to have 
an enum that would drive what the client expands/resolves.

I suggest a client config called client.dns.lookup with the following possible 
values:

- no : no dns lookup
- hostnames.only : perform dns lookup on both bootstrap.servers and advertised 
listeners, resolving to host names
- canonical.hostnames.only : perform dns lookup on both bootstrap.servers and 
advertised listeners, resolving to canonical host names
- bootstrap.hostnames.only : perform dns lookup on the bootstrap.servers list and 
expand it, resolving to host names
- bootstrap.canonical.hostnames.only : perform dns lookup on the bootstrap.servers 
list and expand it, resolving to canonical host names
- advertised.listeners.hostnames.only : perform dns lookup on advertised 
listeners, resolving to host names
- advertised.listeners.canonical.hostnames.only : perform dns lookup on advertised 
listeners, resolving to canonical host names

I realize this is a bit heavy, but it gives users the ability to pick and 
choose.
I didn't include a setting to mix hostnames and canonical hostnames as I'm not 
sure there would be a valid use case.

Alternatively, to have fewer possible values, we could use 2 parameters 
(sketched below):

- dns.lookup.type with values : hostname / canonical.host.name
- dns.lookup.behaviour : bootstrap.servers, advertised.listeners, both
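A hypothetical Scala sketch of that two-parameter alternative, showing how it
factors the seven-value enum above into a 2 x 3 matrix (names mirror the
proposal, not any released Kafka API):

```
sealed trait DnsLookupType
case object Hostname          extends DnsLookupType
case object CanonicalHostname extends DnsLookupType

sealed trait DnsLookupBehaviour
case object BootstrapServers    extends DnsLookupBehaviour
case object AdvertisedListeners extends DnsLookupBehaviour
case object Both                extends DnsLookupBehaviour

// e.g. the pair (CanonicalHostname, BootstrapServers) would replace the
// single value "bootstrap.canonical.hostnames.only" above.
```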

Thoughts?

Jonathan Skrzypek


-Original Message-
From: Edoardo Comar [mailto:edoco...@gmail.com]
Sent: 17 May 2018 23:50
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved 
IP addresses

Hi Jonathan,

> A solution might be to expose to users the choice of using hostname or 
> canonical host name on both sides.
> Say having one setting that collapses functionalities from both KIPs 
> (bootstrap expansion + advertised lookup)
> and an additional parameter that defines how the resolution is performed, 
> using getCanonicalHostName() or not.

Thanks - that sounds to me *less* simple than independent config options, sorry.

I would like to say once again that by itself KIP-302 only speeds up
the client behavior that can happen anyway when the client restarts
multiple times, as every time there is no guarantee that - in the
presence of multiple A DNS records - the same IP is returned. Attempting
to use additional IPs if the first fails just makes client recovery
faster.
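For illustration, the multiple-A-record behaviour comes straight from the JDK
resolver (the host name below is hypothetical):

```
import java.net.InetAddress

// getAllByName returns every A record for the name; clients typically use
// only the first, so the address you get can differ on each restart.
// KIP-302 proposes trying the remaining addresses when the first fails.
val addrs = InetAddress.getAllByName("bootstrap.example.com")
addrs.foreach(a => println(a.getHostAddress))
```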

cheers
Edo

On 17 May 2018 at 12:12, Skrzypek, Jonathan  wrote:
> Yes, makes sense.
> You mentioned multiple times you see no overlap and no issue with your KIP, 
> and that they solve different use cases.
>
> I appreciate that you have an existing use case that would work, but we need to 
> make sure this isn't confusing to users and that any combination will always 
> work across security protocols.
>
> A solution might be to expose to users the choice of using hostname or 
> canonical host name on both sides.
> Say having one setting that collapses functionalities from both KIPs 
> (bootstrap expansion + advertised lookup) and an additional parameter that 
> defines how the resolution is performed, using