[jira] [Commented] (KAFKA-6762) log-cleaner thread terminates due to java.lang.IllegalStateException

2018-05-06 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465471#comment-16465471
 ] 

Manikumar commented on KAFKA-6762:
--

Looks like this is related to KAFKA-6834.

> log-cleaner thread terminates due to java.lang.IllegalStateException
> 
>
> Key: KAFKA-6762
> URL: https://issues.apache.org/jira/browse/KAFKA-6762
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.0
> Environment: os: GNU/Linux 
> arch: x86_64 
> Kernel: 4.9.77 
> jvm: OpenJDK 1.8.0
>Reporter: Ricardo Bartolome
>Priority: Major
>
> We are experiencing some problems with the kafka log-cleaner thread on Kafka 
> 1.0.0. We have planned to update this cluster to 1.1.0 by next week in order 
> to fix KAFKA-6683, but until then we can only confirm that it happens in 
> 1.0.0.
> The log-cleaner thread crashes after a while with the following error:
> {code:java}
> [2018-03-28 11:14:40,199] INFO Cleaner 0: Beginning cleaning of log 
> __consumer_offsets-31. (kafka.log.LogCleaner)
> [2018-03-28 11:14:40,199] INFO Cleaner 0: Building offset map for 
> __consumer_offsets-31... (kafka.log.LogCleaner)
> [2018-03-28 11:14:40,218] INFO Cleaner 0: Building offset map for log 
> __consumer_offsets-31 for 16 segments in offset range [1612869, 14282934). 
> (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,566] INFO Cleaner 0: Offset map for log 
> __consumer_offsets-31 complete. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,566] INFO Cleaner 0: Cleaning log __consumer_offsets-31 
> (cleaning prior to Tue Mar 27 09:25:09 GMT 2018, discarding tombstones prior 
> to Sat Feb 24 11:04:21 GMT 2018
> )... (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,567] INFO Cleaner 0: Cleaning segment 0 in log 
> __consumer_offsets-31 (largest timestamp Fri Feb 23 11:40:54 GMT 2018) into 
> 0, discarding deletes. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,570] INFO Cleaner 0: Growing cleaner I/O buffers from 
> 262144 bytes to 524288 bytes. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,576] INFO Cleaner 0: Growing cleaner I/O buffers from 
> 524288 bytes to 112 bytes. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,593] ERROR [kafka-log-cleaner-thread-0]: Error due to 
> (kafka.log.LogCleaner)
> java.lang.IllegalStateException: This log contains a message larger than 
> maximum allowable size of 112.
> at kafka.log.Cleaner.growBuffers(LogCleaner.scala:622)
> at kafka.log.Cleaner.cleanInto(LogCleaner.scala:574)
> at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:459)
> at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:396)
> at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:395)
> at scala.collection.immutable.List.foreach(List.scala:389)
> at kafka.log.Cleaner.doClean(LogCleaner.scala:395)
> at kafka.log.Cleaner.clean(LogCleaner.scala:372)
> at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:263)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:243)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
> [2018-03-28 11:14:58,601] INFO [kafka-log-cleaner-thread-0]: Stopped 
> (kafka.log.LogCleaner)
> [2018-04-04 14:25:12,773] INFO The cleaning for partition 
> __broker-11-health-check-0 is aborted and paused (kafka.log.LogCleaner)
> [2018-04-04 14:25:12,773] INFO Compaction for partition 
> __broker-11-health-check-0 is resumed (kafka.log.LogCleaner)
> [2018-04-04 14:25:12,774] INFO The cleaning for partition 
> __broker-11-health-check-0 is aborted (kafka.log.LogCleaner)
> [2018-04-04 14:25:22,850] INFO Shutting down the log cleaner. 
> (kafka.log.LogCleaner)
> [2018-04-04 14:25:22,850] INFO [kafka-log-cleaner-thread-0]: Shutting down 
> (kafka.log.LogCleaner)
> [2018-04-04 14:25:22,850] INFO [kafka-log-cleaner-thread-0]: Shutdown 
> completed (kafka.log.LogCleaner)
> {code}
> What we know so far is:
>  * We are unable to reproduce it yet in a consistent manner.
>  * It only happens in the PRO cluster and not in the PRE cluster for the same 
> customer (whose message payloads are very similar).
>  * Checking our Kafka logs, it only happened on the internal topics 
> *__consumer_offsets-**
>  * When we restart the broker process, the log-cleaner starts working again, 
> but it can take between 3 minutes and some hours to die again.
>  * We worked around it by temporarily increasing the message.max.bytes and 
> replica.fetch.max.bytes values to 10485760 (10MB) from the default 112 (~1MB).
>  ** Before setting message.max.bytes to 10MB, we tried to match it to the 
> value of replica.fetch.max.bytes (1048576), but the log-cleaner died with the 
> same error and a different limit.
>  ** This kept the log-cleaner from dying.
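The workaround amounts to raising both size limits in the broker's server.properties; a minimal sketch using the values reported above (these are static configs in 1.0, so a broker restart is required):

```properties
# Workaround reported above: raise both limits to 10 MiB so the cleaner's
# I/O buffer can grow past the oversized record batch.
message.max.bytes=10485760
replica.fetch.max.bytes=10485760
```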

[jira] [Resolved] (KAFKA-3921) Periodic refresh of metadata causes spurious log messages

2018-05-06 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3921.
--
Resolution: Auto Closed

Closing inactive issue. The old producer is no longer supported. Please upgrade 
to the Java producer whenever possible.


> Periodic refresh of metadata causes spurious log messages
> -
>
> Key: KAFKA-3921
> URL: https://issues.apache.org/jira/browse/KAFKA-3921
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Steven Schlansker
>Priority: Major
>
> Kafka cluster metadata has a configurable expiry period.  (I don't understand 
> why this is -- cluster updates can happen at any time, and we have to pick 
> those up quicker than every 10 minutes?  But this ticket isn't about that.)
> When this interval expires, the ClientUtils class spins up a SyncProducer, 
> which sends a special message to retrieve metadata.  The producer is then 
> closed immediately after processing this message.
> This causes the SyncProducer to log both a connection open and close at INFO 
> level:
> {code}
> 2016-06-30T17:50:19.408Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.client.ClientUtils$ - 
> Fetching metadata from broker BrokerEndPoint(2,broker-3.mycorp.com,9092) with 
> correlation id 17188 for 1 topic(s) Set(logstash)
> 2016-06-30T17:50:19.410Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Connected to broker-3.mycorp.com:9092 for producing
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-3.mycorp.com:9092
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-14.mycorp.com:9092
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-logkafka-13.mycorp.com:9092
> 2016-06-30T17:50:19.411Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Disconnecting from broker-12.mycorp.com:9092
> 2016-06-30T17:50:19.413Z INFO <> 
> [ProducerSendThread-central-buzzsaw-1-myhost] kafka.producer.SyncProducer - 
> Connected to broker-12.mycorp.com:9092 for producing
> {code}
> When you are reading the logs, this appears periodically.  We've had more 
> than one administrator conclude that the cluster is unhealthy and that client 
> connections are getting dropped -- it's disconnecting from the broker so 
> frequently!  What is wrong???  But in reality, it is just this harmless / 
> expected metadata update.
> Can we tweak the log levels so that the periodic background refresh does not 
> log unless something goes wrong?  The log messages are misleading and easy to 
> misinterpret.  I had to read the code pretty thoroughly to convince myself 
> that these messages are actually harmless.
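Pending a change in Kafka itself, the noise can be suppressed from the client side by raising the level of the chatty loggers in the client's log4j configuration; a sketch only (logger names inferred from the messages quoted above):

```properties
# Silence routine metadata-refresh connect/disconnect chatter.
log4j.logger.kafka.producer.SyncProducer=WARN
log4j.logger.kafka.client.ClientUtils$=WARN
```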



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6881) Kafka 1.1 Broker version crashes when deleting log

2018-05-07 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466911#comment-16466911
 ] 

Manikumar commented on KAFKA-6881:
--

By default, Kafka is configured to store its data logs in the /tmp directory. 
/tmp gets cleared during system reboots, OS cleanups, etc.
For production deployments, you will need to point the "log.dirs" property in 
your broker's server.properties file at durable directories.
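A minimal server.properties sketch (the paths here are examples only, not a recommendation for any particular layout):

```properties
# Keep Kafka data on durable storage; /tmp is wiped on reboot/cleanup.
# Multiple directories may be given as a comma-separated list.
log.dirs=/var/lib/kafka/data-1,/var/lib/kafka/data-2
```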

> Kafka 1.1 Broker version crashes when deleting log
> --
>
> Key: KAFKA-6881
> URL: https://issues.apache.org/jira/browse/KAFKA-6881
> Project: Kafka
>  Issue Type: Bug
> Environment: Linux
>Reporter: K B Parthasarathy
>Priority: Critical
>
> Hello
> We have been running Kafka version 1.1 on Linux for the past 3 weeks. Today 
> Kafka crashed. When we checked the server.log file, the following log was found:
> [2018-05-07 16:53:06,721] ERROR Failed to clean up log for 
> __consumer_offsets-24 in dir /tmp/kafka-logs due to IOException 
> (kafka.server.LogDirFailureChannel)
>  java.nio.file.NoSuchFileException: 
> /tmp/kafka-logs/__consumer_offsets-24/.log
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>  at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
>  at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
>  at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
>  at kafka.log.Log.asyncDeleteSegment(Log.scala:1601)
>  at kafka.log.Log.$anonfun$replaceSegments$1(Log.scala:1653)
>  at kafka.log.Log.$anonfun$replaceSegments$1$adapted(Log.scala:1648)
>  at scala.collection.immutable.List.foreach(List.scala:389)
>  at kafka.log.Log.replaceSegments(Log.scala:1648)
>  at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:535)
>  at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:462)
>  at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:461)
>  at scala.collection.immutable.List.foreach(List.scala:389)
>  at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
>  at kafka.log.Cleaner.clean(LogCleaner.scala:438)
>  at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
>  at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
>  Suppressed: java.nio.file.NoSuchFileException: 
> /tmp/kafka-logs/__consumer_offsets-24/.log -> 
> /tmp/kafka-logs/__consumer_offsets-24/.log.deleted
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>  at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
>  at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
>  ... 16 more
>  [2018-05-07 16:53:06,725] INFO [ReplicaManager broker=0] Stopping serving 
> replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
>  [2018-05-07 16:53:06,762] INFO Stopping serving logs in dir /tmp/kafka-logs 
> (kafka.log.LogManager)
>  [2018-05-07 16:53:07,032] ERROR Shutdown broker because all log dirs in 
> /tmp/kafka-logs have failed (kafka.log.LogManager)
>  
> Please let me know what may be the issue
>  
> Partha





[jira] [Commented] (KAFKA-6885) DescribeConfigs synonyms are identical to parent entry for BROKER resources

2018-05-08 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467896#comment-16467896
 ] 

Manikumar commented on KAFKA-6885:
--

In the case of topics, there is a difference between the names of topic 
override configs and server default configs (e.g. "max.message.bytes" is the 
topic override config name for the "message.max.bytes" server property). In 
this case the synonym name will be different.

But in the case of brokers, there is no name difference between dynamic broker 
configs and server default configs (both places use the "message.max.bytes" 
config name). We can differentiate by using the synonym's source property 
(DEFAULT_CONFIG, STATIC_BROKER_CONFIG, etc.).
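For illustration, the synonyms and their sources can be read back with the Java AdminClient; this is a sketch only (assumes kafka-clients 1.1+ on the classpath and a broker at localhost:9092, both of which are examples):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ShowSynonyms {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Ask broker id 0 for its message.max.bytes entry.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                                 .all().get().get(broker);
            ConfigEntry entry = config.get("message.max.bytes");
            for (ConfigEntry.ConfigSynonym s : entry.synonyms()) {
                // For brokers the same name can repeat; source() disambiguates
                // (e.g. STATIC_BROKER_CONFIG vs DEFAULT_CONFIG).
                System.out.println(s.name() + " <- " + s.source());
            }
        }
    }
}
```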



> DescribeConfigs synonyms are identical to parent entry for BROKER 
> resources
> ---
>
> Key: KAFKA-6885
> URL: https://issues.apache.org/jira/browse/KAFKA-6885
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.1.0
>Reporter: Magnus Edenhill
>Priority: Major
>
> The DescribeConfigs protocol response for BROKER resources returns synonyms 
> for various configuration entries, such as "log.dir".
> The list of synonyms returned are identical to their parent configuration 
> entry, rather than the actual synonyms.
> For example, for the broker "log.dir" config entry it returns one synonym, 
> also named "log.dir" rather than "log.dirs" or whatever the synonym is 
> supposed to be.
>  
> This does not seem to happen for TOPIC resources.





[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2017-07-12 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16083972#comment-16083972
 ] 

Manikumar commented on KAFKA-1696:
--

[~guozhang] This got delayed due to some internal work. Will raise the first 
version of the PR by the end of this month.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.





[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2017-07-12 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16083980#comment-16083980
 ] 

Manikumar commented on KAFKA-1499:
--

[~sludwig] Yes, newly produced data will use the latest compression.type. In 
compact mode, it is possible that old data may get compacted using the new 
compression.type.

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar
>  Labels: newbie++
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1499_2014-08-15_14:20:27.patch, 
> KAFKA-1499_2014-08-21_21:44:27.patch, KAFKA-1499_2014-09-21_15:57:23.patch, 
> KAFKA-1499_2014-09-23_14:45:38.patch, KAFKA-1499_2014-09-24_14:20:33.patch, 
> KAFKA-1499_2014-09-24_14:24:54.patch, KAFKA-1499_2014-09-25_11:05:57.patch, 
> KAFKA-1499_2014-10-27_13:13:55.patch, KAFKA-1499_2014-12-16_22:39:10.patch, 
> KAFKA-1499_2014-12-26_21:37:51.patch, KAFKA-1499.patch, KAFKA-1499.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs. i.e., it can
> also have a mix of uncompressed/compressed messages.
> It will be useful to support a broker-side configuration to recompress
> messages to a specific compression codec. i.e., all messages (for all
> topics) on the broker will be compressed to this codec. We could have
> per-topic overrides as well.





[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2017-07-12 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084244#comment-16084244
 ] 

Manikumar commented on KAFKA-1499:
--

[~sludwig] Yes, you can override compression.type at the topic level. You can 
use the kafka-configs.sh script to set the topic-level compression.type config:
{code}
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name my-topic --alter --add-config compression.type=snappy
{code}
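The resulting override can then be checked with the same script; a sketch of the usual describe invocation (topic name carried over from the example above):

```shell
# Show topic-level overrides for my-topic.
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name my-topic --describe
```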

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar
>  Labels: newbie++
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1499_2014-08-15_14:20:27.patch, 
> KAFKA-1499_2014-08-21_21:44:27.patch, KAFKA-1499_2014-09-21_15:57:23.patch, 
> KAFKA-1499_2014-09-23_14:45:38.patch, KAFKA-1499_2014-09-24_14:20:33.patch, 
> KAFKA-1499_2014-09-24_14:24:54.patch, KAFKA-1499_2014-09-25_11:05:57.patch, 
> KAFKA-1499_2014-10-27_13:13:55.patch, KAFKA-1499_2014-12-16_22:39:10.patch, 
> KAFKA-1499_2014-12-26_21:37:51.patch, KAFKA-1499.patch, KAFKA-1499.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs. i.e., it can
> also have a mix of uncompressed/compressed messages.
> It will be useful to support a broker-side configuration to recompress
> messages to a specific compression codec. i.e., all messages (for all
> topics) on the broker will be compressed to this codec. We could have
> per-topic overrides as well.





[jira] [Updated] (KAFKA-4914) Partition re-assignment tool should check types before persisting state in ZooKeeper

2017-07-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4914:
-
Fix Version/s: 1.0.0

> Partition re-assignment tool should check types before persisting state in 
> ZooKeeper
> 
>
> Key: KAFKA-4914
> URL: https://issues.apache.org/jira/browse/KAFKA-4914
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.1.1
>Reporter: Nick Travers
>Priority: Minor
> Fix For: 1.0.0
>
>
> The partition-reassignment tool currently allows non-type-safe information to 
> be persisted into ZooKeeper, which can result in a ClassCastException at 
> runtime for brokers.
> Specifically, this occurred when the broker assignment field was a List of 
> Strings instead of a List of Integers.
> {code}
> 2017-03-15 01:44:04,572 ERROR 
> [ZkClient-EventThread-36-samsa-zkserver.stage.sjc1.square:26101/samsa] 
> controller.ReplicaStateMachine$BrokerChangeListener - [BrokerChangeListener 
> on Controller 10]: Error while handling broker changes
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer
> at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
> at 
> kafka.controller.KafkaController$$anonfun$8$$anonfun$apply$2.apply(KafkaController.scala:436)
> at 
> scala.collection.LinearSeqOptimized$class.exists(LinearSeqOptimized.scala:93)
> at scala.collection.immutable.List.exists(List.scala:84)
> at 
> kafka.controller.KafkaController$$anonfun$8.apply(KafkaController.scala:436)
> at 
> kafka.controller.KafkaController$$anonfun$8.apply(KafkaController.scala:435)
> at 
> scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
> at 
> scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
> at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
> at 
> kafka.controller.KafkaController.onBrokerStartup(KafkaController.scala:435)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:374)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
> at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}
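For illustration, the reassignment JSON the tool should persist uses integer broker ids in "replicas"; string ids (e.g. ["1", "2"]) are what produce the ClassCastException above. A minimal well-formed sketch (topic name hypothetical):

```json
{
  "version": 1,
  "partitions": [
    {"topic": "foo", "partition": 0, "replicas": [1, 2, 3]}
  ]
}
```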





[jira] [Created] (KAFKA-5644) Transient test failure: ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime

2017-07-26 Thread Manikumar (JIRA)
Manikumar created KAFKA-5644:


 Summary: Transient test failure: 
ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime
 Key: KAFKA-5644
 URL: https://issues.apache.org/jira/browse/KAFKA-5644
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Manikumar
Priority: Minor


{quote}
unit.kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
FAILED
java.lang.AssertionError: Expected the consumer group to reset to when 
offset was 50.
at kafka.utils.TestUtils$.fail(TestUtils.scala:339)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:853)
at 
unit.kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime(ResetConsumerGroupOffsetTest.scala:188)
{quote}





[jira] [Assigned] (KAFKA-4541) Add capability to create delegation token

2017-08-03 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4541:


Assignee: Manikumar  (was: Ashish Singh)

> Add capability to create delegation token
> -
>
> Key: KAFKA-4541
> URL: https://issues.apache.org/jira/browse/KAFKA-4541
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Manikumar
>
> Add request/ response and server side handling to create delegation tokens.





[jira] [Assigned] (KAFKA-4544) Add system tests for delegation token based authentication

2017-08-03 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4544:


Assignee: Manikumar

> Add system tests for delegation token based authentication
> --
>
> Key: KAFKA-4544
> URL: https://issues.apache.org/jira/browse/KAFKA-4544
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Manikumar
>
> Add system tests for delegation token based authentication.





[jira] [Commented] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-08-04 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16114219#comment-16114219
 ] 

Manikumar commented on KAFKA-5647:
--

[~ijuma] [~onurkaraman] Are we going to refactor ZkUtils, 
ZkNodeChangeNotificationListener, etc. to use the async ZookeeperClient? I have 
used ZkUtils and ZkNodeChangeNotificationListener in my delegation token work. 
I am planning to use the async ZookeeperClient, but I think we need to migrate 
the existing utilities. Right?

> Use async ZookeeperClient for Admin operations
> --
>
> Key: KAFKA-5647
> URL: https://issues.apache.org/jira/browse/KAFKA-5647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>






[jira] [Comment Edited] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-08-04 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16114219#comment-16114219
 ] 

Manikumar edited comment on KAFKA-5647 at 8/4/17 10:50 AM:
---

[~ijuma] [~onurkaraman] Are we going to refactor ZkUtils, 
ZkNodeChangeNotificationListener, etc. to use the async ZookeeperClient? I have 
used ZkUtils and ZkNodeChangeNotificationListener in my delegation token work. 
I am planning to use the async ZookeeperClient, but I think we need to first 
migrate the existing utilities. Right?


was (Author: omkreddy):
[~ijuma] [~onurkaraman]   Are we going to refactor ZkUtils, 
ZkNodeChangeNotificationListener etc.. to use async ZookeeperClient .  I have 
used zkUtils , ZkNodeChangeNotificationListener in my delegation token work.  I 
am planning to use async ZookeeperClient but I think we need to migrate 
existing utilities. Right? 

> Use async ZookeeperClient for Admin operations
> --
>
> Key: KAFKA-5647
> URL: https://issues.apache.org/jira/browse/KAFKA-5647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>






[jira] [Assigned] (KAFKA-5644) Transient test failure: ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime

2017-08-04 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5644:


Assignee: Manikumar

> Transient test failure: 
> ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime
> 
>
> Key: KAFKA-5644
> URL: https://issues.apache.org/jira/browse/KAFKA-5644
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
>
> {quote}
> unit.kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsToZonedDateTime FAILED
> java.lang.AssertionError: Expected the consumer group to reset to when 
> offset was 50.
> at kafka.utils.TestUtils$.fail(TestUtils.scala:339)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:853)
> at 
> unit.kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime(ResetConsumerGroupOffsetTest.scala:188)
> {quote}





[jira] [Resolved] (KAFKA-3389) ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well case when there are no replicas for topic

2017-08-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3389.
--
Resolution: Won't Fix

As mentioned in the previous comment, this may not be an issue. Please reopen 
if the issue still exists.

> ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well 
> case when there are no replicas for topic
> ---
>
> Key: KAFKA-3389
> URL: https://issues.apache.org/jira/browse/KAFKA-3389
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.1
>Reporter: Stevo Slavic
>Assignee: Manikumar
>Priority: Minor
>
> Line ReplicaStateMachine.scala#L285
> {noformat}
> replicaStatesForTopic.forall(_._2 == ReplicaDeletionSuccessful)
> {noformat}
> which is return value of {{areAllReplicasForTopicDeleted}} function/check, 
> probably should better be checking for
> {noformat}
> replicaStatesForTopic.isEmpty || replicaStatesForTopic.forall(_._2 == 
> ReplicaDeletionSuccessful)
> {noformat}
> I noticed it because in controller logs I found entries like:
> {noformat}
> [2016-03-04 13:27:29,115] DEBUG [Replica state machine on controller 1]: Are 
> all replicas for topic foo deleted Map() 
> (kafka.controller.ReplicaStateMachine)
> {noformat}
> even though normally they look like:
> {noformat}
> [2016-03-04 09:33:41,036] DEBUG [Replica state machine on controller 1]: Are 
> all replicas for topic foo deleted Map([Topic=foo,Partition=0,Replica=0] -> 
> ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=3] -> 
> ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=1] -> 
> ReplicaDeletionSuccessful) (kafka.controller.ReplicaStateMachine)
> {noformat}
> This may cause topic deletion request never to be cleared from ZK even when 
> topic has been deleted.





[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-11 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16123074#comment-16123074
 ] 

Manikumar commented on KAFKA-5714:
--

[~a...@confluent.io] From the code, I can see that we are allowing whitespace 
in the principal name. Can you explain more about your observation/scenario?

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}
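The requested parsing behavior can be illustrated outside Kafka: treat ';' as the principal separator and strip surrounding whitespace, rather than splitting on every comma (an X.500 DN contains commas internally). A hypothetical parser for illustration only, not Kafka's actual implementation:

```python
def parse_super_users(value):
    """Split a super.users-style value into principals.

    Principals are separated by ';'; each principal keeps its internal
    commas (a DN contains them), and surrounding whitespace is stripped
    so 'CN=Unknown, OU=Unknown' style input parses cleanly.
    """
    return [p.strip() for p in value.split(";") if p.strip()]

value = ("User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, "
         "ST=Unknown, C=Unknown; User:alice")
# Each entry is one whole principal, with whitespace trimmed.
print(parse_super_users(value))
```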





[jira] [Assigned] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-11 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5714:


Assignee: Manikumar

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>





[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127078#comment-16127078
 ] 

Manikumar commented on KAFKA-5734:
--

We can use the jmap command to output a histogram of the Java object heap, which 
will help us analyze heap memory usage.
Take outputs periodically and compare them.

{code}jdk/bin/jmap -histo:live PID{code}


> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx
>
>
> I set up a Kafka server on Ubuntu with 4GB RAM.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached Excel file, which recorded GC info at 1-minute intervals.
> Finally OU occupies 2.6GB and GC spends too much time (and an out-of-memory 
> exception occurs).
> Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_





[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127513#comment-16127513
 ] 

Manikumar commented on KAFKA-5714:
--

ZK-based topic creation/deletion doesn't go through ACL authorization, so I am 
not sure how these are related. You can enable the authorizer logs to check for 
any denied operations.
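
For reference, a minimal sketch of how the authorizer log could be enabled, 
assuming the stock log4j.properties shipped with the broker (the appender name 
is illustrative and would need to match one defined in your file):

{code}
# config/log4j.properties: log every authorization decision
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
{code}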

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>





[jira] [Resolved] (KAFKA-1821) Example shell scripts broken

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1821.
--
Resolution: Fixed

It is working in newer Kafka versions.

> Example shell scripts broken
> 
>
> Key: KAFKA-1821
> URL: https://issues.apache.org/jira/browse/KAFKA-1821
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging, tools
>Affects Versions: 0.8.1.1
> Environment: Ubuntu 14.04, Linux 75477193b766 3.13.0-24-generic 
> #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux. 
> Scala: 2.8.0
>Reporter: Yong Fu
>Priority: Minor
>
> After running ./gradlew jarAll to generate all jars, including those for the 
> examples, I tried to run the producer-consumer demo from the shell scripts. It 
> doesn't work and throws ClassNotFoundException. It seems the shell scripts 
> (java-producer-consumer-demo and java-simple-consumer-demo) still assume the 
> sbt library layout, so they cannot find the jar files under the new structure 
> imposed by Gradle.





[jira] [Resolved] (KAFKA-1832) Async Producer will cause 'java.net.SocketException: Too many open files' when broker host does not exist

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1832.
--
Resolution: Fixed

Fixed in KAFKA-1041.

> Async Producer will cause 'java.net.SocketException: Too many open files' 
> when broker host does not exist
> -
>
> Key: KAFKA-1832
> URL: https://issues.apache.org/jira/browse/KAFKA-1832
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1, 0.8.1.1
> Environment: linux
>Reporter: barney
>Assignee: Jun Rao
>
> h3.How to reproduce the problem:
> * producer configuration:
> ** producer.type=async
> ** metadata.broker.list=not.existed.com:9092
> Make sure the host '*not.existed.com*' does not exist in the DNS server or 
> /etc/hosts;
> * send a lot of messages continuously using the above producer
> It will cause '*java.net.SocketException: Too many open files*' after a 
> while; you can use '*lsof -p $pid|wc -l*' to check the count of open files, 
> which will keep increasing as time goes by until it reaches the system limit 
> (check with '*ulimit -n*').
> h3.Problem cause:
> {code:title=kafka.network.BlockingChannel|borderStyle=solid} 
> channel.connect(new InetSocketAddress(host, port))
> {code}
> this line will throw a '*java.nio.channels.UnresolvedAddressException*' when 
> the broker host does not exist, and at the same time the field '*connected*' 
> remains false;
> In *kafka.producer.SyncProducer*, '*disconnect()*' will not invoke 
> '*blockingChannel.disconnect()*' because '*blockingChannel.isConnected*' is 
> false, which means the FileDescriptor is created but never closed;
> h3.More:
> When the broker is a non-existent IP (for example: 
> metadata.broker.list=1.1.1.1:9092) instead of a non-existent host, the 
> problem does not appear;
> In *SocketChannelImpl.connect()*, '*Net.checkAddress()*' is not in the 
> try-catch block but '*Net.connect()*' is, which makes the difference;
> h3.Temporary Solution:
> {code:title=kafka.network.BlockingChannel|borderStyle=solid}
> try {
>   channel.connect(new InetSocketAddress(host, port))
> } catch {
>   case e: UnresolvedAddressException =>
>     disconnect()
>     throw e
> }
> {code}





[jira] [Commented] (KAFKA-2206) Add AlterConfig and DescribeConfig requests to Kafka

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127676#comment-16127676
 ] 

Manikumar commented on KAFKA-2206:
--

This looks like a duplicate of KIP-133 / KAFKA-3267. If so, we can close this 
JIRA.
cc [~ijuma]

> Add AlterConfig and DescribeConfig requests to Kafka
> 
>
> Key: KAFKA-2206
> URL: https://issues.apache.org/jira/browse/KAFKA-2206
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration#KIP-21-DynamicConfiguration-ConfigAPI





[jira] [Resolved] (KAFKA-2220) Improvement: Could we support rewind by time ?

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2220.
--
Resolution: Fixed

This got fixed in  KAFKA-4743 / KIP-122.

> Improvement: Could we support  rewind by time  ?
> 
>
> Key: KAFKA-2220
> URL: https://issues.apache.org/jira/browse/KAFKA-2220
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Li Junjun
> Attachments: screenshot.png
>
>
> Improvement: Support rewind by time!
> My scenario is as follows:
> A program reads records from Kafka, processes them, and then writes to a dir 
> in HDFS like /hive/year=/month=xx/day=xx/hour=10. If the program goes down, I 
> can restart it, so it reads from the last offset.
> But what if the program was configured with wrong params? Then I need to 
> remove the dir hour=10, reconfigure my program, and find the offset where 
> hour=10 starts, but currently I can't do this.
> And there are many scenarios like this.
> So, can we add a time partition, so we can rewind by time?





[jira] [Resolved] (KAFKA-2283) scheduler exception on non-controller node when shutdown

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2283.
--
Resolution: Fixed

> scheduler exception on non-controller node when shutdown
> 
>
> Key: KAFKA-2283
> URL: https://issues.apache.org/jira/browse/KAFKA-2283
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.2.1
> Environment: linux debian
>Reporter: allenlee
>Assignee: Neha Narkhede
>Priority: Minor
>
> When the broker shuts down, there is an error log about 'Kafka scheduler has 
> not been started'.
> It only appears on non-controller nodes. If the broker is the controller, it 
> shuts down without the warning log.
> IMHO, *autoRebalanceScheduler.shutdown()* should only be valid for the 
> controller, right?
> {quote}
> [2015-06-17 22:32:51,814] INFO Shutdown complete. (kafka.log.LogManager)
> [2015-06-17 22:32:51,815] WARN Kafka scheduler has not been started 
> (kafka.utils.Utils$)
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$8.apply$mcV$sp(KafkaServer.scala:285)
> at kafka.utils.Utils$.swallow(Utils.scala:172)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.Utils$.swallow(Utils.scala:45)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:285)
> at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
> at kafka.Kafka$$anon$1.run(Kafka.scala:42)
> [2015-06-17 22:32:51,818] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> {quote}





[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127737#comment-16127737
 ] 

Manikumar commented on KAFKA-5714:
--

By default, the SSL user name will be of the form 
"CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown", without 
any spaces.
Are you sure there are spaces in your SSL user name?

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>





[jira] [Resolved] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3087.
--
Resolution: Fixed

This doc issue was fixed in newer Kafka versions.

> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set on the broker, and some properties can be 
> overridden at the topic level.
> |Property |Default|Server Default property| Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the 
> maximum time we will retain a log before we will discard old log segments to 
> free up space if we are using the "delete" retention policy. This represents 
> an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not minutes, so the corresponding 
> *Server Default property* should be *log.retention.ms* instead of 
> *log.retention.minutes*.
> It would be better to state whether the time is in millis/minutes/hours on 
> the documentation page and to document it in the code as well (right now the 
> code just says *age*; we should specify the *age* with its time granularity).





[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-16 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128831#comment-16128831
 ] 

Manikumar commented on KAFKA-5714:
--

>>The point is, that I am expecting the same behavior, whether I put this name 
>>in server.properties with spaces, or without.
OK, I got your point, but why are we expecting the same behavior? KafkaPrincipal 
is formed from the name of the principal received from the underlying channel. 
In the case of SSL, it is the string representation of the X.500 certificate's 
distinguished name. This is a comma-separated string of attribute key/value 
pairs without any spaces, so we expect the same string to be used in configs 
(super.users) and scripts (kafka-acls.sh). We also have the PrincipalBuilder 
interface for any customization.

Not sure we want to trim whitespace from the principal name. Let us hear 
others' opinions on this.
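
To illustrate why the two spellings diverge: as parsed X.500 names they are 
equal, but Kafka compares the configured principal as a plain string. A minimal 
sketch using the JDK's javax.naming.ldap.LdapName (the class and method names 
below are illustrative, not Kafka code):

```java
import javax.naming.ldap.LdapName;

public class PrincipalDn {
    // true when two DN strings parse to the same X.500 name
    static boolean sameDn(String a, String b) throws Exception {
        // LdapName's RFC 2253 parser ignores whitespace around commas
        return new LdapName(a).equals(new LdapName(b));
    }

    public static void main(String[] args) throws Exception {
        String compact = "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";
        String spaced  = "CN=writeuser, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown";
        // Equal as parsed X.500 names, but not as the plain strings
        // that the super.users check compares
        System.out.println(sameDn(compact, spaced));
        System.out.println(compact.equals(spaced));
    }
}
```

So the spaced form is a perfectly valid spelling of the same distinguished 
name; it just does not match the compact string the broker derives from the SSL 
session.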


> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>





[jira] [Resolved] (KAFKA-4951) KafkaProducer may send duplicated message sometimes

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4951.
--
Resolution: Fixed

This scenario is handled by the idempotent producer (KIP-98) released in Kafka 
0.11.0.0. Please reopen if you think the issue still exists.
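
The KIP-98 idempotence is opt-in on the 0.11.x producer. A minimal sketch of 
the client settings involved, assuming a local broker (the class name is 
illustrative; only the property keys come from the Kafka producer configs):

```java
import java.util.Properties;

public class IdempotentProducerProps {
    // Producer settings that enable KIP-98 idempotence (Kafka >= 0.11.0.0)
    static Properties props() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local broker
        p.setProperty("enable.idempotence", "true"); // broker de-duplicates retried batches
        p.setProperty("acks", "all");                // required when idempotence is on
        p.setProperty("retries", Integer.toString(Integer.MAX_VALUE));
        return p;
    }

    public static void main(String[] args) {
        System.out.println(props().getProperty("enable.idempotence"));
    }
}
```

With these properties passed to a KafkaProducer, a batch that was written but 
whose response timed out is retried with the same sequence number and dropped 
by the broker instead of being appended a second time.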

> KafkaProducer may send duplicated message sometimes
> ---
>
> Key: KAFKA-4951
> URL: https://issues.apache.org/jira/browse/KAFKA-4951
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: cuiyang
>
> I found that KafkaProducer may send duplicated messages sometimes, which 
> happens when, in the Sender thread:
> NetworkClient::poll()
>   -> this.selector.poll()  // sends the message, such as "abc", to the 
> broker successfully
>   -> handleTimedOutRequests(responses, updatedNow);  // judges whether the 
> message "abc" sent above has expired or timed out, based on the parameter 
> this.requestTimeoutMs and updatedNow
>   -> response.request().callback().onComplete()
>   -> completeBatch(batch, Errors.NETWORK_EXCEPTION, -1L, correlationId, now); 
>  // if the message was judged as expired, it will be re-enqueued and sent 
> again on the next loop
>   -> this.accumulator.reenqueue(batch, now);
> The problem: if the message "abc" is sent successfully to the broker but is 
> then judged to be expired, the message will be sent again on the next loop, 
> which duplicates it.
> I can reproduce this scenario reliably.
> In my opinion, Sender::handleTimedOutRequests() is not very useful, because 
> the response from the broker is successful and has no error, which means the 
> broker persisted the message. This function induces the duplicated-message 
> problem.





[jira] [Resolved] (KAFKA-4889) 2G8lc

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4889.
--
Resolution: Invalid

> 2G8lc
> -
>
> Key: KAFKA-4889
> URL: https://issues.apache.org/jira/browse/KAFKA-4889
> Project: Kafka
>  Issue Type: Task
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4847) 1Y30J

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4847.
--
Resolution: Invalid

> 1Y30J
> -
>
> Key: KAFKA-4847
> URL: https://issues.apache.org/jira/browse/KAFKA-4847
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4865) 2X8BF

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4865.
--
Resolution: Invalid

> 2X8BF
> -
>
> Key: KAFKA-4865
> URL: https://issues.apache.org/jira/browse/KAFKA-4865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4804) TdOZY

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4804.
--
Resolution: Invalid

> TdOZY
> -
>
> Key: KAFKA-4804
> URL: https://issues.apache.org/jira/browse/KAFKA-4804
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4821) 9244L

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4821.
--
Resolution: Invalid

> 9244L
> -
>
> Key: KAFKA-4821
> URL: https://issues.apache.org/jira/browse/KAFKA-4821
> Project: Kafka
>  Issue Type: Task
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4813) 2h6R1

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4813.
--
Resolution: Invalid

> 2h6R1
> -
>
> Key: KAFKA-4813
> URL: https://issues.apache.org/jira/browse/KAFKA-4813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4803) OT6Y1

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4803.
--
Resolution: Invalid

> OT6Y1
> -
>
> Key: KAFKA-4803
> URL: https://issues.apache.org/jira/browse/KAFKA-4803
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-3796) SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3796.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.

> SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk
> --
>
> Key: KAFKA-3796
> URL: https://issues.apache.org/jira/browse/KAFKA-3796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, security
>Affects Versions: 0.10.0.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Trivial
>
> org.apache.kafka.common.network.SslTransportLayerTest > 
> testEndpointIdentificationDisabled FAILED
> java.net.BindException: Can't assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.kafka.common.network.NioEchoServer.(NioEchoServer.java:48)
> at 
> org.apache.kafka.common.network.SslTransportLayerTest.testEndpointIdentificationDisabled(SslTransportLayerTest.java:120)





[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-08-17 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130145#comment-16130145
 ] 

Manikumar commented on KAFKA-5734:
--

Most of the heap entries are related to metrics, so this may be related to the 
metrics leak in KAFKA-4629. Can you connect JConsole and check for any increase 
in the number of metrics?
I hope you are taking this output when the heap is almost full.
How many brokers/topics/partitions are there in your setup? Are you running 
any consumers? How much load are you generating?
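
One way to check for a metrics leak is to sample the MBean count over time and 
see whether it only ever grows. A sketch using the JDK's JMX API (it queries 
the local platform MBeanServer as a stand-in; against a broker you would attach 
to its JMX port and use a pattern such as kafka.server:*):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanCount {
    // Number of registered MBeans matching the given ObjectName pattern
    static int count(String pattern) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName(pattern), null).size();
    }

    public static void main(String[] args) throws Exception {
        // "java.lang:*" always exists; on a broker use e.g. "kafka.server:*"
        System.out.println(count("java.lang:*"));
    }
}
```

Sampling this periodically, like the jmap histograms, makes a steadily growing 
metric count easy to spot.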


> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx
>
>





[jira] [Resolved] (KAFKA-196) Topic creation fails on large values

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-196.
-
Resolution: Fixed

Topic MAX_NAME_LENGTH is set to 249 in newer Kafka versions.

> Topic creation fails on large values
> 
>
> Key: KAFKA-196
> URL: https://issues.apache.org/jira/browse/KAFKA-196
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Pierre-Yves Ritschard
> Attachments: 
> 0001-Set-a-hard-limit-on-topic-width-this-fixes-KAFKA-196.patch
>
>
> Since topic logs are stored in a directory holding the topic's name, creation 
> of the directory might fail for large strings.
> This is not a problem per-se but the exception thrown is rather cryptic and 
> hard to figure out for operations.
> I propose fixing this temporarily with a hard limit of 200 chars for topic 
> names, it would also be possible to hash the topic name.
> Another concern is that the exception raised stops the broker, effectively 
> creating  a simple DoS vector, I'm concerned about how tests or wrong client 
> library usage can take down the whole broker.





[jira] [Resolved] (KAFKA-186) no clean way to getCompressionCodec from Java-the-language

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-186.
-
Resolution: Fixed

The CompressionType Java class was added in newer Kafka versions.

> no clean way to getCompressionCodec from Java-the-language
> --
>
> Key: KAFKA-186
> URL: https://issues.apache.org/jira/browse/KAFKA-186
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.7
>Reporter: Chris Burroughs
>
> The obvious thing fails:
> CompressionCodec.getCompressionCodec(1) results in cannot find symbol
> symbol  : method getCompressionCodec(int)
> location: interface kafka.message.CompressionCodec
> Writing a switch statement with  kafka.message.NoCompressionCodec$.MODULE$ 
> and duplicating the logic in CompressionCodec.getCompressionCodec is no fun, 
> nor is creating a Hashtable just to call Utils.getCompressionCodec.  I'm not 
> sure if there is a magic keyword to make it easy for javac to understand 
> which CompressionCodec I'm referring to.





[jira] [Resolved] (KAFKA-236) Make 'autooffset.reset' accept a delay in addition to {smallest,largest}

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-236.
-
Resolution: Fixed

This can be achieved by using the consumer group offset reset tool or the 
KafkaConsumer.offsetsForTimes API in the latest Kafka versions.
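
For example, with the reset tool shipped since KIP-122 (the group and topic 
names here are placeholders):

{code}
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-datetime 2017-08-17T10:00:00.000 --execute
{code}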

> Make 'autooffset.reset' accept a delay in addition to {smallest,largest}
> 
>
> Key: KAFKA-236
> URL: https://issues.apache.org/jira/browse/KAFKA-236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mathias Herberts
>
> Add the possibilty to specify a delay in ms which would be used when 
> resetting offset.
> This would allow for example a client to specify it would like its offset to 
> be reset to the first offset before/after the current time - the given offset.





[jira] [Resolved] (KAFKA-276) Enforce max.message.size on the total message size, not just on payload size

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-276.
-
Resolution: Fixed

This was fixed in newer Kafka versions.

> Enforce max.message.size on the total message size, not just on payload size
> 
>
> Key: KAFKA-276
> URL: https://issues.apache.org/jira/browse/KAFKA-276
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.7
>Reporter: Neha Narkhede
>
> Today, the max.message.size config is enforced only on the payload size of 
> the message. But the actual message size is header size + payload size.





[jira] [Resolved] (KAFKA-275) max.message.size is not enforced for compressed messages

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-275.
-
Resolution: Fixed

This issue is fixed in the latest versions. Please reopen if the issue still 
exists.


> max.message.size is not enforced for compressed messages
> 
>
> Key: KAFKA-275
> URL: https://issues.apache.org/jira/browse/KAFKA-275
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.7
>Reporter: Neha Narkhede
>
> The max.message.size check is not performed for compressed messages, but only 
> for each message that forms a compressed message. Due to this, even if the 
> max.message.size is set to 1MB, the producer can technically send n 1MB 
> messages as one compressed message. This can cause memory issues on the 
> server as well as deserialization issues on the consumer. The consumer's 
> fetch size has to be > max.message.size in order to be able to read data. If 
> one message is larger than the fetch.size, the consumer will throw an 
> exception and cannot proceed until the fetch.size is increased. 
> Due to this bug, even if the fetch.size > max.message.size, the consumer can 
> still get stuck on a message that is larger than max.message.size.





[jira] [Resolved] (KAFKA-359) Add message constructor that takes payload as a byte buffer

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-359.
-
Resolution: Fixed

This has been fixed in newer Kafka versions.

> Add message constructor that takes payload as a byte buffer
> ---
>
> Key: KAFKA-359
> URL: https://issues.apache.org/jira/browse/KAFKA-359
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>
> Currently, if a ByteBuffer is passed into Message(), it treats the buffer as 
> the message's buffer (including magic byte, meta data, etc) rather than the 
> payload. If you wish to construct a Message and provide just the payload, you 
> have to use a byte array, which results in an extra copy if your payload data 
> is already in a byte buffer.
> For optimization, it would be nice to also provide a constructor like:
> this(payload: ByteBuffer, isPayload: Boolean)
> The existing this(buffer: ByteBuffer) constructor could then just be changed 
> to this(buffer, false).
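The distinction the request describes can be sketched as follows. This is illustrative only, not the real kafka.message.Message API: the framing layout (a placeholder magic byte and attributes byte) and the factory-method name are assumptions.

```java
import java.nio.ByteBuffer;

public class Message {
    private final ByteBuffer buffer;

    // Existing behavior: the buffer already contains the full serialized
    // message (magic byte, attributes, payload).
    Message(ByteBuffer buffer) {
        this.buffer = buffer;
    }

    // Proposed: build the framing around a payload-only buffer, avoiding
    // the intermediate byte[] copy the byte-array constructor forces.
    static Message fromPayload(ByteBuffer payload) {
        ByteBuffer full = ByteBuffer.allocate(2 + payload.remaining());
        full.put((byte) 1);            // magic byte (placeholder value)
        full.put((byte) 0);            // attributes (no compression)
        full.put(payload.duplicate()); // bulk put, no byte[] copy
        full.flip();
        return new Message(full);
    }

    // View of the payload portion, skipping the 2-byte header.
    ByteBuffer payload() {
        ByteBuffer p = buffer.duplicate();
        p.position(2);
        return p.slice();
    }
}
```

A boolean flag on the existing constructor, as proposed, would achieve the same thing; a separate factory method just makes the two meanings harder to confuse at call sites.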



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-357) Refactor zookeeper code in KafkaZookeeper into reusable components

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-357.
-
Resolution: Duplicate

ZooKeeper-related code is being refactored in KAFKA-5027/KAFKA-5501

> Refactor zookeeper code in KafkaZookeeper into reusable components 
> ---
>
> Key: KAFKA-357
> URL: https://issues.apache.org/jira/browse/KAFKA-357
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Neha Narkhede
>
> Currently, we have put a lot of ZooKeeper code in KafkaZookeeper. This 
> includes leader election, ISR maintenance, etc. However, it would be good to 
> wrap related code into separate components that make logical sense. A good 
> example of this is the ZKQueue data structure. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2218) reassignment tool needs to parse and validate the json

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2218.
--
Resolution: Duplicate

 A PR is available for KAFKA-4914.

> reassignment tool needs to parse and validate the json
> --
>
> Key: KAFKA-2218
> URL: https://issues.apache.org/jira/browse/KAFKA-2218
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Priority: Critical
>
> Ran into a production issue with broker.id being set to a string instead of 
> an integer; the controller had nothing in the log and stayed stuck. 
> Eventually we saw this in the log of the brokers:
> [2015-05-23 15:41:05,863] 67396362 [ZkClient-EventThread-14-ERROR 
> org.I0Itec.zkclient.ZkEventThread - Error handling event ZkEvent[Data of 
> /admin/reassign_partitions changed sent to 
> kafka.controller.PartitionsReassignedListener@78c6aab8]
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer
>  at scala.runtime.BoxesRunTime.unboxToInt(Unknown Source)
>  at 
> kafka.controller.KafkaController$$anonfun$4.apply(KafkaController.scala:579)
> We then had to delete the znode from ZooKeeper (/admin/reassign_partitions), 
> fix the JSON, and try it again.
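The kind of up-front validation the tool was missing can be sketched like this (illustrative only; the real tool parses a full JSON document rather than individual fields): reject non-integer broker ids before anything is written to /admin/reassign_partitions, instead of letting the controller hit a ClassCastException later.

```java
public class ReassignmentCheck {
    // Parse a replica/broker id from the reassignment JSON, failing fast
    // with a clear message instead of a ClassCastException in the controller.
    static int parseBrokerId(String raw) {
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                    "broker id must be an integer, got: " + raw);
        }
    }
}
```

Failing in the tool with an actionable error message also avoids the manual znode cleanup described above.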



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5751) Kafka cannot start; corrupted index file(s)

2017-08-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16133441#comment-16133441
 ] 

Manikumar commented on KAFKA-5751:
--

Duplicate of KAFKA-5747

> Kafka cannot start; corrupted index file(s)
> ---
>
> Key: KAFKA-5751
> URL: https://issues.apache.org/jira/browse/KAFKA-5751
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.11.0.0
> Environment: Linux (RedHat 7)
>Reporter: Martin M
>Priority: Critical
>
> A system was running Kafka 0.11.0 and some applications that produce and 
> consume events.
> During the runtime, a power outage was experienced. Upon restart, Kafka did 
> not recover.
> Logs show repeatedly the messages below:
> *server.log*
> {noformat}
> [2017-08-15 15:02:26,374] FATAL [Kafka Server 1001], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
>   at 
> kafka.log.ProducerStateManager$.readSnapshot(ProducerStateManager.scala:289)
>   at 
> kafka.log.ProducerStateManager.loadFromSnapshot(ProducerStateManager.scala:440)
>   at 
> kafka.log.ProducerStateManager.truncateAndReload(ProducerStateManager.scala:499)
>   at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:327)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:314)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at kafka.log.Log.loadSegmentFiles(Log.scala:272)
>   at kafka.log.Log.loadSegments(Log.scala:376)
>   at kafka.log.Log.<init>(Log.scala:179)
>   at kafka.log.Log$.apply(Log.scala:1580)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> *kafkaServer.out*
> {noformat}
> [2017-08-15 16:03:50,927] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index)
>  has non-zero size but the last offset is 0 which is no larger than the base 
> offset 0.}. deleting 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.timeindex,
>  
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index,
>  and 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.txnindex
>  and rebuilding index... (kafka.log.Log)
> [2017-08-15 16:03:50,931] INFO [Kafka Server 1001], shutting down 
> (kafka.server.KafkaServer)
> [2017-08-15 16:03:50,932] INFO Recovering unflushed segment 0 in log 
> session-manager.revoke_token_topic-7. (kafka.log.Log)
> [2017-08-15 16:03:50,935] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-08-15 16:03:50,936] INFO Loading producer state from offset 0 for 
> partition session-manager.revoke_token_topic-7 with message format version 2 
> (kafka.log.Log)
> [2017-08-15 16:03:50,937] INFO Completed load of log 
> session-manager.revoke_token_topic-7 with 1 log segments, log start offset 0 
> and log end offset 0 in 10 ms (kafka.log.Log)
> [2017-08-15 16:03:50,938] INFO Session: 0x1000f772d26063b closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-08-15 16:03:50,938] INFO EventThread shut down for session: 
> 0x1000f772d26063b (org.apache.zookeeper.ClientCnxn)
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5750) Elevate log messages for denials to WARN in SimpleAclAuthorizer class

2017-08-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5750:


Assignee: Manikumar

> Elevate log messages for denials to WARN in SimpleAclAuthorizer class
> -
>
> Key: KAFKA-5750
> URL: https://issues.apache.org/jira/browse/KAFKA-5750
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Phillip Walker
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Currently, the authorizer logs all messages at DEBUG level and logs every 
> single authorization attempt, which can greatly decrease cluster performance, 
> especially when Mirrormaker also produces to that cluster. Many InfoSec 
> requirements, though, require that authorization denials be logged. The 
> proposed solution is to elevate any denial in SimpleAclAuthorizer and any 
> other relevant class to WARN while leaving approvals at their current 
> logging levels.
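The proposed behavior amounts to choosing the log level from the authorization outcome. A self-contained sketch using java.util.logging (SimpleAclAuthorizer actually uses log4j; the names here are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class AuthLogging {
    static final Logger LOG = Logger.getLogger("kafka.authorizer.logger");

    // Denials are elevated to WARNING; approvals stay at a debug-level
    // severity so high-volume allowed traffic doesn't flood the logs.
    static Level levelFor(boolean allowed) {
        return allowed ? Level.FINE : Level.WARNING;
    }

    static void logAuthResult(boolean allowed, String principal, String operation) {
        LOG.log(levelFor(allowed),
                principal + (allowed ? " is Allowed " : " is Denied ") + operation);
    }
}
```

This satisfies the InfoSec requirement (denials always visible at the default WARN threshold) without paying the cost of logging every successful authorization.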



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5737) KafkaAdminClient thread should be daemon

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5737.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   0.11.0.1

> KafkaAdminClient thread should be daemon
> 
>
> Key: KAFKA-5737
> URL: https://issues.apache.org/jira/browse/KAFKA-5737
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1, 1.0.0
>
>
> The admin client thread should be daemon, for consistency with the consumer 
> and producer threads.
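The fix boils down to marking the client's background thread as a daemon so it cannot keep the JVM alive on its own. A minimal illustration (KafkaAdminClient uses a similar helper internally; this name is illustrative):

```java
public class DaemonThreads {
    // Create a named daemon thread: the JVM may exit even while this
    // thread is still running, matching consumer/producer behavior.
    static Thread newDaemonThread(String name, Runnable r) {
        Thread t = new Thread(r, name);
        t.setDaemon(true);
        return t;
    }
}
```

Without setDaemon(true), an application that forgets to call close() on the client would hang at shutdown waiting for the network thread.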



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Manikumar  (was: Colin P. McCabe)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run nonexistent program, and 
> running a program which returns an error.
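The two cases being tested can be sketched in plain Java (this is not the actual ShellTest/JUnit code): attempting to start a nonexistent binary fails with an IOException, while an existing binary can run and exit nonzero. The use of the Unix `true`/`false` binaries below is an assumption about the test environment.

```java
import java.io.IOException;

public class ShellChecks {
    // A nonexistent program fails at start() with an IOException.
    static boolean startFails(String... cmd) {
        try {
            new ProcessBuilder(cmd).start();
            return false;
        } catch (IOException e) {
            return true; // program not found
        }
    }

    // An existing program runs to completion; its exit code signals errors.
    // Returns -1 if the process cannot be started or the wait is interrupted.
    static int exitCode(String... cmd) {
        try {
            return new ProcessBuilder(cmd).start().waitFor();
        } catch (IOException | InterruptedException e) {
            return -1;
        }
    }
}
```

These are exactly the two failure modes a Shell wrapper needs distinct handling for: "could not launch" versus "launched but reported an error".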



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134008#comment-16134008
 ] 

Manikumar commented on KAFKA-5744:
--

ShellTest.testRunProgramWithErrorReturn is failing on my machine.
cc [~cmccabe] 

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.kafka.common.utils.ShellTest.testRunProgramWithErrorReturn(ShellTest.java:70)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run nonexistent program, and 
> running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Colin P. McCabe  (was: Manikumar)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run nonexistent program, and 
> running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3322) recurring errors

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3322.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists.


> recurring errors
> 
>
> Key: KAFKA-3322
> URL: https://issues.apache.org/jira/browse/KAFKA-3322
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: kafka0.9.0 and zookeeper 3.4.6
>Reporter: jackie
>
> We're getting hundreds of these errors with Kafka 0.8, and topics become 
> unavailable after running for a few days. It looks like 
> https://issues.apache.org/jira/browse/KAFKA-1314



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3327) Warning from kafka mirror maker about ssl properties not valid

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3327.
--
Resolution: Cannot Reproduce

This is most likely a configuration issue. Please reopen if you think the issue still exists.


> Warning from kafka mirror maker about ssl properties not valid
> --
>
> Key: KAFKA-3327
> URL: https://issues.apache.org/jira/browse/KAFKA-3327
> Project: Kafka
>  Issue Type: Test
>  Components: config
>Affects Versions: 0.9.0.1
> Environment: CentOS release 6.5
>Reporter: Munir Khan
>Priority: Minor
>  Labels: kafka, mirror-maker, ssl
>
> I am trying to run MirrorMaker over SSL. I have configured my broker 
> following the procedure described in 
> http://kafka.apache.org/documentation.html#security_overview 
> I get the following warning when I start the mirror maker:
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# bin/kafka-run-class.sh 
> kafka.tools.MirrorMaker --consumer.config 
> config/datapush-consumer-ssl.properties --producer.config 
> config/datapush-producer-ssl.properties --num.streams 2 --whitelist test1&
> [1] 4701
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# [2016-03-03 10:24:35,348] WARN 
> block.on.buffer.full config is deprecated and will be removed soon. Please 
> use max.block.ms (org.apache.kafka.clients.producer.KafkaProducer)
> [2016-03-03 10:24:35,523] WARN The configuration producer.type = sync was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration ssl.keypassword = test1234 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration compression.codec = none was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration serializer.class = 
> kafka.serializer.DefaultEncoder was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,617] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,753] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> However, the mirror maker is able to mirror data. If I remove the 
> configurations related to the warning messages from my producer config, the 
> mirror maker does not work, so it seems that despite the warnings shown 
> above the ssl configuration properties are used somehow. 
> My question is: are those warnings harmless in this context?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3413) Load Error Message should be a Warning

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3413.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists.


> Load Error Message should be a Warning
> --
>
> Key: KAFKA-3413
> URL: https://issues.apache.org/jira/browse/KAFKA-3413
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, offset manager
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Scott Reynolds
>Assignee: Neha Narkhede
>
> There is an Error message from AbstractReplicaFetcherThread that isn't really 
> an error. Each implementation of this thread can log out when an error or 
> fatal error occurs. ReplicaFetcherThread has warn, error and fatal in the 
> handleOffsetOutOfRange method, while ConsumerFetcherThread seems to reset 
> itself without logging an error. The reset message shouldn't be at error 
> level, as it doesn't indicate any real error.
> This patch makes it a warning: 
> https://github.com/apache/kafka/compare/trunk...SupermanScott:offset-reset-to-warn?diff=split&name=offset-reset-to-warn



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4078) VIP for Kafka doesn't work

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4078.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> VIP for Kafka  doesn't work 
> 
>
> Key: KAFKA-4078
> URL: https://issues.apache.org/jira/browse/KAFKA-4078
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: chao
>
> We created a VIP for chao007kfk002.chao007.com:9092, 
> chao007kfk003.chao007.com:9092 and chao007kfk001.chao007.com:9092.
> But we found that the Kafka client API has an issue: a client metadata 
> update returns all three brokers, so it creates three connections, one each 
> to 001, 002 and 003.
> When we change the VIP to chao008kfk002.chao008.com:9092, 
> chao008kfk003.chao008.com:9092 and chao008kfk001.chao008.com:9092, 
> it still produces data to the 007 hosts.
> The following is the log information:
> sasl.kerberos.ticket.renew.window.factor = 0.8
> bootstrap.servers = [kfk.chao.com:9092]
> client.id = 
> 2016-08-23 07:00:48,451:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:623) - Initialize connection to node -1 for sending 
> metadata request
> 2016-08-23 07:00:48,452:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:487) - Initiating connection to node -1 at 
> kfk.chao.com:9092.
> 2016-08-23 07:00:48,463:DEBUG kafka-producer-network-thread | producer-1 
> (Metrics.java:201) - Added sensor with name node--1.bytes-sent
>   
>   
> 2016-08-23 07:00:48,489:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:619) - Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1},
>  body={topics=[chao_vip]}), isInitiatedByNetworkClient, 
> createdTimeMs=1471935648465, sendTimeMs=0) to node -1
> 2016-08-23 07:00:48,512:DEBUG kafka-producer-network-thread | producer-1 
> (Metadata.java:172) - Updated cluster metadata version 2 to Cluster(nodes = 
> [Node(1, chao007kfk002.chao007.com, 9092), Node(2, chao007kfk003.chao007.com, 
> 9092), Node(0, chao007kfk001.chao007.com, 9092)], partitions = 
> [Partition(topic = chao_vip, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = chao_vip, partition = 3, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = chao_vip, partition = 2, leader = 2, 
> replicas = [2,], isr = [2,], Partition(topic = chao_vip, partition = 1, 
> leader = 1, replicas = [1,], isr = [1,], Partition(topic = chao_vip, 
> partition = 4, leader = 1, replicas = [1,], isr = [1,]])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3953) start kafka fail

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3953.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> start kafka fail
> 
>
> Key: KAFKA-3953
> URL: https://issues.apache.org/jira/browse/KAFKA-3953
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.2
> Environment: Linux host-172-28-0-3 3.10.0-327.18.2.el7.x86_64 #1 SMP 
> Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: ffh
>
> Kafka fails to start. Error message:
> [2016-07-12 03:57:32,717] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> [2016-07-12 03:57:33,124] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> config:
> # Generated by Apache Ambari. Tue Jul 12 03:18:02 2016
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> auto.create.topics.enable=true
> auto.leader.rebalance.enable=true
> broker.id=0
> compression.type=producer
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=3
> controlled.shutdown.retry.backoff.ms=5000
> controller.message.queue.size=10
> controller.socket.timeout.ms=3
> default.replication.factor=1
> delete.topic.enable=false
> external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
> external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
> fetch.purgatory.purge.interval.requests=1
> kafka.ganglia.metrics.group=kafka
> kafka.ganglia.metrics.host=localhost
> kafka.ganglia.metrics.port=8671
> kafka.ganglia.metrics.reporter.enabled=true
> kafka.metrics.reporters=
> kafka.timeline.metrics.host=
> kafka.timeline.metrics.maxRowCacheSize=1
> kafka.timeline.metrics.port=
> kafka.timeline.metrics.reporter.enabled=true
> kafka.timeline.metrics.reporter.sendInterval=5900
> leader.imbalance.check.interval.seconds=300
> leader.imbalance.per.broker.percentage=10
> listeners=PLAINTEXT://host-172-28-0-3:6667
> log.cleanup.interval.mins=10
> log.dirs=/kafka-logs
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.bytes=-1
> log.retention.hours=168
> log.roll.hours=168
> log.segment.bytes=1073741824
> message.max.bytes=100
> min.insync.replicas=1
> num.io.threads=8
> num.network.threads=3
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> num.replica.fetchers=1
> offset.metadata.max.bytes=4096
> offsets.commit.required.acks=-1
> offsets.commit.timeout.ms=5000
> offsets.load.buffer.size=5242880
> offsets.retention.check.interval.ms=60
> offsets.retention.minutes=8640
> offsets.topic.compression.codec=0
> offsets.topic.num.partitions=50
> offsets.topic.replication.factor=3
> offsets.topic.segment.bytes=104857600
> principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
> producer.purgatory.purge.interval.requests=1
> queued.max.requests=500
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=1
> replica.fetch.wait.max.ms=500
> replica.high.watermark.c

[jira] [Resolved] (KAFKA-3951) kafka.common.KafkaStorageException: I/O exception in append to log

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3951.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> kafka.common.KafkaStorageException: I/O exception in append to log
> --
>
> Key: KAFKA-3951
> URL: https://issues.apache.org/jira/browse/KAFKA-3951
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.1
>Reporter: wanzi.zhao
> Attachments: server-1.properties, server.properties
>
>
> I have two brokers on the same server using two ports, 10.45.33.195:9092 and 
> 10.45.33.195:9093. They use the log directories "log.dirs=/tmp/kafka-logs" and 
> "log.dirs=/tmp/kafka-logs-1". When I shut down my consumer application (Java 
> API), change its groupId and restart it, my Kafka brokers stop working. This 
> is the stack trace I get:
> [2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading 
> offsets and group metadata from [__consumer_offsets,0] 
> (kafka.coordinator.GroupMetadataManager)
> [2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to 
> unrecoverable I/O error while handling produce request:  
> (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-38'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
> at 
> kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs/__consumer_offsets-38/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1495) Kafka Example SimpleConsumerDemo

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1495.
--
Resolution: Won't Fix

> Kafka Example SimpleConsumerDemo 
> -
>
> Key: KAFKA-1495
> URL: https://issues.apache.org/jira/browse/KAFKA-1495
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
> Environment: Mac OS
>Reporter: darion yaphet
>Assignee: Jun Rao
>
> The official SimpleConsumerDemo under 
> kafka-0.8.1.1-src/examples/src/main/java/kafka/examples, running on my 
> machine: I found that /tmp/kafka-logs has two directories, topic2-0 and 
> topic2-1, and one of them is empty.
> ➜  kafka-logs  ls -lF  topic2-0  topic2-1
> topic2-0:
> total 21752
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel651109  6 17 18:44 .log
> topic2-1:
> total 20480
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel 0  6 17 17:34 .log 
> Is this a bug, or is there something that should be configured in the source 
> code? Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2053) Make initZk a protected function

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2053.
--
Resolution: Won't Fix

Please reopen if you think the requirement still exists.

> Make initZk a protected function
> 
>
> Key: KAFKA-2053
> URL: https://issues.apache.org/jira/browse/KAFKA-2053
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Christian Kampka
>Priority: Minor
> Attachments: make-initzk-protected
>
>
> In our environment, we have established an external procedure to notify 
> clients of changes in the ZooKeeper cluster configuration, especially the 
> appearance and disappearance of nodes. It has also become quite common to run 
> Kafka as an embedded service (especially in tests).
> When doing so, it would make things easier if it were possible to manipulate 
> the creation of the ZooKeeper client, to supply Kafka with a specialized 
> ZooKeeper client that is adjusted to our needs but API-compatible with 
> ZkClient.
> Therefore, I would like to propose making the initZk method protected so we 
> can simply override it for client creation.





[jira] [Resolved] (KAFKA-2093) Remove logging error if we throw exception

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2093.
--
Resolution: Won't Fix

The Scala producer is deprecated. Please reopen if you think the issue still exists.


> Remove logging error if we throw exception
> --
>
> Key: KAFKA-2093
> URL: https://issues.apache.org/jira/browse/KAFKA-2093
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ivan Balashov
>Priority: Trivial
>
> On failure, the Kafka producer logs an error AND throws an exception. This 
> poses problems, since the client application cannot flexibly control whether 
> a particular exception should be logged, and logging becomes an 
> all-or-nothing choice for a given logger.
> We should remove the error logging wherever we decide to throw the exception.
> Some examples of this:
> kafka.client.ClientUtils$:89
> kafka.producer.SyncProducer:103
> If no one objects, I can search around for other cases of logging plus 
> throwing which should also be fixed.
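The log-and-throw anti-pattern described here can be contrasted with a minimal sketch of the proposed behavior: attach context to the exception and throw once, leaving the logging decision to the caller. The class and method names below are illustrative, not Kafka's actual code.

```java
// Hypothetical sketch: throw once with context; do not also log the error.
public final class SendFailure {
    // Before (anti-pattern): log.error("send failed"); followed by a throw.
    // After: a single throw carrying the context the caller may log.
    public static void completeSend(boolean ok, String broker) {
        if (!ok) {
            throw new IllegalStateException("Failed to send request to " + broker);
        }
    }

    public static void main(String[] args) {
        try {
            completeSend(false, "broker-1:9092");
        } catch (IllegalStateException e) {
            // The caller now controls the logging policy for this failure.
            System.out.println("caller logged: " + e.getMessage());
        }
    }
}
```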





[jira] [Resolved] (KAFKA-2296) Not able to delete topic on latest kafka

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2296.
--
Resolution: Duplicate

> Not able to delete topic on latest kafka
> 
>
> Key: KAFKA-2296
> URL: https://issues.apache.org/jira/browse/KAFKA-2296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Andrew M
>
> Was able to reproduce [inability to delete 
> topic|https://issues.apache.org/jira/browse/KAFKA-1397?focusedCommentId=14491442&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14491442]
>  on running cluster with kafka 0.8.2.1.
> The cluster consists of two c3.xlarge AWS instances with sufficient storage 
> attached. All communication between nodes goes through an AWS VPC.
> Some warnings from the logs:
> {noformat}[Controller-1234-to-broker-4321-send-thread], Controller 1234 epoch 
> 20 fails to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:1234;ControllerEpoch:20;CorrelationId:24047;ClientId:id_1234-host_1.2.3.4-port_6667;AliveBrokers:id:1234,host:1.2.3.4,port:6667,id:4321,host:4.3.2.1,port:6667;PartitionState:[topic_name,45]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,27]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,17]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,49]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,7]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,26]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,62]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,18]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,36]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,29]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,53]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,52]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,2]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,12]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,33]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,14]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,63]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,30]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,6]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,28]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,38]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,24]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,31]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,4]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,20]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,54]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,432

[jira] [Resolved] (KAFKA-2231) Deleting a topic fails

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2231.
--
Resolution: Cannot Reproduce

Topic deletion is more stable in the latest releases. Please reopen if you 
think the issue still exists.

> Deleting a topic fails
> --
>
> Key: KAFKA-2231
> URL: https://issues.apache.org/jira/browse/KAFKA-2231
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: Windows 8.1
>Reporter: James G. Haberly
>Priority: Minor
>
> delete.topic.enable=true is in config\server.properties.
> Using --list shows the topic "marked for deletion".
> Stopping and restarting kafka and zookeeper does not delete the topic; it 
> remains "marked for deletion".
> Trying to recreate the topic fails with "Topic XXX already exists".





[jira] [Resolved] (KAFKA-2289) KafkaProducer logs erroneous warning on startup

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2289.
--
Resolution: Fixed

This has been fixed.

> KafkaProducer logs erroneous warning on startup
> ---
>
> Key: KAFKA-2289
> URL: https://issues.apache.org/jira/browse/KAFKA-2289
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Henning Schmiedehausen
>Priority: Trivial
>
> When creating a new KafkaProducer using the 
> KafkaProducer(KafkaConfig, Serializer, Serializer) constructor, Kafka 
> will log the following lines, which are harmless but still appear at WARN level:
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> value.serializer = class  was supplied but isn't a known config.
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> key.serializer = class  was supplied but isn't a known config.





[jira] [Assigned] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5714:


Assignee: (was: Manikumar)

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}
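A whitespace-tolerant parser along the requested lines could split principals on a semicolon and trim each entry, leaving commas (and any spaces) inside a single DN intact. This is a hypothetical sketch: the class name and the choice of ';' as the separator are assumptions, not Kafka's actual parser.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of a whitespace-tolerant super.users parser: principals
// are separated by ';' and trimmed, so commas inside one DN may carry spaces.
public final class SuperUsers {
    public static List<String> parse(String value) {
        return Arrays.stream(value.split(";"))
                .map(String::trim)            // tolerate spaces around each principal
                .filter(s -> !s.isEmpty())    // ignore a trailing ';'
                .collect(Collectors.toList());
    }
}
```

With this scheme, `super.users=User:CN=Unknown, OU=Unknown, O=Unknown; User:admin` would yield two principals with the whitespace inside each DN preserved.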





[jira] [Resolved] (KAFKA-5751) Kafka cannot start; corrupted index file(s)

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5751.
--
Resolution: Duplicate

> Kafka cannot start; corrupted index file(s)
> ---
>
> Key: KAFKA-5751
> URL: https://issues.apache.org/jira/browse/KAFKA-5751
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.11.0.0
> Environment: Linux (RedHat 7)
>Reporter: Martin M
>Priority: Critical
>
> A system was running Kafka 0.11.0 and some applications that produce and 
> consume events.
> During runtime, a power outage occurred. Upon restart, Kafka did not recover.
> The logs repeatedly show the messages below:
> *server.log*
> {noformat}
> [2017-08-15 15:02:26,374] FATAL [Kafka Server 1001], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
>   at 
> kafka.log.ProducerStateManager$.readSnapshot(ProducerStateManager.scala:289)
>   at 
> kafka.log.ProducerStateManager.loadFromSnapshot(ProducerStateManager.scala:440)
>   at 
> kafka.log.ProducerStateManager.truncateAndReload(ProducerStateManager.scala:499)
>   at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:327)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:314)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at kafka.log.Log.loadSegmentFiles(Log.scala:272)
>   at kafka.log.Log.loadSegments(Log.scala:376)
>   at kafka.log.Log.(Log.scala:179)
>   at kafka.log.Log$.apply(Log.scala:1580)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> *kafkaServer.out*
> {noformat}
> [2017-08-15 16:03:50,927] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index)
>  has non-zero size but the last offset is 0 which is no larger than the base 
> offset 0.}. deleting 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.timeindex,
>  
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index,
>  and 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.txnindex
>  and rebuilding index... (kafka.log.Log)
> [2017-08-15 16:03:50,931] INFO [Kafka Server 1001], shutting down 
> (kafka.server.KafkaServer)
> [2017-08-15 16:03:50,932] INFO Recovering unflushed segment 0 in log 
> session-manager.revoke_token_topic-7. (kafka.log.Log)
> [2017-08-15 16:03:50,935] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-08-15 16:03:50,936] INFO Loading producer state from offset 0 for 
> partition session-manager.revoke_token_topic-7 with message format version 2 
> (kafka.log.Log)
> [2017-08-15 16:03:50,937] INFO Completed load of log 
> session-manager.revoke_token_topic-7 with 1 log segments, log start offset 0 
> and log end offset 0 in 10 ms (kafka.log.Log)
> [2017-08-15 16:03:50,938] INFO Session: 0x1000f772d26063b closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-08-15 16:03:50,938] INFO EventThread shut down for session: 
> 0x1000f772d26063b (org.apache.zookeeper.ClientCnxn)
> {noformat}
>  





[jira] [Assigned] (KAFKA-5686) Documentation inconsistency on the "Compression"

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5686:


Assignee: Manikumar

> Documentation inconsistency on the "Compression"
> 
>
> Key: KAFKA-5686
> URL: https://issues.apache.org/jira/browse/KAFKA-5686
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.11.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>Assignee: Manikumar
>Priority: Minor
>
> At the page:
> https://kafka.apache.org/documentation/
> There is a sentence:
> {{Kafka supports GZIP, Snappy and LZ4 compression protocols. More details on 
> compression can be found here.}}
> In particular, the link under the word *here* describes very old compression 
> settings, which no longer apply as of version 0.11.x.y.
> Java API:
> It would also be nice to clearly state whether *compression.type* accepts only 
> a case-sensitive String value, or whether it is recommended to use e.g. 
> {{CompressionType.GZIP.name}} from the Java API.





[jira] [Assigned] (KAFKA-4856) Calling KafkaProducer.close() from multiple threads may cause spurious error

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4856:


Assignee: Manikumar

> Calling KafkaProducer.close() from multiple threads may cause spurious error
> 
>
> Key: KAFKA-4856
> URL: https://issues.apache.org/jira/browse/KAFKA-4856
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.10.2.0
>Reporter: Xavier Léauté
>Assignee: Manikumar
>Priority: Minor
>
> Calling {{KafkaProducer.close()}} from multiple threads simultaneously may 
> cause the following harmless error message to be logged. There appears to be 
> a race-condition in {{AppInfoParser.unregisterAppInfo}} that we don't guard 
> against. 
> {noformat}
> WARN Error unregistering AppInfo mbean 
> (org.apache.kafka.common.utils.AppInfoParser:71)
> javax.management.InstanceNotFoundException: 
> kafka.producer:type=app-info,id=
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
> at 
> org.apache.kafka.common.utils.AppInfoParser.unregisterAppInfo(AppInfoParser.java:69)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:735)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:686)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:665)
> {noformat}
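One way to guard a one-time cleanup such as the MBean unregistration above against concurrent close() calls is a compare-and-set flag, so only the first caller performs the cleanup. A minimal sketch with illustrative names standing in for the MBean call; this is not Kafka's AppInfoParser.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a compare-and-set flag makes the cleanup idempotent,
// so a second concurrent close() never re-attempts the unregistration and
// cannot hit InstanceNotFoundException.
public final class OneTimeCleanup {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private final AtomicInteger unregisterCalls = new AtomicInteger();

    public void close() {
        // Only the first thread to flip the flag runs the cleanup body.
        if (closed.compareAndSet(false, true)) {
            unregisterCalls.incrementAndGet(); // stands in for unregisterMBean(...)
        }
    }

    public int unregisterCalls() {
        return unregisterCalls.get();
    }
}
```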





[jira] [Assigned] (KAFKA-2254) The shell script should be optimized , even kafka-run-class.sh has a syntax error.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-2254:


Assignee: Manikumar

> The shell script should be optimized , even kafka-run-class.sh has a syntax 
> error.
> --
>
> Key: KAFKA-2254
> URL: https://issues.apache.org/jira/browse/KAFKA-2254
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.1
> Environment: linux
>Reporter: Bo Wang
>Assignee: Manikumar
>  Labels: client-script, kafka-run-class.sh, shell-script
> Attachments: kafka-shell-script.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> kafka-run-class.sh line 128 has a syntax error (missing a space):
> 127 -loggc)
> 128   if [ -z "$KAFKA_GC_LOG_OPTS"] ; then
> 129     GC_LOG_ENABLED="true"
> 130   fi
> Running ShellCheck over the shell scripts also reports several errors, 
> warnings, and notes:
> https://github.com/koalaman/shellcheck/wiki/SC2068
> https://github.com/koalaman/shellcheck/wiki/Sc2046
> https://github.com/koalaman/shellcheck/wiki/Sc2086
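For reference, the quoted fragment with the missing space before "]" restored might look like this (a minimal standalone reproduction, not the full kafka-run-class.sh):

```shell
# Corrected check: `[` is a command, so its final `]` argument needs a
# preceding space. Without it, the shell sees the single token `"$VAR"]`.
KAFKA_GC_LOG_OPTS=""
GC_LOG_ENABLED="false"
if [ -z "$KAFKA_GC_LOG_OPTS" ] ; then
  GC_LOG_ENABLED="true"
fi
echo "$GC_LOG_ENABLED"   # prints "true" when no GC options are set
```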





[jira] [Resolved] (KAFKA-5401) Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5401.
--
Resolution: Duplicate

> Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe
> -
>
> Key: KAFKA-5401
> URL: https://issues.apache.org/jira/browse/KAFKA-5401
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: SLES 11 , Kakaf Over TLS 
>Reporter: PaVan
>  Labels: security
>
> SLES 11 
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
>  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:194)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:148)
>  at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
>  at org.apache.kafka.common.network.Selector.close(Selector.java:442)
>  at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
>  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>  at java.lang.Thread.run(Thread.java:745)





[jira] [Assigned] (KAFKA-5547) Return topic authorization failed if no topic describe access

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5547:


Assignee: Manikumar

> Return topic authorization failed if no topic describe access
> -
>
> Key: KAFKA-5547
> URL: https://issues.apache.org/jira/browse/KAFKA-5547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>  Labels: security, usability
> Fix For: 1.0.0
>
>
> We previously made a change to several of the request APIs to return 
> UNKNOWN_TOPIC_OR_PARTITION if the principal does not have Describe access to 
> the topic. The thought was to avoid leaking information about which topics 
> exist. The problem with this is that a client which sees this error will just 
> keep retrying because it is usually treated as retriable. It seems, however, 
> that we could return TOPIC_AUTHORIZATION_FAILED instead and still avoid 
> leaking information as long as we ensure that the Describe authorization 
> check comes before the topic existence check. This would avoid the ambiguity 
> on the client.
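The proposed ordering can be sketched as follows: perform the Describe authorization check before the existence check, so an unauthorized client receives TOPIC_AUTHORIZATION_FAILED whether or not the topic exists and learns nothing about topic existence. The class and sets here are illustrative, not Kafka's request-handling code.

```java
import java.util.Set;

// Hypothetical sketch of the proposed check ordering; the enum values mirror
// the error codes discussed above, but the class itself is illustrative.
public final class TopicCheck {
    public enum Error { NONE, TOPIC_AUTHORIZATION_FAILED, UNKNOWN_TOPIC_OR_PARTITION }

    public static Error check(String topic, Set<String> describeAuthorized, Set<String> existingTopics) {
        // Authorization first: an unauthorized client gets the same error
        // whether or not the topic exists, so existence is not leaked.
        if (!describeAuthorized.contains(topic)) {
            return Error.TOPIC_AUTHORIZATION_FAILED;
        }
        if (!existingTopics.contains(topic)) {
            return Error.UNKNOWN_TOPIC_OR_PARTITION;
        }
        return Error.NONE;
    }
}
```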





[jira] [Assigned] (KAFKA-4840) There are are still cases where producer buffer pool will not remove waiters.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4840:


   Resolution: Fixed
 Assignee: Sean McCauliff
Fix Version/s: 0.11.0.0

> There are are still cases where producer buffer pool will not remove waiters.
> -
>
> Key: KAFKA-4840
> URL: https://issues.apache.org/jira/browse/KAFKA-4840
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 0.11.0.0
>
>
> There are several problems dealing with errors in  BufferPool.allocate(int 
> size, long maxTimeToBlockMs):
> * The accumulated number of bytes are not put back into the available pool 
> when an exception happens and a thread is waiting for bytes to become 
> available.  This will cause the capacity of the buffer pool to decrease over 
> time any time a timeout is hit within this method.
> * If a Throwable other than InterruptedException is thrown out of await() for 
> some reason or if there is an exception thrown in the corresponding finally 
> block around the await(), for example if waitTime.record(.) throws an 
> exception, then the waiters are not removed from the waiters deque.
> * On timeout or other exception waiters could be signaled, but are not.  If 
> no other buffers are freed then the next waiting thread will also timeout and 
> so on.
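The common fix pattern for the cases listed above is to perform waiter removal (and any accounting rollback) in a finally block, so timeouts, interrupts, and unexpected Throwables cannot leave a stale entry behind. A minimal sketch under that assumption, not Kafka's actual BufferPool:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: the waiter is removed in a finally block, so a
// timeout, an interrupt, or any Throwable from await()/metrics code cannot
// leave a stale entry in the deque.
public final class WaiterPool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Deque<Condition> waiters = new ArrayDeque<>();

    /** Waits for bytes to be freed; returns false on timeout. */
    public boolean awaitBytes(long timeoutMs) throws InterruptedException {
        lock.lock();
        try {
            Condition me = lock.newCondition();
            waiters.addLast(me);
            try {
                return me.await(timeoutMs, TimeUnit.MILLISECONDS);
            } finally {
                waiters.remove(me); // runs on every exit path
            }
        } finally {
            lock.unlock();
        }
    }

    public int waiting() {
        lock.lock();
        try {
            return waiters.size();
        } finally {
            lock.unlock();
        }
    }
}
```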





[jira] [Resolved] (KAFKA-5239) Producer buffer pool allocates memory inside a lock.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5239.
--
   Resolution: Fixed
 Assignee: Sean McCauliff
Fix Version/s: 1.0.0

> Producer buffer pool allocates memory inside a lock.
> 
>
> Key: KAFKA-5239
> URL: https://issues.apache.org/jira/browse/KAFKA-5239
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 1.0.0
>
>
> KAFKA-4840 placed the ByteBuffer allocation inside the critical section.  
> Previously byte buffer allocation happened outside of the critical section.





[jira] [Commented] (KAFKA-5239) Producer buffer pool allocates memory inside a lock.

2017-08-22 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137163#comment-16137163
 ] 

Manikumar commented on KAFKA-5239:
--

[~ijuma]  Should this be in 0.11.0.1?  KAFKA-4840 was released in 0.11.0.0

> Producer buffer pool allocates memory inside a lock.
> 
>
> Key: KAFKA-5239
> URL: https://issues.apache.org/jira/browse/KAFKA-5239
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 1.0.0
>
>
> KAFKA-4840 placed the ByteBuffer allocation inside the critical section.  
> Previously byte buffer allocation happened outside of the critical section.





[jira] [Resolved] (KAFKA-4823) Creating Kafka Producer on application running on Java older version

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4823.
--
Resolution: Won't Fix

Kafka dropped support for Java 1.6 in the 0.9 release. You can try the REST 
Proxy or other-language client libraries. Please reopen if you think otherwise.

> Creating Kafka Producer on application running on Java older version
> 
>
> Key: KAFKA-4823
> URL: https://issues.apache.org/jira/browse/KAFKA-4823
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
> Environment: Application running on Java 1.6
>Reporter: live2code
>
> I have an application running on Java 1.6 which cannot be upgraded. This 
> application needs interfaces to post messages to a remote Kafka server (as a 
> producer) and to receive messages (as a consumer). The code runs fine from my 
> local environment, which is on Java 1.7, but the same producer and consumer 
> fail when executed within the application, with the error:
> Caused by: java.lang.UnsupportedClassVersionError: JVMCFRE003 bad major 
> version; class=org/apache/kafka/clients/producer/KafkaProducer, offset=6
> Is there some way I can still do it?





[jira] [Resolved] (KAFKA-3927) kafka broker config docs issue

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3927.
--
Resolution: Later

Yes, these changes were done in KAFKA-615. Please reopen if the issue still 
exists.


> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values are incorrect. How do these docs get 
> generated? It looks confusing.





[jira] [Resolved] (KAFKA-3800) java client can`t poll msg

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3800.
--
Resolution: Cannot Reproduce

 Please reopen if the issue still exists. 


> java client can`t poll msg
> --
>
> Key: KAFKA-3800
> URL: https://issues.apache.org/jira/browse/KAFKA-3800
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: java8,win7 64
>Reporter: frank
>Assignee: Neha Narkhede
>
> When I use a camel-case topic name (e.g. Test_4), the polled messages are 
> null. Why? All-lowercase topic names are fine. I tried Node.js, and 
> kafka-console-consumer.bat also works.





[jira] [Resolved] (KAFKA-3653) expose the queue size in ControllerChannelManager

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3653.
--
Resolution: Fixed

Fixed in KAFKA-5135/KIP-143

> expose the queue size in ControllerChannelManager
> -
>
> Key: KAFKA-3653
> URL: https://issues.apache.org/jira/browse/KAFKA-3653
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Gwen Shapira
>
> Currently, ControllerChannelManager maintains a queue per broker. If the 
> queue fills up, metadata propagation to the broker is delayed. It would be 
> useful to expose a metric on the size on the queue for monitoring.





[jira] [Commented] (KAFKA-3473) Add controller channel manager request queue time metric.

2017-08-22 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137281#comment-16137281
 ] 

Manikumar commented on KAFKA-3473:
--

@ijuma Is this covered in KAFKA-5135/KIP-143?

> Add controller channel manager request queue time metric.
> -
>
> Key: KAFKA-3473
> URL: https://issues.apache.org/jira/browse/KAFKA-3473
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
>
> Currently controller appends the requests to brokers into controller channel 
> manager queue during state transition. i.e. the state transition are 
> propagated asynchronously. We need to track the request queue time on the 
> controller side to see how long the state propagation is delayed after the 
> state transition finished on the controller.





[jira] [Resolved] (KAFKA-3139) JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3139.
--
Resolution: Fixed

Fixed in KAFKA-4252

> JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.
> ---
>
> Key: KAFKA-3139
> URL: https://issues.apache.org/jira/browse/KAFKA-3139
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> The docs say that there is a JMX metric 
> {noformat}
> kafka.server:type=ProducerRequestPurgatory,name=PurgatorySize
> {noformat}
> But that doesn't seem to work. Using jconsole to inspect our kafka broker, it 
> seems like the right metric is
> {noformat}
> kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce
> {noformat}
> And there are also variants of the above for Fetch, Heartbeat, and Rebalance.
> Is the fix to simply update the docs? From what I can see, the docs for this 
> don't seem auto-generated from code. If it is a simple doc fix, I would like 
> to take this JIRA.
> Btw, what is NumDelayedOperations, and how is it different from PurgatorySize?
> {noformat}
> kafka.server:type=DelayedOperationPurgatory,name=NumDelayedOperations,delayedOperation=Produce
> {noformat}
> And should I also update the docs for that?
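The old and new names quoted above differ only in their JMX ObjectName pattern, which can be inspected without a broker. A small sketch using only the JDK's javax.management (the MBean itself exists only inside a running broker JVM, so this just parses the names from the report):

```java
import javax.management.ObjectName;

public class PurgatoryMetricName {
    public static void main(String[] args) throws Exception {
        // Newer name from the report above.
        ObjectName produce = new ObjectName(
            "kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce");
        // The delayedOperation key property is what distinguishes the
        // Produce, Fetch, Heartbeat and Rebalance purgatory variants.
        System.out.println(produce.getDomain());                         // kafka.server
        System.out.println(produce.getKeyProperty("delayedOperation"));  // Produce
    }
}
```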





[jira] [Commented] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2017-08-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138257#comment-16138257
 ] 

Manikumar commented on KAFKA-5590:
--

The current TopicCommand uses ZooKeeper-based deletion. This does not involve 
any Kafka ACLs. In a Kerberos setup, only the broker user/principal can write 
to ZooKeeper, so you need the broker principal's credentials to make changes 
to ZK. 


> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>
> Hi:
> Recently I have been developing some applications for Kafka under Ranger. But 
> when I enable the Ranger Kafka plugin I cannot delete a Kafka topic completely, 
> even though 'delete.topic.enable=true' is set. And I find that when the Ranger 
> Kafka plugin is enabled, the operation must be authorized. How can I delete a 
> Kafka topic completely under Ranger? Thank you.





[jira] [Comment Edited] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2017-08-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138257#comment-16138257
 ] 

Manikumar edited comment on KAFKA-5590 at 8/23/17 11:56 AM:


The current TopicCommand uses ZooKeeper-based deletion. This does not involve 
any Kafka ACLs. In a Kerberos setup, only the broker user/principal can write 
to ZooKeeper, so you need the broker principal's credentials to make changes 
to ZK. 



was (Author: omkreddy):
Current TopicCommand uses Zookeeper based deletion.  This does not involve any 
Kafka Acls.  In Kerberos setup,  only broker user/principal can write to the 
ZK.  So, you need boker Pricipal credentials to make changes to ZK. 


> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>
> Hi:
> Recently I have been developing some applications for Kafka under Ranger. But 
> when I enable the Ranger Kafka plugin I cannot delete a Kafka topic completely, 
> even though 'delete.topic.enable=true' is set. And I find that when the Ranger 
> Kafka plugin is enabled, the operation must be authorized. How can I delete a 
> Kafka topic completely under Ranger? Thank you.





[jira] [Resolved] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4425.
--
Resolution: Not A Problem

Fixed as per [~Fristi]'s comments

> Topic created with CreateTopic command, does not list partitions in metadata
> 
>
> Key: KAFKA-4425
> URL: https://issues.apache.org/jira/browse/KAFKA-4425
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Mark de Jong
>
> With the release of Kafka 0.10.1.0 there are now commands to delete and create 
> topics via the TCP protocol (see KIP-4 and KAFKA-2945).
> I've implemented this myself in my own driver. 
> When I send the following command to the current controller 
> "TopicDescriptor(topic = topic, nrPartitions = Some(3), replicationFactor = 
> Some(1), replicaAssignment = Seq.empty, config = Map())"
> I'll get back a response with NoError
> The server says this on the command line:
> [2016-11-20 17:54:19,599] INFO Topic creation 
> {"version":1,"partitions":{"2":[1003],"1":[1001],"0":[1002]}} 
> (kafka.admin.AdminUtils$)
> [2016-11-20 17:54:19,765] INFO [ReplicaFetcherManager on broker 1001] Removed 
> fetcher for partitions test212880282004727-1 
> (kafka.server.ReplicaFetcherManager)
> [2016-11-20 17:54:19,789] INFO Completed load of log test212880282004727-1 
> with 1 log segments and log end offset 0 in 17 ms (kafka.log.Log)
> [2016-11-20 17:54:19,791] INFO Created log for partition 
> [test212880282004727,1] in /kafka/kafka-logs-842dcf19f587 with properties 
> {compression.type -> producer, message.format.version -> 0.10.1-IV2, 
> file.delete.delay.ms -> 6, max.message.bytes -> 112, 
> min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, 
> min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, 
> min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, 
> unclean.leader.election.enable -> true, retention.bytes -> -1, 
> delete.retention.ms -> 8640, cleanup.policy -> [delete], flush.ms -> 
> 9223372036854775807, segment.ms -> 60480, segment.bytes -> 1073741824, 
> retention.ms -> 60480, message.timestamp.difference.max.ms -> 
> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 
> 9223372036854775807}. (kafka.log.LogManager)
> However when I immediately fetch metadata (v2 call). I either get;
> - The topic entry, but with no partitions data in it.
> - The topic entry, but with the status NotLeaderForPartition
> Is this a bug, or am I missing something in my client?





[jira] [Resolved] (KAFKA-1497) Change producer load-balancing algorithm in MirrorMaker

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1497.
--
Resolution: Fixed

MirrorMaker now uses a single producer instance.

> Change producer load-balancing algorithm in MirrorMaker
> ---
>
> Key: KAFKA-1497
> URL: https://issues.apache.org/jira/browse/KAFKA-1497
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Ivan Kunz
>
> Currently the MirrorMaker uses the following way of spreading the load into 
> configured producers :
> val producerId = 
> Utils.abs(java.util.Arrays.hashCode(msgAndMetadata.key)) % producers.size()
> This way, if the producer side of MM uses a partitioner other than the default 
> "partitioner.class", messages within the same partition can get re-ordered. 
> Also, hashCode does not produce the same results on different machines 
> (verified by testing), so it cannot be safely used for partitioning between 
> distributed systems connected via MM (for us, message order preservation 
> within a partition is a critical feature).
> It would be great if the code above is changed to utilize the configured 
> "partitioner.class". 
> Something along the lines of  :
> At the initialization:
>   mmpartitioner = 
> Utils.createObject[Partitioner](config.partitionerClass, config.props)  
> During the processing:
> val producerId = 
> mmpartitioner.partition(msgAndMetadata.key,producers.size())
> This way the messages consumed and produced by MM can remain in the same 
> order.
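One subtlety in the selection formula above is that Kafka's Utils.abs is not Math.abs: it masks the sign bit, so Integer.MIN_VALUE does not stay negative. A minimal sketch of the producer-selection idea (not MirrorMaker's actual code; the method names here are illustrative):

```java
import java.util.Arrays;

public class ProducerSelect {
    // Equivalent of Kafka's Utils.abs: masking the sign bit avoids the
    // Math.abs(Integer.MIN_VALUE) pitfall, which returns a negative value.
    static int positive(int n) {
        return n & 0x7fffffff;
    }

    static int producerFor(byte[] key, int numProducers) {
        // Arrays.hashCode on a byte[] is deterministic across JVMs; the
        // cross-machine non-determinism mentioned above bites when keys are
        // Objects whose hashCode falls back to identity.
        return positive(Arrays.hashCode(key)) % numProducers;
    }

    public static void main(String[] args) {
        byte[] key = "order-42".getBytes();
        System.out.println(producerFor(key, 4));
        // Math.abs fails on MIN_VALUE, the masking version does not:
        System.out.println(Math.abs(Integer.MIN_VALUE) < 0);  // true
        System.out.println(positive(Integer.MIN_VALUE) >= 0); // true
    }
}
```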





[jira] [Resolved] (KAFKA-1339) Time based offset retrieval seems broken

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1339.
--
Resolution: Fixed

Time-based offset retrieval was improved with the introduction of message 
timestamps. Please reopen if you think the issue still exists.


> Time based offset retrieval seems broken
> 
>
> Key: KAFKA-1339
> URL: https://issues.apache.org/jira/browse/KAFKA-1339
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
> Environment: Linux
>Reporter: Frank Varnavas
>Priority: Minor
>
> The kafka PartitionOffsetRequest takes a time parameter.  It seems broken to 
> me.
> There are two magic values
>   -2 returns the oldest  available offset
>   -1 returns the newest available offset
>   Otherwise the value is time since epoch in millisecs 
> (System.currentTimeMillis())
> The granularity is limited to the granularity of the log files
> These are the log segments for the partition I tested
>   Time now is about 17:07
>   Time shown is last modify time
>   File name has the starting offset number
>   You can see that the current one started about 13:40
> 1073742047 Mar 24 02:52 04740823.log
> 1073759588 Mar 24 11:25 04831581.log
> 1073782532 Mar 24 16:31 04916313.log
> 1073741985 Mar 25 09:11 05066939.log
> 1073743756 Mar 25 13:39 05158529.log
>  778424349 Mar 25 17:07 05214225.log
> The below shows the returned offset for an input time = (current time - 
> [0..23] hours)
> Even 1 second less than the current time returns the previous segment, even 
> though that segment ended 2.5 hours earlier.
> I think the result is off by 1 log segment. i.e. offset 1-3 should have been 
> from 5214225, 4-7 should have been from 5158529
> 0 -> 5214225
> 1 -> 5158529
> 2 -> 5158529
> 3 -> 5158529
> 4 -> 5066939
> 5 -> 5066939
> 6 -> 5066939
> 7 -> 5066939
> 8 -> 4973490
> 9 -> 4973490
> 10 -> 4973490
> 11 -> 4973490
> 12 -> 4973490
> 13 -> 4973490
> 14 -> 4973490
> 15 -> 4973490
> 16 -> 4916313
> 17 -> 4916313
> 18 -> 4916313
> 19 -> 4916313
> 20 -> 4916313
> 21 -> 4916313
> 22 -> 4916313
> 23 -> 4916313
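The pattern above is consistent with segment-granularity lookup: the pre-0.10 broker compares the requested timestamp against each segment file's last-modified time, so any timestamp earlier than the active segment's mtime resolves to an older segment's base offset. A simplified model of that behavior (an assumption-laden sketch, not the broker's actual code; times are illustrative):

```java
import java.util.List;

public class SegmentOffsetLookup {
    // (baseOffset, lastModifiedMs) pairs, oldest first.
    record Segment(long baseOffset, long lastModifiedMs) {}

    // Simplified pre-0.10 semantics: return the base offset of the newest
    // segment whose last-modified time is <= the target, else the oldest.
    static long offsetBefore(List<Segment> segs, long targetMs) {
        long result = segs.get(0).baseOffset();
        for (Segment s : segs) {
            if (s.lastModifiedMs() <= targetMs) result = s.baseOffset();
        }
        return result;
    }

    public static void main(String[] args) {
        List<Segment> segs = List.of(
            new Segment(5158529L, 1000L),  // segment closed at "13:39"
            new Segment(5214225L, 2000L)); // active segment, modified "now"
        // One second before "now" already falls back one whole segment,
        // matching the off-by-one-segment pattern the reporter measured.
        System.out.println(offsetBefore(segs, 1999L)); // 5158529
        System.out.println(offsetBefore(segs, 2000L)); // 5214225
    }
}
```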





[jira] [Resolved] (KAFKA-962) Add list topics to ClientUtils

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-962.
-
Resolution: Fixed

Topic management methods have been added to the new AdminClient.

> Add list topics to ClientUtils
> --
>
> Key: KAFKA-962
> URL: https://issues.apache.org/jira/browse/KAFKA-962
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
>
> Currently there is no programmatic way to get a list of topics directly from 
> Kafka (one can talk to ZooKeeper directly). There is a CLI tool for this, 
> ListTopicCommand, but it'd be good to provide this directly to clients as an 
> API.





[jira] [Resolved] (KAFKA-958) Please compile list of key metrics on the broker and client side and put it on a wiki

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-958.
-
Resolution: Fixed

Key metrics are listed in the monitoring section of the Kafka documentation page.

> Please compile list of key metrics on the broker and client side and put it 
> on a wiki
> -
>
> Key: KAFKA-958
> URL: https://issues.apache.org/jira/browse/KAFKA-958
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.0
>Reporter: Vadim
>Assignee: Joel Koshy
>Priority: Minor
>
> Please compile a list of important metrics that need to be monitored by 
> companies to ensure healthy operation of the Kafka service.





[jira] [Resolved] (KAFKA-3332) Consumer can't consume messages from zookeeper chroot

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3332.
--
Resolution: Cannot Reproduce

> Consumer can't consume messages from zookeeper chroot
> -
>
> Key: KAFKA-3332
> URL: https://issues.apache.org/jira/browse/KAFKA-3332
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1
> Environment: RHEL 6.X, OS X
>Reporter: Sergey Vergun
>Assignee: Neha Narkhede
>
> I have faced an issue where the consumer can't consume messages from a 
> ZooKeeper chroot. It was tested on Kafka 0.8.2.2 and Kafka 0.9.0.1.
> My zookeeper options into server.properties:
> $cat config/server.properties | grep zookeeper
> zookeeper.connect=localhost:2181/kafka-cluster/kafka-0.9.0.1
> zookeeper.session.timeout.ms=1
> zookeeper.connection.timeout.ms=1
> zookeeper.sync.time.ms=2000
> I can create successfully a new topic
> $kafka-topics.sh --create --partition 3 --replication-factor 1 --topic 
> __TEST-Topic_1 --zookeeper localhost:2181/kafka-cluster/kafka-0.9.0.1
> Created topic "__TEST-Topic_1".
> and produce messages into it
> $kafka-console-producer.sh --topic __TEST-Topic_1 --broker-list localhost:9092
> 1
> 2
> 3
> 4
> 5
> In Kafka Manager I see that the messages were delivered:
> Sum of partition offsets  5
> But I can't consume the messages via kafka-console-consumer
> $kafka-console-consumer.sh --topic TEST-Topic_1 --zookeeper 
> localhost:2181/kafka-cluster/kafka-0.9.0.1 --from-beginning
> The consumer is present in zookeeper
> [zk: localhost:2181(CONNECTED) 10] ls /kafka-cluster/kafka-0.9.0.1/consumers
> [console-consumer-62895] 
> [zk: localhost:2181(CONNECTED) 12] ls 
> /kafka-cluster/kafka-0.9.0.1/consumers/console-consumer-62895/ids
> [console-consumer-62895_SV-Macbook-1457097451996-64640cc1] 
> If I reconfigure kafka cluster with zookeeper chroot "/" then everything is 
> ok.
> $cat config/server.properties | grep zookeeper
> zookeeper.connect=localhost:2181
> zookeeper.session.timeout.ms=1
> zookeeper.connection.timeout.ms=1
> zookeeper.sync.time.ms=2000
> $kafka-console-producer.sh --topic __TEST-Topic_1 --broker-list localhost:9092
> 1
> 2
> 3
> 4
> 5
> $kafka-console-consumer.sh --topic TEST-Topic_1 --zookeeper localhost:2181 
> --from-beginning
> 1
> 2
> 3
> 4
> 5
> Is it a bug or my mistake?





[jira] [Resolved] (KAFKA-627) Make UnknownTopicOrPartitionException a WARN in broker

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-627.
-
Resolution: Fixed

Not observed in the latest versions.

> Make UnknownTopicOrPartitionException a WARN in broker
> --
>
> Key: KAFKA-627
> URL: https://issues.apache.org/jira/browse/KAFKA-627
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
> Environment: Kafka 0.8, RHEL6, Java 1.6
>Reporter: Chris Riccomini
>
> Currently, when sending messages to a topic that doesn't yet exist, the 
> broker spews out these "errors" as it tries to auto-create new topics. I 
> spoke with Neha, and she said that this should be a warning, not an error.
> Could you please change it to something less scary, if, in fact, it's not 
> scary.
> 2012/11/14 22:38:53.238 INFO [LogManager] [kafka-request-handler-6] [kafka] 
> []  [Log Manager on Broker 464] Created log for 'firehoseReads'-5
> 2012/11/14 22:38:53.241 WARN [HighwaterMarkCheckpoint] 
> [kafka-request-handler-6] [kafka] []  No previously checkpointed 
> highwatermark value found for topic firehoseReads partition 5. Returning 0 as 
> the highwatermark
> 2012/11/14 22:38:53.242 INFO [Log] [kafka-request-handler-6] [kafka] []  
> [Kafka Log on Broker 464], Truncated log segment 
> /export/content/kafka/i001_caches/firehoseReads-5/.log to 
> target offset 0
> 2012/11/14 22:38:53.242 INFO [ReplicaFetcherManager] 
> [kafka-request-handler-6] [kafka] []  [ReplicaFetcherManager on broker 464] 
> adding fetcher on topic firehoseReads, partion 5, initOffset 0 to broker 466 
> with fetcherId 0
> 2012/11/14 22:38:53.248 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-466-0-on-broker-464] [kafka] []  
> [ReplicaFetcherThread-466-0-on-broker-464], error for firehoseReads 5 to 
> broker 466
> kafka.common.UnknownTopicOrPartitionException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at java.lang.Class.newInstance0(Class.java:355)
> at java.lang.Class.newInstance(Class.java:308)
> at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:68)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5$$anonfun$apply$3.apply(AbstractFetcherThread.scala:124)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5$$anonfun$apply$3.apply(AbstractFetcherThread.scala:124)
> at kafka.utils.Logging$class.error(Logging.scala:102)
> at kafka.utils.ShutdownableThread.error(ShutdownableThread.scala:23)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5.apply(AbstractFetcherThread.scala:123)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5.apply(AbstractFetcherThread.scala:99)
> at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:125)
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:99)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:50)





[jira] [Resolved] (KAFKA-2966) 0.9.0 docs missing upgrade notes regarding replica lag

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2966.
--
Resolution: Fixed

> 0.9.0 docs missing upgrade notes regarding replica lag
> --
>
> Key: KAFKA-2966
> URL: https://issues.apache.org/jira/browse/KAFKA-2966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> We should document that:
> * replica.lag.max.messages is gone
> * replica.lag.time.max.ms has a new meaning
> In the upgrade section. People can get caught by surprise.





[jira] [Resolved] (KAFKA-2829) Inconsistent naming in {Producer,Consumer} Callback interfaces

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2829.
--
Resolution: Won't Fix

These are public interfaces heavily used by users. It's not appropriate to 
change them now. Please reopen if you think otherwise.


> Inconsistent naming in {Producer,Consumer} Callback interfaces
> --
>
> Key: KAFKA-2829
> URL: https://issues.apache.org/jira/browse/KAFKA-2829
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Mathias Söderberg
>Assignee: Neha Narkhede
>Priority: Minor
>
> The Callback interface for the "new" producer has a method called 
> "onCompletion" while the OffsetCommitCallback for the new consumer has a 
> method called "onComplete".
> Perhaps they should be using the same naming convention to avoid confusion?





[jira] [Resolved] (KAFKA-2565) Offset Commit is not working if multiple consumers try to commit the offset

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2565.
--
Resolution: Cannot Reproduce

This may be related to a deployment issue. Please reopen if you think the issue 
still exists.


> Offset Commit is not working if multiple consumers try to commit the offset
> ---
>
> Key: KAFKA-2565
> URL: https://issues.apache.org/jira/browse/KAFKA-2565
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1, 0.8.2.1, 0.8.2.2
>Reporter: Sreenivasulu Nallapati
>Assignee: Neha Narkhede
>
> We are seeing some strange behaviour with the commitOffsets() method of 
> kafka.javaapi.consumer.ConsumerConnector. We are committing the offsets to 
> ZooKeeper at the end of the consumer batch. We are running multiple consumers 
> for the same topic.
> Test details: 
> 1.Created a topic with three partitions
> 2.Started three consumers (cronjob) at the same time. The aim is for 
> each consumer to process one partition.
> 3.Each consumer at the end of the batch, it will call the commitOffsets() 
> method on kafka.javaapi.consumer.ConsumerConnector
> 4.The offsets are getting properly updated in zookeeper if we run the 
> consumers for small set (say 1000 messages) of messages.
> 5.But for a larger number of messages, the offset commit is not working as 
> expected…sometimes only two offsets are properly committed and the other one 
> remains as it was.
> 6.Please see the below example
> Partition: 0 Latest Offset: 1057585
> Partition: 1 Latest Offset: 1057715
> Partition: 2 Latest Offset: 1057590
> Earliest Offset after all consumers completed: {0=1057585, 1=724375, 
> 2=1057590}
> Partition 1, highlighted in red, was supposed to be committed as 1057715 but 
> was not.
> Please check if this is a bug with multiple consumers. When multiple consumers 
> are trying to update the same path in ZooKeeper, is there any synchronization 
> issue?
> Kafka Cluster details
> 1 zookeeper
> 3 brokers





[jira] [Resolved] (KAFKA-2560) Fatal error during KafkaServer startup because of Map failed error.

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2560.
--
Resolution: Fixed

This is due to a java.lang.OutOfMemoryError. Please reopen if you think the 
issue still exists.


> Fatal error during KafkaServer startup because of Map failed error.
> ---
>
> Key: KAFKA-2560
> URL: https://issues.apache.org/jira/browse/KAFKA-2560
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1
> Environment: Linux 
>Reporter: Bo Wang
>Assignee: Jay Kreps
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I have 3 Kafka nodes. I created 30 topics, every topic has 100 partitions, 
> and the replication factor is 2.
> The Kafka server failed to start:
> 2015-09-21 10:28:35,668 | INFO  | pool-2-thread-1 | Recovering unflushed 
> segment 0 in log testTopic_14-34. | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,942 | ERROR | main | There was an error in one of the 
> threads during logs loading: java.io.IOException: Map failed | 
> kafka.utils.Logging$class.error(Logging.scala:97)
> 2015-09-21 10:28:35,943 | INFO  | pool-2-thread-5 | Recovering unflushed 
> segment 0 in log testTopic_17-23. | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,944 | INFO  | pool-2-thread-5 | Completed load of log 
> testTopic_17-23 with log end offset 0 | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,945 | FATAL | main | [Kafka Server 54], Fatal error 
> during KafkaServer startup. Prepare to shutdown | 
> kafka.utils.Logging$class.fatal(Logging.scala:116)
> java.io.IOException: Map failed
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:286)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at kafka.log.Log.loadSegments(Log.scala:179)
> at kafka.log.Log.(Log.scala:67)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$7$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:142)
> at kafka.utils.Utils$$anon$1.run(Utils.scala:54)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> ... 13 more
> 2015-09-21 10:28:35,946 | INFO  | pool-2-thread-5 | Recovering unflushed 
> segment 0 in log testTopic_25-77. | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,949 | INFO  | main | [Kafka Server 54], shutting down | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> Kafka server host's top infomation below:
> top - 17:16:23 up 53 min,  6 users,  load average: 0.42, 0.99, 1.19
> Tasks: 215 total,   1 running, 214 sleeping,   0 stopped,   0 zombie
> Cpu(s):  4.5%us,  2.4%sy,  0.0%ni, 92.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  40169M total,  6118M used, 34050M free,    9M buffers
> Swap:     0M total,     0M used,     0M free,  431M cached
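"java.io.IOException: Map failed" caused by an OutOfMemoryError typically means the JVM ran out of address space or the OS ran out of memory mappings while mmapping segment index files, not that the heap was exhausted. The partition counts in this report make the scale visible; a back-of-the-envelope sketch assuming an even spread and one segment (hence at least one mapped index) per partition replica:

```java
public class MmapEstimate {
    // Rough count of partition replicas (and thus minimum index mappings)
    // hosted per broker, assuming an even spread across the cluster.
    static int partitionReplicasPerBroker(int topics, int partitionsPerTopic,
                                          int replicationFactor, int brokers) {
        return topics * partitionsPerTopic * replicationFactor / brokers;
    }

    public static void main(String[] args) {
        // Numbers from the report above: 30 topics x 100 partitions,
        // replication factor 2, spread over 3 brokers.
        System.out.println(partitionReplicasPerBroker(30, 100, 2, 3)); // 2000
    }
}
```

With thousands of mapped index files per broker, startup recovery can hit the process's mapping limits, which matches the FileChannelImpl.map frames in the stack trace.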





[jira] [Resolved] (KAFKA-2577) one node of Kafka cluster cannot process produce request

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2577.
--
Resolution: Cannot Reproduce

This might have been fixed in the latest versions. Please reopen if you think 
the issue still exists.


> one node of Kafka cluster cannot process produce request
> 
>
> Key: KAFKA-2577
> URL: https://issues.apache.org/jira/browse/KAFKA-2577
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , replication
>Affects Versions: 0.8.1.1
>Reporter: Ray
>Assignee: Jun Rao
>
> We had 3 nodes in the Kafka cluster; suddenly one node could not accept 
> produce requests. Here is the log:
> [2015-09-21 04:56:32,413] WARN [KafkaApi-0] Produce request with correlation 
> id 9178992 from client  on partition [topic_name,3] failed due to Leader not 
> local for partition [topic_name,3] on broker 0 (kafka.server.KafkaApis)
> after restarting that node, it still cannot work and I saw different log:
> [2015-09-21 20:38:16,791] WARN [KafkaApi-0] Produce request with correlation 
> id 9661337 from client  on partition [topic_name,3] failed due to Topic 
> topic_name either doesn't exist or is in the process of being deleted 
> (kafka.server.KafkaApis)
> it got fixed after rolling all the kafka nodes





[jira] [Resolved] (KAFKA-2451) Exception logged but not managed

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2451.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> Exception logged but not managed
> 
>
> Key: KAFKA-2451
> URL: https://issues.apache.org/jira/browse/KAFKA-2451
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1
> Environment: Windows + Java
>Reporter: Gwenhaël PASQUIERS
>Assignee: Jun Rao
>
> We've been having issues with java-snappy and it's native dll.
> To make it short: we get exceptions when serializing the message.
> We are using the Kafka producer in Camel.
> The problem is that Kafka thinks that the message was correctly sent, and 
> returns no error: Camel consumes the files even though Kafka could not send 
> the messages.
> Where the issue lies (if i'm correct):
> In DefaultEventHandler line 115 with tag 0.8.1, the exception that is thrown 
> by groupMessageToSet() is caught and logged. The return value of the 
> function dispatchSerializedData() is used to determine if the send was 
> successful (if (outstandingProduceRequest.size > 0) { ...}).
> BUT in this case I suspect that not even one message could be 
> serialized and added to "failedProduceRequests". So the code that called 
> "dispatchSerializedData" thinks everything is OK though it's not.
> The producer could behave better and propagate the error properly. Since, it 
> could lead to pure data loss.





[jira] [Resolved] (KAFKA-2414) Running kafka-producer-perf-test.sh with " --messages 10000000 --message-size 1000 --new-producer" will get WARN Error in I/O.

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2414.
--
Resolution: Cannot Reproduce

This might have been fixed in the latest versions. Please reopen if you think 
the issue still exists.


> Running kafka-producer-perf-test.sh with "--messages 10000000 
> --message-size 1000 --new-producer" will get WARN Error in I/O.
> 
>
> Key: KAFKA-2414
> URL: https://issues.apache.org/jira/browse/KAFKA-2414
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>  Labels: performance
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Running kafka-producer-perf-test.sh with "--messages 10000000 
> --message-size 1000 --new-producer" will get WARN Error in I/O:
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)





[jira] [Resolved] (KAFKA-3885) Kafka new producer cannot failover

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3885.
--
Resolution: Duplicate

> Kafka new producer cannot failover
> --
>
> Key: KAFKA-3885
> URL: https://issues.apache.org/jira/browse/KAFKA-3885
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.2, 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: wateray
>
> This bug can be reproduced by the following steps.
> The cluster has 2 brokers.
>  a) start a new producer, then send messages, it works well.
>  b) Then kill one broker,  it works well.
>  c) Then restart the broker,  it works well.
>  d) Then kill the other broker,  the producer can't failover.
> Then the producer prints this log indefinitely:
> org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) 
> expired due to timeout while requesting metadata from brokers for 
> lwb_test_p50_r2-29
> 
> When the producer sends a message, it detects that the metadata should be 
> updated. But in this code (class NetworkClient, method leastLoadedNode):
> List<Node> nodes = this.metadataUpdater.fetchNodes();
> nodes returns only one result, and the returned node is the killed node, so 
> the producer cannot fail over!
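leastLoadedNode picks a candidate from the current metadata's node list, roughly the connectable node with the fewest in-flight requests; if the metadata only contains the dead broker, that is the only candidate. A toy model of the selection under that assumption (not the real NetworkClient logic, which also checks connection state and reconnect backoff):

```java
import java.util.List;
import java.util.Map;

public class LeastLoadedNode {
    // Toy model: pick the node with the fewest in-flight requests.
    static String leastLoaded(List<String> nodes, Map<String, Integer> inFlight) {
        String best = null;
        int bestLoad = Integer.MAX_VALUE;
        for (String n : nodes) {
            int load = inFlight.getOrDefault(n, 0);
            if (load < bestLoad) {
                best = n;
                bestLoad = load;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // The failure mode above: metadata lists only the killed broker,
        // so the "least loaded" choice is still the unreachable node.
        System.out.println(leastLoaded(List.of("broker-2"), Map.of("broker-2", 5)));
        // prints broker-2
    }
}
```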





[jira] [Resolved] (KAFKA-4109) kafka client send msg exception

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4109.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> kafka client send msg exception
> ---
>
> Key: KAFKA-4109
> URL: https://issues.apache.org/jira/browse/KAFKA-4109
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.0.0
> Environment: java8
> kafka cluster
>Reporter: frank
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Batch Expired 
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:56)
>  
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:43)
>  
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
>  
>   at 
> com.longsheng.basicCollect.kafka.KafkaProducer.publishMessage(KafkaProducer.java:44)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.ParentBasic.publish(ParentBasic.java:60)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.ParentBasic.parseIncJson(ParentBasic.java:119)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.coke.price.SecondPrice.collectData(SecondPrice.java:41)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.coke.price.SecondPrice.process(SecondPrice.java:49)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.DzwProcessor.process(DzwProcessor.java:33)
>  
>   at 
> com.longsheng.basicCollect.timer.CollectDzwBigData.execute(CollectDzwBigData.java:14)
>  
>   at org.quartz.core.JobRunShell.run(JobRunShell.java:202) 
>   at 
> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) 
> Caused by: org.apache.kafka.common.errors.TimeoutException: Batch Expired
> The exception occurs intermittently!
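As a hedged side note (not part of the original report, and values here are examples only): on 0.10.0 clients, batches waiting in the accumulator expire after `request.timeout.ms` when the leader or metadata is unavailable, so intermittent "Batch Expired" errors are often tuned with settings like these:

```java
import java.util.Properties;

// Illustrative only: producer settings that commonly influence
// "Batch Expired" timeouts on 0.10.0-era clients.
public class ProducerTimeoutConfigSketch {

    static Properties producerProps(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        // Batches sitting in the accumulator expire after request.timeout.ms
        // when the leader is unavailable or metadata is stale.
        props.put("request.timeout.ms", "60000");
        // Allow retries so transient broker/metadata hiccups do not surface
        // immediately as an ExecutionException at future.get().
        props.put("retries", "3");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps("broker-1:9092"));
    }
}
```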





[jira] [Resolved] (KAFKA-4350) Can't mirror from Kafka 0.9 to Kafka 0.10.1

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4350.
--
Resolution: Won't Fix

Closing as per comments.

> Can't mirror from Kafka 0.9 to Kafka 0.10.1
> ---
>
> Key: KAFKA-4350
> URL: https://issues.apache.org/jira/browse/KAFKA-4350
> Project: Kafka
>  Issue Type: Bug
>Reporter: Emanuele Cesena
>
> I'm running 2 clusters: K9 with Kafka 0.9 and K10 with Kafka 0.10.1.
> In K10, I've set up mirror maker to clone a topic from K9 to K10.
> Mirror maker fails immediately on startup, any suggestions? The error 
> message and configs follow.
> Error message:
> {code:java} 
> [2016-10-26 23:54:01,663] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'cluster_id': Error reading string of length 418, only 43 bytes available
> at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
> at 
> org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:248)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.receive(MirrorMaker.scala:582)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:431)
> [2016-10-26 23:54:01,679] FATAL [mirrormaker-thread-0] Mirror maker thread 
> exited abnormally, stopping the whole mirror maker. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> {code} 
> Consumer:
> {code}
> group.id=mirrormaker001
> client.id=mirrormaker001
> bootstrap.servers=...K9...
> security.protocol=PLAINTEXT
> auto.offset.reset=earliest
> {code} 
> (note that I first ran without client.id, then tried adding a client.id -- 
> same error in both cases)
> Producer:
> {code}
> bootstrap.servers=...K10...
> security.protocol=PLAINTEXT
> {code}





[jira] [Resolved] (KAFKA-4869) 0.10.2.0 release notes incorrectly include KIP-115

2017-08-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4869.
--
Resolution: Fixed
  Assignee: Manikumar

> 0.10.2.0 release notes incorrectly include KIP-115
> --
>
> Key: KAFKA-4869
> URL: https://issues.apache.org/jira/browse/KAFKA-4869
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.2.0
>Reporter: Yeva Byzek
>Assignee: Manikumar
>Priority: Minor
>
> From http://kafka.apache.org/documentation.html :
> bq. The offsets.topic.replication.factor broker config is now enforced upon 
> auto topic creation. Internal auto topic creation will fail with a 
> GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this 
> replication factor requirement.
> Even though this feature 
> [KIP-115|https://cwiki.apache.org/confluence/display/KAFKA/KIP-115%3A+Enforce+offsets.topic.replication.factor+upon+__consumer_offsets+auto+topic+creation]
>  did not make it into 0.10.2.0





[jira] [Commented] (KAFKA-5788) "IllegalArgumentException: long is not a value type" when running ReassignPartitionsCommand

2017-08-25 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141959#comment-16141959
 ] 

Manikumar commented on KAFKA-5788:
--

This may be related to a joptsimple library version mismatch. Can you try with 
the jopt-simple library corresponding to the Kafka release?
Kafka 0.10.2.0 uses jopt-simple-4.9.jar and Kafka 0.11.0.0 uses 
jopt-simple-5.0.4.jar.
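As a quick way to compare the jar on the classpath against what the release ships, a small hypothetical helper (not part of Kafka) can pull the version out of a jar file name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: extract the version from a jar file name so it can
// be compared with the version the Kafka release ships.
public class JarVersionSketch {

    private static final Pattern VERSION = Pattern.compile("-(\\d[\\d.]*)\\.jar$");

    // Returns "4.9" for "jopt-simple-4.9.jar", or null if no version is found.
    static String jarVersion(String jarFileName) {
        Matcher m = VERSION.matcher(jarFileName);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Kafka 0.10.2.0 ships jopt-simple-4.9.jar; 0.11.0.0 ships 5.0.4.
        System.out.println(jarVersion("jopt-simple-4.9.jar"));   // 4.9
        System.out.println(jarVersion("jopt-simple-5.0.4.jar")); // 5.0.4
    }
}
```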

> "IllegalArgumentException: long is not a value type" when running 
> ReassignPartitionsCommand
> ---
>
> Key: KAFKA-5788
> URL: https://issues.apache.org/jira/browse/KAFKA-5788
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
> Environment: Windows 
>Reporter: Ansel Zandegran
>
> *When trying to run ReassignPartitionsCommand with the following statements:*
> {code:java}
> String[] reassignCmdArgs = { "--reassignment-json-file=" + Paths.get(reassignmentConfigFileName),
>     "--zookeeper=" + client.getZookeeperClient().getCurrentConnectionString(),
>     "--execute", "--throttle=" + 1000 };
> logger.debug("Calling ReassignPartitionsCommand with args:{}", Arrays.toString(reassignCmdArgs));
> ReassignPartitionsCommand.main(reassignCmdArgs);
> {code}
> *I get the following error*
> 2017-08-22 15:57:28 DEBUG ZookeeperBackedAdoptionLogicImpl:320 - Calling 
> ReassignPartitionsCommand with 
> args:[--reassignment-json-file=partitions-to-move.json.1503417447767, 
> --zookeeper=172.31.14.207:2181, --execute]
> java.lang.IllegalArgumentException: long is not a value type
> at joptsimple.internal.Reflection.findConverter(Reflection.java:66)
> at 
> joptsimple.ArgumentAcceptingOptionSpec.ofType(ArgumentAcceptingOptionSpec.java:111)
> at 
> kafka.admin.ReassignPartitionsCommand$ReassignPartitionsCommandOptions.(ReassignPartitionsCommand.scala:301)
> at 
> kafka.admin.ReassignPartitionsCommand$.validateAndParseArgs(ReassignPartitionsCommand.scala:236)
> at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:34)
> at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> at 
> rebalancer.core.ZookeeperBackedAdoptionLogicImpl.reassignPartitionToLocalBroker(ZookeeperBackedAdoptionLogicImpl.java:321)
> at 
> rebalancer.core.ZookeeperBackedAdoptionLogicImpl.adoptRemotePartition(ZookeeperBackedAdoptionLogicImpl.java:267)
> at 
> rebalancer.core.ZookeeperBackedAdoptionLogicImpl.run(ZookeeperBackedAdoptionLogicImpl.java:118)





<    2   3   4   5   6   7   8   9   10   11   >