[jira] [Commented] (KAFKA-5752) Delete topic and re-create topic immediate will delete the new topic's timeindex

2017-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134236#comment-16134236
 ] 

ASF GitHub Bot commented on KAFKA-5752:
---

GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/3700

KAFKA-5752: Update timeIndex, txnIndex file pointers to renamed (to be 
deleted) files



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-5752

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3700.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3700


commit 0edf404b67237fc5e3b43bea1c95ab83165498a9
Author: Manikumar Reddy 
Date:   2017-08-19T19:12:40Z

KAFKA-5752: Update timeIndex, txnIndex file pointers to renamed files




> Delete topic and re-create topic immediate will delete the new topic's 
> timeindex 
> -
>
> Key: KAFKA-5752
> URL: https://issues.apache.org/jira/browse/KAFKA-5752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Pengwei
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.1, 1.0.0
>
>
> When we delete a topic and immediately re-create a topic with the same name, we 
> find that once the async topic deletion finishes, it removes the newly created 
> topic's time index.
> This is because LogManager's asyncDelete changes the log and index file pointers 
> to the renamed (to-be-deleted) files but misses the time index, which causes 
> this issue.
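A minimal Scala sketch of the pointer update described above, using simplified stand-in 
classes rather than the real kafka.log API (class and member names below are illustrative 
assumptions, not Kafka's actual code):

{noformat}
import java.io.File

// Simplified stand-ins for Kafka's segment and index objects.
class SketchIndex(var file: File)
class SketchSegment(var log: File, val index: SketchIndex, val timeIndex: SketchIndex)

object AsyncDeleteSketch {
  val DeletedSuffix = ".deleted"

  // Rename a segment's backing files for deferred deletion and repoint the
  // in-memory objects at the renamed files. If the time-index pointer is not
  // updated (the bug), it keeps referring to the original path, so the later
  // background delete removes the file that now belongs to the re-created topic.
  def markForDeletion(segment: SketchSegment): Unit = {
    def renamed(f: File): File = {
      val target = new File(f.getPath + DeletedSuffix)
      f.renameTo(target)
      target
    }
    segment.log = renamed(segment.log)
    segment.index.file = renamed(segment.index.file)
    segment.timeIndex.file = renamed(segment.timeIndex.file) // the update the patch adds
  }
}
{noformat}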



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2289) KafkaProducer logs erroneous warning on startup

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2289.
--
Resolution: Fixed

This has been fixed.

> KafkaProducer logs erroneous warning on startup
> ---
>
> Key: KAFKA-2289
> URL: https://issues.apache.org/jira/browse/KAFKA-2289
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Henning Schmiedehausen
>Priority: Trivial
>
> When creating a new KafkaProducer using the 
> KafkaProducer(KafkaConfig, Serializer, Serializer) constructor, Kafka 
> logs the following lines, which are harmless but are still at WARN level:
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> value.serializer = class  was supplied but isn't a known config.
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> key.serializer = class  was supplied but isn't a known config.
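For context, a hedged Scala example of the constructor style the report refers to (the broker 
address is a placeholder): when serializer instances are passed explicitly, any 
key.serializer/value.serializer entries present in the supplied config end up unused, which is 
what produces the harmless "isn't a known config" warnings.

{noformat}
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.serialization.StringSerializer

object ProducerWarningExample {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder broker

    // Explicit serializer instances take precedence over any serializer keys in
    // props, so those config entries are reported as unused at WARN level.
    val producer = new KafkaProducer[String, String](
      props, new StringSerializer, new StringSerializer)
    producer.close()
  }
}
{noformat}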



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2231) Deleting a topic fails

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2231.
--
Resolution: Cannot Reproduce

Topic deletion is more stable in the latest releases. Please reopen if you think the 
issue still exists.

> Deleting a topic fails
> --
>
> Key: KAFKA-2231
> URL: https://issues.apache.org/jira/browse/KAFKA-2231
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: Windows 8.1
>Reporter: James G. Haberly
>Priority: Minor
>
> delete.topic.enable=true is in config\server.properties.
> Using --list shows the topic "marked for deletion".
> Stopping and restarting Kafka and ZooKeeper does not delete the topic; it 
> remains "marked for deletion".
> Trying to recreate the topic fails with "Topic XXX already exists".
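For reference, a hedged sketch of requesting topic deletion programmatically against a 
0.8.2-era broker; the API shown (kafka.admin.AdminUtils with a ZkClient built around 
kafka.utils.ZKStringSerializer) is an assumption about that version, and the ZooKeeper address 
and topic name are placeholders. "Marked for deletion" means the request znode exists under 
/admin/delete_topics but the controller has not yet completed (or cannot complete) the deletion.

{noformat}
import kafka.admin.AdminUtils
import kafka.utils.ZKStringSerializer
import org.I0Itec.zkclient.ZkClient

object DeleteTopicSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder connection settings; point this at the local ZooKeeper.
    val zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer)
    try {
      // Writes the deletion request under /admin/delete_topics; the controller
      // only acts on it when delete.topic.enable=true on the brokers.
      AdminUtils.deleteTopic(zkClient, "XXX")
    } finally {
      zkClient.close()
    }
  }
}
{noformat}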



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2296) Not able to delete topic on latest kafka

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2296.
--
Resolution: Duplicate

> Not able to delete topic on latest kafka
> 
>
> Key: KAFKA-2296
> URL: https://issues.apache.org/jira/browse/KAFKA-2296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Andrew M
>
> I was able to reproduce the [inability to delete 
> topic|https://issues.apache.org/jira/browse/KAFKA-1397?focusedCommentId=14491442&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14491442]
>  on a running cluster with Kafka 0.8.2.1.
> The cluster consists of 2 c3.xlarge AWS instances with sufficient storage 
> attached. All communication between nodes goes through an AWS VPC.
> Some warnings from the logs:
> {noformat}[Controller-1234-to-broker-4321-send-thread], Controller 1234 epoch 
> 20 fails to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:1234;ControllerEpoch:20;CorrelationId:24047;ClientId:id_1234-host_1.2.3.4-port_6667;AliveBrokers:id:1234,host:1.2.3.4,port:6667,id:4321,host:4.3.2.1,port:6667;PartitionState:[topic_name,45]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,27]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,17]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,49]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,7]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,26]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,62]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,18]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,36]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,29]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,53]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,52]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,2]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,12]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,33]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,14]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,63]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,30]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,6]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,28]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,38]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,24]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,31]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,4]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,20]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,54]
>  -> 
> 

[jira] [Resolved] (KAFKA-2093) Remove logging error if we throw exception

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2093.
--
Resolution: Won't Fix

The Scala producer is deprecated. Please reopen if you think the issue still exists.


> Remove logging error if we throw exception
> --
>
> Key: KAFKA-2093
> URL: https://issues.apache.org/jira/browse/KAFKA-2093
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ivan Balashov
>Priority: Trivial
>
> On failure, the Kafka producer logs an error AND throws an exception. This can pose 
> problems, since the client application cannot flexibly control whether a particular 
> exception should be logged, and logging becomes an all-or-nothing choice for a 
> particular logger.
> We should remove the error logging wherever we decide to throw an exception.
> Some examples of this:
> kafka.client.ClientUtils$:89
> kafka.producer.SyncProducer:103
> If no one objects, I can search around for other cases of logging plus 
> throwing which should also be fixed.
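A self-contained Scala illustration of the pattern in question (generic names, not the Kafka 
classes cited above): when a library method both logs at error level and rethrows, the caller 
sees the failure twice and can only suppress the log entry by silencing the whole logger.

{noformat}
import org.slf4j.LoggerFactory

object LogAndThrowSketch {
  private val log = LoggerFactory.getLogger(getClass)

  // Anti-pattern described in the issue: log AND throw.
  def sendLogAndThrow(payload: String): Unit =
    try doSend(payload)
    catch {
      case e: Exception =>
        log.error(s"Failed to send $payload", e) // duplicate signal the caller cannot opt out of
        throw e
    }

  // Preferred: let the exception propagate and leave logging to the caller,
  // which knows whether this failure is expected and how noisy to be.
  def sendThrowOnly(payload: String): Unit = doSend(payload)

  private def doSend(payload: String): Unit =
    throw new RuntimeException("send failed") // stand-in for real network I/O
}
{noformat}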



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2053) Make initZk a protected function

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2053.
--
Resolution: Won't Fix

Please reopen if you think the requirement still exists.

> Make initZk a protected function
> 
>
> Key: KAFKA-2053
> URL: https://issues.apache.org/jira/browse/KAFKA-2053
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Christian Kampka
>Priority: Minor
> Attachments: make-initzk-protected
>
>
> In our environment, we have established an external procedure to notify 
> clients of changes in the ZooKeeper cluster configuration, especially the 
> appearance and disappearance of nodes. It has also become quite common to run 
> Kafka as an embedded service (especially in tests).
> When doing so, it would make things easier if it were possible to manipulate 
> the creation of the ZooKeeper client to supply Kafka with a specialized 
> ZooKeeper client that is adjusted to our needs but, of course, API compatible 
> with ZkClient.
> Therefore, I would like to propose making the initZk method protected so we 
> will be able to simply override it for client creation.
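A self-contained Scala sketch of the structure the proposal would enable; the classes below are 
simplified stand-ins, not the real kafka.server.KafkaServer or ZkClient APIs, and the embedded 
suffix is purely illustrative.

{noformat}
// Stand-in for the ZooKeeper client type KafkaServer creates in initZk.
class SketchZkClient(val connectString: String)

class SketchKafkaServer(val zkConnect: String) {
  // The proposal: make this hook protected (rather than effectively private)
  // so embedders can substitute an API-compatible ZooKeeper client.
  protected def initZk(): SketchZkClient = new SketchZkClient(zkConnect)

  def startup(): Unit = {
    val zk = initZk()
    println(s"connected to ${zk.connectString}")
  }
}

// Embedded-test variant supplying a specialized client, e.g. one wired to an
// external cluster-membership notification mechanism.
class EmbeddedSketchServer(connect: String) extends SketchKafkaServer(connect) {
  override protected def initZk(): SketchZkClient =
    new SketchZkClient(connect + "/embedded-test")
}

object InitZkSketch {
  def main(args: Array[String]): Unit =
    new EmbeddedSketchServer("localhost:2181").startup()
}
{noformat}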



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1495) Kafka Example SimpleConsumerDemo

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1495.
--
Resolution: Won't Fix

> Kafka Example SimpleConsumerDemo 
> -
>
> Key: KAFKA-1495
> URL: https://issues.apache.org/jira/browse/KAFKA-1495
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
> Environment: Mac OS
>Reporter: darion yaphet
>Assignee: Jun Rao
>
> I ran the official SimpleConsumerDemo under 
> kafka-0.8.1.1-src/examples/src/main/java/kafka/examples on my 
> machine. I found that /tmp/kafka-logs has two directories, topic2-0 and 
> topic2-1, and 
> one is empty:
> ➜  kafka-logs  ls -lF  topic2-0  topic2-1
> topic2-0:
> total 21752
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel651109  6 17 18:44 .log
> topic2-1:
> total 20480
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel 0  6 17 17:34 .log 
> Is this a bug, or is there something that should be configured in the source code?
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3951) kafka.common.KafkaStorageException: I/O exception in append to log

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3951.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> kafka.common.KafkaStorageException: I/O exception in append to log
> --
>
> Key: KAFKA-3951
> URL: https://issues.apache.org/jira/browse/KAFKA-3951
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.1
>Reporter: wanzi.zhao
> Attachments: server-1.properties, server.properties
>
>
> I have two brokers on the same server using two ports, 10.45.33.195:9092 and 
> 10.45.33.195:9093. They use two log directories, "log.dirs=/tmp/kafka-logs" and 
> "log.dirs=/tmp/kafka-logs-1". When I shut down my consumer application (Java 
> API), then change the groupId and restart it, my Kafka brokers stop 
> working. This is the stack trace I get:
> [2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading 
> offsets and group metadata from [__consumer_offsets,0] 
> (kafka.coordinator.GroupMetadataManager)
> [2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to 
> unrecoverable I/O error while handling produce request:  
> (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-38'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
> at 
> kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs/__consumer_offsets-38/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3953) start kafka fail

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3953.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> start kafka fail
> 
>
> Key: KAFKA-3953
> URL: https://issues.apache.org/jira/browse/KAFKA-3953
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.2
> Environment: Linux host-172-28-0-3 3.10.0-327.18.2.el7.x86_64 #1 SMP 
> Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: ffh
>
> Kafka fails to start. Error message:
> [2016-07-12 03:57:32,717] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> [2016-07-12 03:57:33,124] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> config:
> # Generated by Apache Ambari. Tue Jul 12 03:18:02 2016
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> auto.create.topics.enable=true
> auto.leader.rebalance.enable=true
> broker.id=0
> compression.type=producer
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=3
> controlled.shutdown.retry.backoff.ms=5000
> controller.message.queue.size=10
> controller.socket.timeout.ms=3
> default.replication.factor=1
> delete.topic.enable=false
> external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
> external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
> fetch.purgatory.purge.interval.requests=1
> kafka.ganglia.metrics.group=kafka
> kafka.ganglia.metrics.host=localhost
> kafka.ganglia.metrics.port=8671
> kafka.ganglia.metrics.reporter.enabled=true
> kafka.metrics.reporters=
> kafka.timeline.metrics.host=
> kafka.timeline.metrics.maxRowCacheSize=1
> kafka.timeline.metrics.port=
> kafka.timeline.metrics.reporter.enabled=true
> kafka.timeline.metrics.reporter.sendInterval=5900
> leader.imbalance.check.interval.seconds=300
> leader.imbalance.per.broker.percentage=10
> listeners=PLAINTEXT://host-172-28-0-3:6667
> log.cleanup.interval.mins=10
> log.dirs=/kafka-logs
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.bytes=-1
> log.retention.hours=168
> log.roll.hours=168
> log.segment.bytes=1073741824
> message.max.bytes=100
> min.insync.replicas=1
> num.io.threads=8
> num.network.threads=3
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> num.replica.fetchers=1
> offset.metadata.max.bytes=4096
> offsets.commit.required.acks=-1
> offsets.commit.timeout.ms=5000
> offsets.load.buffer.size=5242880
> offsets.retention.check.interval.ms=60
> offsets.retention.minutes=8640
> offsets.topic.compression.codec=0
> offsets.topic.num.partitions=50
> offsets.topic.replication.factor=3
> offsets.topic.segment.bytes=104857600
> principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
> producer.purgatory.purge.interval.requests=1
> queued.max.requests=500
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=1
> replica.fetch.wait.max.ms=500
> 

[jira] [Resolved] (KAFKA-4078) VIP for Kafka doesn't work

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4078.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> VIP for Kafka  doesn't work 
> 
>
> Key: KAFKA-4078
> URL: https://issues.apache.org/jira/browse/KAFKA-4078
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: chao
>
> We created a VIP in front of chao007kfk002.chao007.com:9092, 
> chao007kfk003.chao007.com:9092 and chao007kfk001.chao007.com:9092.
> But we found that the Kafka client API has some issues: the metadata 
> update the client requests returns all three brokers, so it creates three 
> connections, to 001, 002 and 003.
> When we change the VIP to chao008kfk002.chao008.com:9092, 
> chao008kfk003.chao008.com:9092 and chao008kfk001.chao008.com:9092,
> it still produces data to 007.
> The following is the log information:
> sasl.kerberos.ticket.renew.window.factor = 0.8
> bootstrap.servers = [kfk.chao.com:9092]
> client.id = 
> 2016-08-23 07:00:48,451:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:623) - Initialize connection to node -1 for sending 
> metadata request
> 2016-08-23 07:00:48,452:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:487) - Initiating connection to node -1 at 
> kfk.chao.com:9092.
> 2016-08-23 07:00:48,463:DEBUG kafka-producer-network-thread | producer-1 
> (Metrics.java:201) - Added sensor with name node--1.bytes-sent
>   
>   
> 2016-08-23 07:00:48,489:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:619) - Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1},
>  body={topics=[chao_vip]}), isInitiatedByNetworkClient, 
> createdTimeMs=1471935648465, sendTimeMs=0) to node -1
> 2016-08-23 07:00:48,512:DEBUG kafka-producer-network-thread | producer-1 
> (Metadata.java:172) - Updated cluster metadata version 2 to Cluster(nodes = 
> [Node(1, chao007kfk002.chao007.com, 9092), Node(2, chao007kfk003.chao007.com, 
> 9092), Node(0, chao007kfk001.chao007.com, 9092)], partitions = 
> [Partition(topic = chao_vip, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = chao_vip, partition = 3, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = chao_vip, partition = 2, leader = 2, 
> replicas = [2,], isr = [2,], Partition(topic = chao_vip, partition = 1, 
> leader = 1, replicas = [1,], isr = [1,], Partition(topic = chao_vip, 
> partition = 4, leader = 1, replicas = [1,], isr = [1,]])
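The log excerpt above shows the expected client behaviour: bootstrap.servers (here the VIP) is 
only used for the initial metadata request, after which the client connects directly to the 
broker hostnames advertised in the metadata response, so re-pointing the VIP alone does not move 
an already-running producer. A hedged Scala snippet of such a producer setup (hostnames and topic 
are the reporter's values, reused as placeholders):

{noformat}
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object VipProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // The VIP is only a bootstrap address; data connections go to the broker
    // hostnames returned in the metadata (the chao007kfk00x hosts above).
    props.put("bootstrap.servers", "kfk.chao.com:9092")
    props.put("key.serializer", classOf[StringSerializer].getName)
    props.put("value.serializer", classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    producer.send(new ProducerRecord[String, String]("chao_vip", "key", "value"))
    producer.close()
  }
}
{noformat}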



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3327) Warning from kafka mirror maker about ssl properties not valid

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3327.
--
Resolution: Cannot Reproduce

This is most likely a configuration issue. Please reopen if you think the issue still exists.


> Warning from kafka mirror maker about ssl properties not valid
> --
>
> Key: KAFKA-3327
> URL: https://issues.apache.org/jira/browse/KAFKA-3327
> Project: Kafka
>  Issue Type: Test
>  Components: config
>Affects Versions: 0.9.0.1
> Environment: CentOS release 6.5
>Reporter: Munir Khan
>Priority: Minor
>  Labels: kafka, mirror-maker, ssl
>
> I am trying to run MirrorMaker over SSL. I have configured my broker 
> following the procedure described in this document: 
> http://kafka.apache.org/documentation.html#security_overview
> I get the following warnings when I start the mirror maker:
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# bin/kafka-run-class.sh 
> kafka.tools.MirrorMaker --consumer.config 
> config/datapush-consumer-ssl.properties --producer.config 
> config/datapush-producer-ssl.properties --num.streams 2 --whitelist test1&
> [1] 4701
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# [2016-03-03 10:24:35,348] WARN 
> block.on.buffer.full config is deprecated and will be removed soon. Please 
> use max.block.ms (org.apache.kafka.clients.producer.KafkaProducer)
> [2016-03-03 10:24:35,523] WARN The configuration producer.type = sync was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration ssl.keypassword = test1234 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration compression.codec = none was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration serializer.class = 
> kafka.serializer.DefaultEncoder was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,617] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,753] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> However, the mirror maker is able to mirror data. If I remove the 
> configurations related to the warning messages from my producer, the mirror maker 
> does not work. So it seems that, despite the warnings shown above, the 
> SSL configuration properties are used somehow. 
> My question is: are those warnings harmless in this context?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5752) Delete topic and re-create topic immediate will delete the new topic's timeindex

2017-08-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5752:
---
Priority: Critical  (was: Major)

> Delete topic and re-create topic immediate will delete the new topic's 
> timeindex 
> -
>
> Key: KAFKA-5752
> URL: https://issues.apache.org/jira/browse/KAFKA-5752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Pengwei
>Priority: Critical
>  Labels: reliability
> Fix For: 0.11.0.1, 1.0.0
>
>
> When we delete a topic and immediately re-create a topic with the same name, we 
> find that once the async topic deletion finishes, it removes the newly created 
> topic's time index.
> This is because LogManager's asyncDelete changes the log and index file pointers 
> to the renamed (to-be-deleted) files but misses the time index, which causes 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5752) Delete topic and re-create topic immediate will delete the new topic's timeindex

2017-08-19 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5752:
---
Labels: reliability  (was: )

> Delete topic and re-create topic immediate will delete the new topic's 
> timeindex 
> -
>
> Key: KAFKA-5752
> URL: https://issues.apache.org/jira/browse/KAFKA-5752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Pengwei
>  Labels: reliability
> Fix For: 0.11.0.1, 1.0.0
>
>
> When we delete a topic and immediately re-create a topic with the same name, we 
> find that once the async topic deletion finishes, it removes the newly created 
> topic's time index.
> This is because LogManager's asyncDelete changes the log and index file pointers 
> to the renamed (to-be-deleted) files but misses the time index, which causes 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5752) Delete topic and re-create topic immediate will delete the new topic's timeindex

2017-08-19 Thread Pengwei (JIRA)
Pengwei created KAFKA-5752:
--

 Summary: Delete topic and re-create topic immediate will delete 
the new topic's timeindex 
 Key: KAFKA-5752
 URL: https://issues.apache.org/jira/browse/KAFKA-5752
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0, 0.10.2.0
Reporter: Pengwei


When we delete a topic and immediately re-create a topic with the same name, we 
find that once the async topic deletion finishes, it removes the newly created 
topic's time index.


This is because LogManager's asyncDelete changes the log and index file pointers 
to the renamed (to-be-deleted) files but misses the time index, which causes 
this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3322) recurring errors

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3322.
--
Resolution: Won't Fix

Please reopen if you think the issue still exists.


> recurring errors
> 
>
> Key: KAFKA-3322
> URL: https://issues.apache.org/jira/browse/KAFKA-3322
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: kafka0.9.0 and zookeeper 3.4.6
>Reporter: jackie
>
> We're getting hundreds of these errors with Kafka 0.8, and topics become 
> unavailable after running for a few days. It looks like 
> https://issues.apache.org/jira/browse/KAFKA-1314



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4501) Support Java 9

2017-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134010#comment-16134010
 ] 

ASF GitHub Bot commented on KAFKA-4501:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3647


> Support Java 9
> --
>
> Key: KAFKA-4501
> URL: https://issues.apache.org/jira/browse/KAFKA-4501
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 1.0.0
>
>
> Java 9 is scheduled to be released in July 2017. We should support it.
> The new module system enforces access control and things like `setAccessible` 
> cannot, by default, be used to circumvent access control in other modules. 
> There are command-line flags available to disable the behaviour on a 
> module-by-module basis.
> Right now, Gradle fails with the latest Java 9 snapshot and Scala 2.12.1 is 
> required if building with Java 9. So we are blocked until the Gradle issues 
> are fixed.
> I set the "Fix version" to 0.10.2.0, but it's likely to happen for the 
> release after that.
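A small Scala probe of the kind of reflective access the module system restricts; whether it 
merely warns or fails on JDK 9+ depends on the --illegal-access default and any --add-opens 
flags passed, and the field name is simply the usual internal one in java.lang.String.

{noformat}
object SetAccessibleProbe {
  def main(args: Array[String]): Unit = {
    // Deep reflection into java.base: fine on JDK 8, but on JDK 9+ it is
    // subject to module access checks (a warning or InaccessibleObjectException
    // unless something like --add-opens java.base/java.lang=ALL-UNNAMED is used).
    val field = classOf[String].getDeclaredField("value")
    field.setAccessible(true)
    println("setAccessible succeeded")
  }
}
{noformat}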



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Colin P. McCabe  (was: Manikumar)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run a nonexistent program, and 
> for running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134008#comment-16134008
 ] 

Manikumar commented on KAFKA-5744:
--

ShellTest.testRunProgramWithErrorReturn is failing on my machine.
cc [~cmccabe]

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.kafka.common.utils.ShellTest.testRunProgramWithErrorReturn(ShellTest.java:70)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run a nonexistent program, and 
> for running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Manikumar  (was: Colin P. McCabe)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run a nonexistent program, and 
> for running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5737) KafkaAdminClient thread should be daemon

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5737.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   0.11.0.1

> KafkaAdminClient thread should be daemon
> 
>
> Key: KAFKA-5737
> URL: https://issues.apache.org/jira/browse/KAFKA-5737
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1, 1.0.0
>
>
> The admin client thread should be a daemon thread, for consistency with the consumer 
> and producer threads.
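An illustrative Scala fragment of the change's intent (not the actual KafkaAdminClient code): 
the client's internal network thread is marked as a daemon so that, like the producer and 
consumer I/O threads, it does not keep the JVM alive once the application's own threads exit.

{noformat}
object DaemonThreadSketch {
  def main(args: Array[String]): Unit = {
    val adminNetworkThread = new Thread(new Runnable {
      override def run(): Unit = {
        // stand-in for the admin client's poll loop servicing pending futures
      }
    }, "kafka-admin-client-thread")
    adminNetworkThread.setDaemon(true) // the behaviour change made for this issue
    adminNetworkThread.start()
  }
}
{noformat}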



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)