[jira] [Commented] (KAFKA-5751) Kafka cannot start; corrupted index file(s)

2017-08-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16133441#comment-16133441
 ] 

Manikumar commented on KAFKA-5751:
--

Duplicate of KAFKA-5747

> Kafka cannot start; corrupted index file(s)
> ---
>
> Key: KAFKA-5751
> URL: https://issues.apache.org/jira/browse/KAFKA-5751
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.11.0.0
> Environment: Linux (RedHat 7)
>Reporter: Martin M
>Priority: Critical
>
> A system was running Kafka 0.11.0 and some applications that produce and 
> consume events.
> During the runtime, a power outage was experienced. Upon restart, Kafka did 
> not recover.
> The logs repeatedly show the messages below:
> *server.log*
> {noformat}
> [2017-08-15 15:02:26,374] FATAL [Kafka Server 1001], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
>   at 
> kafka.log.ProducerStateManager$.readSnapshot(ProducerStateManager.scala:289)
>   at 
> kafka.log.ProducerStateManager.loadFromSnapshot(ProducerStateManager.scala:440)
>   at 
> kafka.log.ProducerStateManager.truncateAndReload(ProducerStateManager.scala:499)
>   at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:327)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:314)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at kafka.log.Log.loadSegmentFiles(Log.scala:272)
>   at kafka.log.Log.loadSegments(Log.scala:376)
>   at kafka.log.Log.<init>(Log.scala:179)
>   at kafka.log.Log$.apply(Log.scala:1580)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> *kafkaServer.out*
> {noformat}
> [2017-08-15 16:03:50,927] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index)
>  has non-zero size but the last offset is 0 which is no larger than the base 
> offset 0.}. deleting 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.timeindex,
>  
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index,
>  and 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.txnindex
>  and rebuilding index... (kafka.log.Log)
> [2017-08-15 16:03:50,931] INFO [Kafka Server 1001], shutting down 
> (kafka.server.KafkaServer)
> [2017-08-15 16:03:50,932] INFO Recovering unflushed segment 0 in log 
> session-manager.revoke_token_topic-7. (kafka.log.Log)
> [2017-08-15 16:03:50,935] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-08-15 16:03:50,936] INFO Loading producer state from offset 0 for 
> partition session-manager.revoke_token_topic-7 with message format version 2 
> (kafka.log.Log)
> [2017-08-15 16:03:50,937] INFO Completed load of log 
> session-manager.revoke_token_topic-7 with 1 log segments, log start offset 0 
> and log end offset 0 in 10 ms (kafka.log.Log)
> [2017-08-15 16:03:50,938] INFO Session: 0x1000f772d26063b closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-08-15 16:03:50,938] INFO EventThread shut down for session: 
> 0x1000f772d26063b (org.apache.zookeeper.ClientCnxn)
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5750) Elevate log messages for denials to WARN in SimpleAclAuthorizer class

2017-08-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5750:


Assignee: Manikumar

> Elevate log messages for denials to WARN in SimpleAclAuthorizer class
> -
>
> Key: KAFKA-5750
> URL: https://issues.apache.org/jira/browse/KAFKA-5750
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Phillip Walker
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Currently, the authorizer logs all messages at DEBUG level and logs every 
> single authorization attempt, which can greatly decrease cluster performance, 
> especially when MirrorMaker also produces to that cluster. Many InfoSec 
> requirements, though, require that authorization denials be logged. The 
> proposed solution is to elevate any denial in SimpleAclAuthorizer and any 
> other relevant class to WARN while leaving approvals at their current 
> logging levels.
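> For illustration only, a minimal Java sketch of the proposed pattern; the 
> class and method names are assumptions for the sketch, not the actual 
> SimpleAclAuthorizer code (which is Scala):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> // Sketch: log denials at WARN and approvals at DEBUG, so denial records
> // survive a production log config that filters out DEBUG for performance.
> public final class AuthorizationLoggingSketch {
>     private static final Logger LOG = LoggerFactory.getLogger(AuthorizationLoggingSketch.class);
> 
>     public static boolean logged(boolean authorized, String principal,
>                                  String operation, String resource) {
>         if (authorized)
>             LOG.debug("Principal {} ALLOWED {} on {}", principal, operation, resource);
>         else
>             LOG.warn("Principal {} DENIED {} on {}", principal, operation, resource);
>         return authorized;
>     }
> }
> {code}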



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5737) KafkaAdminClient thread should be daemon

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5737.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   0.11.0.1

> KafkaAdminClient thread should be daemon
> 
>
> Key: KAFKA-5737
> URL: https://issues.apache.org/jira/browse/KAFKA-5737
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1, 1.0.0
>
>
> The admin client thread should be daemon, for consistency with the consumer 
> and producer threads.
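> A daemon thread does not prevent JVM shutdown. As a plain-Java sketch of the 
> pattern (the thread name and runnable are illustrative, not the actual 
> KafkaAdminClient internals):
> {code:java}
> public class DaemonThreadSketch {
>     public static void main(String[] args) {
>         // Sketch: mark the long-running client I/O thread as daemon so it
>         // does not keep the JVM alive after main() returns.
>         Runnable ioLoop = () -> { /* poll the network, handle responses */ };
>         Thread thread = new Thread(ioLoop, "kafka-admin-client-thread");
>         thread.setDaemon(true); // consistent with producer/consumer network threads
>         thread.start();
>     }
> }
> {code}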



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Manikumar  (was: Colin P. McCabe)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run a nonexistent program, and 
> for running a program which returns an error.
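> As a rough sketch of what such tests might look like, assuming Kafka's 
> {{Shell.execCommand}} utility and JUnit 4 (the real ShellTest's assertions 
> may differ):
> {code:java}
> import static org.junit.Assert.assertTrue;
> import static org.junit.Assert.fail;
> 
> import java.io.IOException;
> import org.apache.kafka.common.utils.Shell;
> import org.junit.Test;
> 
> public class ShellSketchTest {
>     @Test
>     public void testRunNonexistentProgram() {
>         try {
>             Shell.execCommand("/bin/this-program-does-not-exist");
>             fail("expected an IOException for a nonexistent program");
>         } catch (IOException e) {
>             // expected: the exec fails before any program output
>         }
>     }
> 
>     @Test
>     public void testRunProgramWithErrorReturn() {
>         try {
>             Shell.execCommand("/bin/false"); // exits with a non-zero status
>             fail("expected an IOException for a non-zero exit code");
>         } catch (IOException e) {
>             assertTrue(e.getMessage() != null);
>         }
>     }
> }
> {code}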



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134008#comment-16134008
 ] 

Manikumar commented on KAFKA-5744:
--

ShellTest.testRunProgramWithErrorReturn is failing on my machine.
cc [~cmccabe] 

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.kafka.common.utils.ShellTest.testRunProgramWithErrorReturn(ShellTest.java:70)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run a nonexistent program, and 
> for running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Colin P. McCabe  (was: Manikumar)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run a nonexistent program, and 
> for running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3322) recurring errors

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3322.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists


> recurring errors
> 
>
> Key: KAFKA-3322
> URL: https://issues.apache.org/jira/browse/KAFKA-3322
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: kafka0.9.0 and zookeeper 3.4.6
>Reporter: jackie
>
> We're getting hundreds of these errors with Kafka 0.8, and topics become 
> unavailable after running for a few days. It looks like 
> https://issues.apache.org/jira/browse/KAFKA-1314



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3327) Warning from kafka mirror maker about ssl properties not valid

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3327.
--
Resolution: Cannot Reproduce

Most likely a config issue. Please reopen if you think the issue still exists


> Warning from kafka mirror maker about ssl properties not valid
> --
>
> Key: KAFKA-3327
> URL: https://issues.apache.org/jira/browse/KAFKA-3327
> Project: Kafka
>  Issue Type: Test
>  Components: config
>Affects Versions: 0.9.0.1
> Environment: CentOS release 6.5
>Reporter: Munir Khan
>Priority: Minor
>  Labels: kafka, mirror-maker, ssl
>
> I am trying to run MirrorMaker over SSL. I have configured my broker 
> following the procedure described in this document: 
> http://kafka.apache.org/documentation.html#security_overview 
> I get the following warning when I start MirrorMaker:
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# bin/kafka-run-class.sh 
> kafka.tools.MirrorMaker --consumer.config 
> config/datapush-consumer-ssl.properties --producer.config 
> config/datapush-producer-ssl.properties --num.streams 2 --whitelist test1&
> [1] 4701
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# [2016-03-03 10:24:35,348] WARN 
> block.on.buffer.full config is deprecated and will be removed soon. Please 
> use max.block.ms (org.apache.kafka.clients.producer.KafkaProducer)
> [2016-03-03 10:24:35,523] WARN The configuration producer.type = sync was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration ssl.keypassword = test1234 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration compression.codec = none was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration serializer.class = 
> kafka.serializer.DefaultEncoder was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,617] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,753] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> However, MirrorMaker is able to mirror data. If I remove the configurations 
> related to the warning messages from my producer config, MirrorMaker does not 
> work. So it seems that, despite the warnings shown above, the ssl.* 
> configuration properties are used somehow. 
> My question is: are these warnings harmless in this context?
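> For context: these warnings come from the old Scala clients 
> ({{kafka.utils.VerifiableProperties}}), which do not understand SSL settings; 
> only the new Java clients do. A minimal sketch of the SSL-related properties 
> the new Java clients accept (paths and passwords are placeholders):
> {code:java}
> import java.util.Properties;
> 
> // Sketch: SSL-related settings understood by the new Java clients.
> // All values below are placeholders.
> public class SslClientConfigSketch {
>     public static Properties sslProps() {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "broker1:9093");
>         props.put("security.protocol", "SSL");
>         props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
>         props.put("ssl.truststore.password", "changeit");
>         props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
>         props.put("ssl.keystore.password", "changeit");
>         props.put("ssl.key.password", "changeit");
>         return props;
>     }
> }
> {code}
> Note that the new clients spell the key {{ssl.key.password}}, not 
> {{ssl.keypassword}} as in the quoted output.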



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3413) Load Error Message should be a Warning

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3413.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists


> Load Error Message should be a Warning
> --
>
> Key: KAFKA-3413
> URL: https://issues.apache.org/jira/browse/KAFKA-3413
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, offset manager
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Scott Reynolds
>Assignee: Neha Narkhede
>
> There is an error message from AbstractReplicaFetcherThread that isn't really 
> an error.
> Each implementation of this thread logs out when an error or fatal error 
> occurs.
> ReplicaFetcherThread has warn, error and fatal in the handleOffsetOutOfRange 
> method.
> ConsumerFetcherThread seems to reset itself without logging an error.
> It seems that the reset message shouldn't be at error level, as it doesn't 
> indicate any real error.
> This patch makes it a warning: 
> https://github.com/apache/kafka/compare/trunk...SupermanScott:offset-reset-to-warn?diff=split&name=offset-reset-to-warn



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4078) VIP for Kafka doesn't work

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4078.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists


> VIP for Kafka  doesn't work 
> 
>
> Key: KAFKA-4078
> URL: https://issues.apache.org/jira/browse/KAFKA-4078
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: chao
>
> We created a VIP for chao007kfk002.chao007.com:9092, 
> chao007kfk003.chao007.com:9092 and chao007kfk001.chao007.com:9092.
> But we found an issue with the Kafka client API: a client metadata update 
> returns the three brokers, so the client creates three connections, to 001, 
> 002 and 003.
> When we change the VIP to chao008kfk002.chao008.com:9092, 
> chao008kfk003.chao008.com:9092 and chao008kfk001.chao008.com:9092, 
> it still produces data to 007.
> The following is the log information:
> sasl.kerberos.ticket.renew.window.factor = 0.8
> bootstrap.servers = [kfk.chao.com:9092]
> client.id = 
> 2016-08-23 07:00:48,451:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:623) - Initialize connection to node -1 for sending 
> metadata request
> 2016-08-23 07:00:48,452:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:487) - Initiating connection to node -1 at 
> kfk.chao.com:9092.
> 2016-08-23 07:00:48,463:DEBUG kafka-producer-network-thread | producer-1 
> (Metrics.java:201) - Added sensor with name node--1.bytes-sent
>   
>   
> 2016-08-23 07:00:48,489:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:619) - Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1},
>  body={topics=[chao_vip]}), isInitiatedByNetworkClient, 
> createdTimeMs=1471935648465, sendTimeMs=0) to node -1
> 2016-08-23 07:00:48,512:DEBUG kafka-producer-network-thread | producer-1 
> (Metadata.java:172) - Updated cluster metadata version 2 to Cluster(nodes = 
> [Node(1, chao007kfk002.chao007.com, 9092), Node(2, chao007kfk003.chao007.com, 
> 9092), Node(0, chao007kfk001.chao007.com, 9092)], partitions = 
> [Partition(topic = chao_vip, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = chao_vip, partition = 3, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = chao_vip, partition = 2, leader = 2, 
> replicas = [2,], isr = [2,], Partition(topic = chao_vip, partition = 1, 
> leader = 1, replicas = [1,], isr = [1,], Partition(topic = chao_vip, 
> partition = 4, leader = 1, replicas = [1,], isr = [1,]])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3953) start kafka fail

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3953.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists


> start kafka fail
> 
>
> Key: KAFKA-3953
> URL: https://issues.apache.org/jira/browse/KAFKA-3953
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.2
> Environment: Linux host-172-28-0-3 3.10.0-327.18.2.el7.x86_64 #1 SMP 
> Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: ffh
>
> Kafka fails to start. Error message:
> [2016-07-12 03:57:32,717] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> [2016-07-12 03:57:33,124] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> config:
> # Generated by Apache Ambari. Tue Jul 12 03:18:02 2016
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> auto.create.topics.enable=true
> auto.leader.rebalance.enable=true
> broker.id=0
> compression.type=producer
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=3
> controlled.shutdown.retry.backoff.ms=5000
> controller.message.queue.size=10
> controller.socket.timeout.ms=3
> default.replication.factor=1
> delete.topic.enable=false
> external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
> external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
> fetch.purgatory.purge.interval.requests=1
> kafka.ganglia.metrics.group=kafka
> kafka.ganglia.metrics.host=localhost
> kafka.ganglia.metrics.port=8671
> kafka.ganglia.metrics.reporter.enabled=true
> kafka.metrics.reporters=
> kafka.timeline.metrics.host=
> kafka.timeline.metrics.maxRowCacheSize=1
> kafka.timeline.metrics.port=
> kafka.timeline.metrics.reporter.enabled=true
> kafka.timeline.metrics.reporter.sendInterval=5900
> leader.imbalance.check.interval.seconds=300
> leader.imbalance.per.broker.percentage=10
> listeners=PLAINTEXT://host-172-28-0-3:6667
> log.cleanup.interval.mins=10
> log.dirs=/kafka-logs
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.bytes=-1
> log.retention.hours=168
> log.roll.hours=168
> log.segment.bytes=1073741824
> message.max.bytes=100
> min.insync.replicas=1
> num.io.threads=8
> num.network.threads=3
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> num.replica.fetchers=1
> offset.metadata.max.bytes=4096
> offsets.commit.required.acks=-1
> offsets.commit.timeout.ms=5000
> offsets.load.buffer.size=5242880
> offsets.retention.check.interval.ms=60
> offsets.retention.minutes=8640
> offsets.topic.compression.codec=0
> offsets.topic.num.partitions=50
> offsets.topic.replication.factor=3
> offsets.topic.segment.bytes=104857600
> principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
> producer.purgatory.purge.interval.requests=1
> queued.max.requests=500
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=1
> replica.fetch.wait.max.ms=500
> replica.high.watermark.c

[jira] [Resolved] (KAFKA-3951) kafka.common.KafkaStorageException: I/O exception in append to log

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3951.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists


> kafka.common.KafkaStorageException: I/O exception in append to log
> --
>
> Key: KAFKA-3951
> URL: https://issues.apache.org/jira/browse/KAFKA-3951
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.1
>Reporter: wanzi.zhao
> Attachments: server-1.properties, server.properties
>
>
> I have two brokers on the same server using two ports, 10.45.33.195:9092 and 
> 10.45.33.195:9093. They use two log directories, "log.dirs=/tmp/kafka-logs" 
> and "log.dirs=/tmp/kafka-logs-1". When I shut down my consumer application 
> (Java API), change its groupId and restart it, my Kafka brokers stop working. 
> This is the stack trace I get:
> [2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading 
> offsets and group metadata from [__consumer_offsets,0] 
> (kafka.coordinator.GroupMetadataManager)
> [2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to 
> unrecoverable I/O error while handling produce request:  
> (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-38'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
> at 
> kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs/__consumer_offsets-38/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1495) Kafka Example SimpleConsumerDemo

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1495.
--
Resolution: Won't Fix

> Kafka Example SimpleConsumerDemo 
> -
>
> Key: KAFKA-1495
> URL: https://issues.apache.org/jira/browse/KAFKA-1495
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
> Environment: Mac OS
>Reporter: darion yaphet
>Assignee: Jun Rao
>
> I ran the official SimpleConsumerDemo under 
> kafka-0.8.1.1-src/examples/src/main/java/kafka/examples on my 
> machine. I found that /tmp/kafka-logs has two directories, topic2-0 and 
> topic2-1, and 
> one is empty:
> ➜  kafka-logs  ls -lF  topic2-0  topic2-1
> topic2-0:
> total 21752
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel    651109  6 17 18:44 .log
> topic2-1:
> total 20480
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel 0  6 17 17:34 .log 
> Is it a bug, or is there something that should be configured in the source 
> code? Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2053) Make initZk a protected function

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2053.
--
Resolution: Won't Fix

 Please reopen if you think the requirement still exists

> Make initZk a protected function
> 
>
> Key: KAFKA-2053
> URL: https://issues.apache.org/jira/browse/KAFKA-2053
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Christian Kampka
>Priority: Minor
> Attachments: make-initzk-protected
>
>
> In our environment, we have established an external procedure to notify 
> clients of changes in the ZooKeeper cluster configuration, especially the 
> appearance and disappearance of nodes. It has also become quite common to run 
> Kafka as an embedded service (especially in tests).
> When doing so, it would make things easier if it were possible to manipulate 
> the creation of the ZooKeeper client, to supply Kafka with a specialized 
> ZooKeeper client that is adjusted to our needs but of course API compatible 
> with the ZkClient.
> Therefore, I would like to propose making the initZk method protected so we 
> will be able to simply override it for client creation. 
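> The request is for an ordinary protected factory method. A Java-flavored 
> sketch of the pattern (KafkaServer is Scala; all names here are 
> illustrative):
> {code:java}
> interface ZkClientLike { void connect(); }
> 
> class DefaultZkClient implements ZkClientLike {
>     public void connect() { /* real ZooKeeper connection */ }
> }
> 
> class NotifyingZkClient implements ZkClientLike {
>     public void connect() { /* API-compatible client wired to external notifications */ }
> }
> 
> class ServerSketch {
>     // protected instead of private: subclasses may override client creation
>     protected ZkClientLike initZk() { return new DefaultZkClient(); }
> 
>     public void startup() { initZk().connect(); }
> }
> 
> class EmbeddedTestServer extends ServerSketch {
>     @Override
>     protected ZkClientLike initZk() { return new NotifyingZkClient(); }
> }
> {code}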



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2093) Remove logging error if we throw exception

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2093.
--
Resolution: Won't Fix

The Scala producer is deprecated. Please reopen if you think the issue still exists


> Remove logging error if we throw exception
> --
>
> Key: KAFKA-2093
> URL: https://issues.apache.org/jira/browse/KAFKA-2093
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ivan Balashov
>Priority: Trivial
>
> On failure, the Kafka producer logs an error AND throws an exception. This 
> can pose problems, since the client application cannot flexibly control 
> whether a particular exception should be logged, and logging becomes an 
> all-or-nothing choice for a particular logger.
> We should remove the error logging wherever we decide to throw an exception.
> Some examples of this:
> kafka.client.ClientUtils$:89
> kafka.producer.SyncProducer:103
> If no one has objections, I can search around for other cases of logging + 
> throwing which should also be fixed.
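> As a generic sketch of the anti-pattern and the proposed fix (class and 
> method names are illustrative, not the actual Scala call sites):
> {code:java}
> import java.io.IOException;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> class SenderSketch {
>     private static final Logger LOG = LoggerFactory.getLogger(SenderSketch.class);
> 
>     // Anti-pattern: log AND throw; every caller gets a duplicate log line
>     // it cannot suppress.
>     void sendLogAndThrow() throws IOException {
>         try {
>             doSend();
>         } catch (IOException e) {
>             LOG.error("send failed", e);
>             throw e;
>         }
>     }
> 
>     // Proposed: throw only; the caller owns the logging policy.
>     void send() throws IOException {
>         doSend();
>     }
> 
>     private void doSend() throws IOException { /* network I/O */ }
> }
> {code}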



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2296) Not able to delete topic on latest kafka

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2296.
--
Resolution: Duplicate

> Not able to delete topic on latest kafka
> 
>
> Key: KAFKA-2296
> URL: https://issues.apache.org/jira/browse/KAFKA-2296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Andrew M
>
> Was able to reproduce the [inability to delete a 
> topic|https://issues.apache.org/jira/browse/KAFKA-1397?focusedCommentId=14491442&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14491442]
>  on a running cluster with Kafka 0.8.2.1.
> The cluster consists of 2 c3.xlarge AWS instances with sufficient storage 
> attached. All communication between nodes goes through an AWS VPC.
> Some warnings from the logs:
> {noformat}[Controller-1234-to-broker-4321-send-thread], Controller 1234 epoch 
> 20 fails to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:1234;ControllerEpoch:20;CorrelationId:24047;ClientId:id_1234-host_1.2.3.4-port_6667;AliveBrokers:id:1234,host:1.2.3.4,port:6667,id:4321,host:4.3.2.1,port:6667;PartitionState:[topic_name,45]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,27]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,17]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,49]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,7]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,26]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,62]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,18]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,36]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,29]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,53]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,52]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,2]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,12]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,33]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,14]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,63]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,30]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,6]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,28]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,38]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,24]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,31]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,4]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,20]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,54]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,432

[jira] [Resolved] (KAFKA-2231) Deleting a topic fails

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2231.
--
Resolution: Cannot Reproduce

Topic deletion is more stable in the latest releases. Please reopen if you 
think the issue still exists

> Deleting a topic fails
> --
>
> Key: KAFKA-2231
> URL: https://issues.apache.org/jira/browse/KAFKA-2231
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: Windows 8.1
>Reporter: James G. Haberly
>Priority: Minor
>
> delete.topic.enable=true is in config\server.properties.
> Using --list shows the topic "marked for deletion".
> Stopping and restarting kafka and zookeeper does not delete the topic; it 
> remains "marked for deletion".
> Trying to recreate the topic fails with "Topic XXX already exists".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2289) KafkaProducer logs erroneous warning on startup

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2289.
--
Resolution: Fixed

This has been fixed.

> KafkaProducer logs erroneous warning on startup
> ---
>
> Key: KAFKA-2289
> URL: https://issues.apache.org/jira/browse/KAFKA-2289
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Henning Schmiedehausen
>Priority: Trivial
>
> When creating a new KafkaProducer using the 
> KafkaProducer(KafkaConfig, Serializer, Serializer) constructor, Kafka 
> will log the following lines, which are harmless but are still at WARN level:
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> value.serializer = class  was supplied but isn't a known config.
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> key.serializer = class  was supplied but isn't a known config.
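> For reference, a sketch against the current Java client API: the serializers 
> are passed as instances, so no key.serializer/value.serializer entries are 
> needed in the config (the broker address is a placeholder):
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.clients.producer.Producer;
> import org.apache.kafka.common.serialization.StringSerializer;
> 
> public class ProducerCtorSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092"); // placeholder
> 
>         // On the affected versions this constructor logged the harmless
>         // WARNs quoted above, even though the serializers are supplied
>         // directly as instances.
>         Producer<String, String> producer =
>                 new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
>         producer.close();
>     }
> }
> {code}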



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5714:


Assignee: (was: Manikumar)

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}
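> One conceivable approach, purely illustrative and not the actual parser code, 
> is to normalize whitespace after the commas separating RDNs before comparing 
> principals:
> {code:java}
> public class PrincipalNormalizeSketch {
>     // Purely illustrative: collapse whitespace after the commas separating
>     // RDNs so "CN=Unknown, OU=Unknown" compares equal to "CN=Unknown,OU=Unknown".
>     static String normalizeDn(String dn) {
>         return dn.replaceAll(",\\s+", ",").trim();
>     }
> 
>     public static void main(String[] args) {
>         String configured =
>             "User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown";
>         String fromSession =
>             "User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";
>         System.out.println(normalizeDn(configured).equals(normalizeDn(fromSession))); // true
>     }
> }
> {code}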



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5751) Kafka cannot start; corrupted index file(s)

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5751.
--
Resolution: Duplicate

> Kafka cannot start; corrupted index file(s)
> ---
>
> Key: KAFKA-5751
> URL: https://issues.apache.org/jira/browse/KAFKA-5751
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.11.0.0
> Environment: Linux (RedHat 7)
>Reporter: Martin M
>Priority: Critical
>
> A system was running Kafka 0.11.0 and some applications that produce and 
> consume events.
> During the runtime, a power outage was experienced. Upon restart, Kafka did 
> not recover.
> The logs repeatedly show the messages below:
> *server.log*
> {noformat}
> [2017-08-15 15:02:26,374] FATAL [Kafka Server 1001], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
>   at 
> kafka.log.ProducerStateManager$.readSnapshot(ProducerStateManager.scala:289)
>   at 
> kafka.log.ProducerStateManager.loadFromSnapshot(ProducerStateManager.scala:440)
>   at 
> kafka.log.ProducerStateManager.truncateAndReload(ProducerStateManager.scala:499)
>   at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:327)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:314)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at kafka.log.Log.loadSegmentFiles(Log.scala:272)
>   at kafka.log.Log.loadSegments(Log.scala:376)
>   at kafka.log.Log.<init>(Log.scala:179)
>   at kafka.log.Log$.apply(Log.scala:1580)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> *kafkaServer.out*
> {noformat}
> [2017-08-15 16:03:50,927] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index)
>  has non-zero size but the last offset is 0 which is no larger than the base 
> offset 0.}. deleting 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.timeindex,
>  
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index,
>  and 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.txnindex
>  and rebuilding index... (kafka.log.Log)
> [2017-08-15 16:03:50,931] INFO [Kafka Server 1001], shutting down 
> (kafka.server.KafkaServer)
> [2017-08-15 16:03:50,932] INFO Recovering unflushed segment 0 in log 
> session-manager.revoke_token_topic-7. (kafka.log.Log)
> [2017-08-15 16:03:50,935] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-08-15 16:03:50,936] INFO Loading producer state from offset 0 for 
> partition session-manager.revoke_token_topic-7 with message format version 2 
> (kafka.log.Log)
> [2017-08-15 16:03:50,937] INFO Completed load of log 
> session-manager.revoke_token_topic-7 with 1 log segments, log start offset 0 
> and log end offset 0 in 10 ms (kafka.log.Log)
> [2017-08-15 16:03:50,938] INFO Session: 0x1000f772d26063b closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-08-15 16:03:50,938] INFO EventThread shut down for session: 
> 0x1000f772d26063b (org.apache.zookeeper.ClientCnxn)
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5686) Documentation inconsistency on the "Compression"

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5686:


Assignee: Manikumar

> Documentation inconsistency on the "Compression"
> 
>
> Key: KAFKA-5686
> URL: https://issues.apache.org/jira/browse/KAFKA-5686
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.11.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>Assignee: Manikumar
>Priority: Minor
>
> At the page:
> https://kafka.apache.org/documentation/
> There is a sentence:
> {{Kafka supports GZIP, Snappy and LZ4 compression protocols. More details on 
> compression can be found here.}}
> In particular, the link under the word *here* describes very old compression 
> settings, which no longer apply as of version 0.11.x.y.
> JAVA API:
> Also, it would be nice to clearly state whether *compression.type* accepts 
> only a case-sensitive String as a value, or whether it is recommended to use 
> e.g. {{CompressionType.GZIP.name}} for the Java API.
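> For what it's worth, a minimal sketch of both spellings in the Java API; as 
> far as I can tell the enum constant resolves to the same lower-case string 
> the config validator expects:
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.clients.producer.ProducerConfig;
> import org.apache.kafka.common.record.CompressionType;
> 
> public class CompressionConfigSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         // Plain string value (lower case):
>         props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
>         // Equivalent, via the enum the ticket mentions:
>         props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, CompressionType.GZIP.name);
>         System.out.println(props);
>     }
> }
> {code}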



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4856) Calling KafkaProducer.close() from multiple threads may cause spurious error

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4856:


Assignee: Manikumar

> Calling KafkaProducer.close() from multiple threads may cause spurious error
> 
>
> Key: KAFKA-4856
> URL: https://issues.apache.org/jira/browse/KAFKA-4856
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.10.2.0
>Reporter: Xavier Léauté
>Assignee: Manikumar
>Priority: Minor
>
> Calling {{KafkaProducer.close()}} from multiple threads simultaneously may 
> cause the following harmless error message to be logged. There appears to be 
> a race-condition in {{AppInfoParser.unregisterAppInfo}} that we don't guard 
> against. 
> {noformat}
> WARN Error unregistering AppInfo mbean 
> (org.apache.kafka.common.utils.AppInfoParser:71)
> javax.management.InstanceNotFoundException: 
> kafka.producer:type=app-info,id=
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
> at 
> org.apache.kafka.common.utils.AppInfoParser.unregisterAppInfo(AppInfoParser.java:69)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:735)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:686)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:665)
> {noformat}
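> A sketch of one way to make the unregister race-safe; the guard shown is an 
> assumption for illustration, not the actual AppInfoParser fix:
> {code:java}
> import java.util.concurrent.atomic.AtomicBoolean;
> import javax.management.InstanceNotFoundException;
> import javax.management.MBeanRegistrationException;
> import javax.management.MBeanServer;
> import javax.management.ObjectName;
> 
> class AppInfoUnregisterSketch {
>     private final AtomicBoolean unregistered = new AtomicBoolean(false);
> 
>     // Illustrative guard: only the first of N racing close() calls performs
>     // the unregister, and a losing race inside the JMX server is tolerated.
>     void unregisterAppInfo(MBeanServer server, ObjectName name)
>             throws MBeanRegistrationException {
>         if (!unregistered.compareAndSet(false, true))
>             return; // another thread already did (or is doing) the work
>         try {
>             server.unregisterMBean(name);
>         } catch (InstanceNotFoundException e) {
>             // already gone; harmless, so don't log a warning
>         }
>     }
> }
> {code}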



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-2254) The shell script should be optimized , even kafka-run-class.sh has a syntax error.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-2254:


Assignee: Manikumar

> The shell script should be optimized , even kafka-run-class.sh has a syntax 
> error.
> --
>
> Key: KAFKA-2254
> URL: https://issues.apache.org/jira/browse/KAFKA-2254
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.1
> Environment: linux
>Reporter: Bo Wang
>Assignee: Manikumar
>  Labels: client-script, kafka-run-class.sh, shell-script
> Attachments: kafka-shell-script.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  kafka-run-class.sh line 128 has a syntax error, a missing space between 
> {{"$KAFKA_GC_LOG_OPTS"}} and {{]}}:
> 127-loggc)
> 128 if [ -z "$KAFKA_GC_LOG_OPTS"] ; then
> 129GC_LOG_ENABLED="true"
> 130 fi
> Also, checking the shell scripts with ShellCheck reports some errors, 
> warnings and notes:
> https://github.com/koalaman/shellcheck/wiki/SC2068
> https://github.com/koalaman/shellcheck/wiki/Sc2046
> https://github.com/koalaman/shellcheck/wiki/Sc2086



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5401) Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5401.
--
Resolution: Duplicate

> Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe
> -
>
> Key: KAFKA-5401
> URL: https://issues.apache.org/jira/browse/KAFKA-5401
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: SLES 11, Kafka over TLS 
>Reporter: PaVan
>  Labels: security
>
> SLES 11 
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
>  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:194)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:148)
>  at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
>  at org.apache.kafka.common.network.Selector.close(Selector.java:442)
>  at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
>  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>  at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5547) Return topic authorization failed if no topic describe access

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5547:


Assignee: Manikumar

> Return topic authorization failed if no topic describe access
> -
>
> Key: KAFKA-5547
> URL: https://issues.apache.org/jira/browse/KAFKA-5547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>  Labels: security, usability
> Fix For: 1.0.0
>
>
> We previously made a change to several of the request APIs to return 
> UNKNOWN_TOPIC_OR_PARTITION if the principal does not have Describe access to 
> the topic. The thought was to avoid leaking information about which topics 
> exist. The problem with this is that a client which sees this error will just 
> keep retrying because it is usually treated as retriable. It seems, however, 
> that we could return TOPIC_AUTHORIZATION_FAILED instead and still avoid 
> leaking information as long as we ensure that the Describe authorization 
> check comes before the topic existence check. This would avoid the ambiguity 
> on the client.
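> The key point is the ordering of the two checks. A self-contained sketch (the 
> error names match the protocol; every other name is illustrative):
> {code:java}
> class TopicAuthSketch {
>     enum Errors { NONE, TOPIC_AUTHORIZATION_FAILED, UNKNOWN_TOPIC_OR_PARTITION }
> 
>     interface Authorizer { boolean canDescribe(String principal, String topic); }
>     interface MetadataCache { boolean contains(String topic); }
> 
>     // Describe check first: the caller learns nothing about topic existence
>     // unless it already holds Describe, and an unauthorized client gets a
>     // non-retriable TOPIC_AUTHORIZATION_FAILED instead of retrying forever.
>     static Errors checkTopic(Authorizer auth, MetadataCache cache,
>                              String principal, String topic) {
>         if (!auth.canDescribe(principal, topic))
>             return Errors.TOPIC_AUTHORIZATION_FAILED;
>         if (!cache.contains(topic))
>             return Errors.UNKNOWN_TOPIC_OR_PARTITION;
>         return Errors.NONE;
>     }
> }
> {code}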



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4840) There are still cases where producer buffer pool will not remove waiters.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4840:


   Resolution: Fixed
 Assignee: Sean McCauliff
Fix Version/s: 0.11.0.0

> There are still cases where producer buffer pool will not remove waiters.
> -
>
> Key: KAFKA-4840
> URL: https://issues.apache.org/jira/browse/KAFKA-4840
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 0.11.0.0
>
>
> There are several problems dealing with errors in BufferPool.allocate(int 
> size, long maxTimeToBlockMs) (see the sketch after this list):
> * The accumulated number of bytes is not put back into the available pool 
> when an exception happens while a thread is waiting for bytes to become 
> available. This causes the capacity of the buffer pool to decrease over 
> time whenever a timeout is hit within this method.
> * If a Throwable other than InterruptedException is thrown out of await() for 
> some reason, or if an exception is thrown in the corresponding finally 
> block around the await(), for example if waitTime.record(.) throws an 
> exception, then the waiters are not removed from the waiters deque.
> * On timeout or other exception, waiters could be signalled, but are not. If 
> no other buffers are freed, then the next waiting thread will also time out, 
> and so on.
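> As a condensed sketch of the remedial pattern, using a deliberately 
> simplified pool (not the real BufferPool):
> {code:java}
> import java.util.ArrayDeque;
> import java.util.Deque;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.TimeoutException;
> import java.util.concurrent.locks.Condition;
> import java.util.concurrent.locks.ReentrantLock;
> 
> // Simplified pool sketch: the finally block guarantees that, whatever
> // await() throws, the waiter is removed, any partial accumulation is
> // returned to the pool, and the next waiter is signalled.
> class BufferPoolSketch {
>     private final ReentrantLock lock = new ReentrantLock();
>     private final Deque<Condition> waiters = new ArrayDeque<>();
>     private long available;
> 
>     BufferPoolSketch(long capacity) { this.available = capacity; }
> 
>     long allocate(long size, long timeoutMs)
>             throws InterruptedException, TimeoutException {
>         lock.lock();
>         long accumulated = 0;
>         Condition me = lock.newCondition();
>         waiters.addLast(me);
>         try {
>             while (true) {
>                 long grab = Math.min(available, size - accumulated);
>                 available -= grab;
>                 accumulated += grab;
>                 if (accumulated == size)
>                     break;
>                 if (!me.await(timeoutMs, TimeUnit.MILLISECONDS))
>                     throw new TimeoutException("allocation timed out");
>             }
>             long granted = accumulated;
>             accumulated = 0;          // ownership passes to the caller
>             return granted;
>         } finally {
>             available += accumulated; // give back any partial grab
>             waiters.remove(me);
>             Condition next = waiters.peekFirst();
>             if (next != null && available > 0)
>                 next.signal();        // don't strand the next waiter
>             lock.unlock();
>         }
>     }
> 
>     void release(long size) {
>         lock.lock();
>         try {
>             available += size;
>             Condition next = waiters.peekFirst();
>             if (next != null)
>                 next.signal();
>         } finally {
>             lock.unlock();
>         }
>     }
> }
> {code}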



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5239) Producer buffer pool allocates memory inside a lock.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5239.
--
   Resolution: Fixed
 Assignee: Sean McCauliff
Fix Version/s: 1.0.0

> Producer buffer pool allocates memory inside a lock.
> 
>
> Key: KAFKA-5239
> URL: https://issues.apache.org/jira/browse/KAFKA-5239
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 1.0.0
>
>
> KAFKA-4840 placed the ByteBuffer allocation inside the critical section.  
> Previously byte buffer allocation happened outside of the critical section.
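> Schematically, the regression is about where the allocation call sits 
> relative to the lock (illustrative, not the actual BufferPool code):
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.concurrent.locks.ReentrantLock;
> 
> class AllocPlacementSketch {
>     private final ReentrantLock lock = new ReentrantLock();
>     private long available = 1 << 20;
> 
>     // Regression shape: ByteBuffer.allocate() (which can be slow and may
>     // trigger GC) runs while every other allocator is blocked on the lock.
>     ByteBuffer allocateInsideLock(int size) {
>         lock.lock();
>         try {
>             available -= size;                 // accounting
>             return ByteBuffer.allocate(size);  // heavy work under the lock
>         } finally {
>             lock.unlock();
>         }
>     }
> 
>     // Prior shape: hold the lock only for the accounting, allocate after.
>     ByteBuffer allocateOutsideLock(int size) {
>         lock.lock();
>         try {
>             available -= size;                 // accounting only
>         } finally {
>             lock.unlock();
>         }
>         return ByteBuffer.allocate(size);      // heavy work without the lock
>     }
> }
> {code}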



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5239) Producer buffer pool allocates memory inside a lock.

2017-08-22 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137163#comment-16137163
 ] 

Manikumar commented on KAFKA-5239:
--

[~ijuma]  Should this be in 0.11.0.1?  KAFKA-4840 was released in 0.11.0.0

> Producer buffer pool allocates memory inside a lock.
> 
>
> Key: KAFKA-5239
> URL: https://issues.apache.org/jira/browse/KAFKA-5239
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 1.0.0
>
>
> KAFKA-4840 placed the ByteBuffer allocation inside the critical section.  
> Previously byte buffer allocation happened outside of the critical section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4823) Creating Kafka Producer on application running on Java older version

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4823.
--
Resolution: Won't Fix

Kafka dropped support for Java 1.6 from the 0.9 release. You can try the REST 
Proxy or other language libraries. Please reopen if you think otherwise

> Creating Kafka Producer on application running on Java older version
> 
>
> Key: KAFKA-4823
> URL: https://issues.apache.org/jira/browse/KAFKA-4823
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
> Environment: Application running on Java 1.6
>Reporter: live2code
>
> I have an application running on Java 1.6 which cannot be upgraded. This 
> application needs interfaces to post messages (as a producer) to a Kafka 
> server on a remote box, and also to receive messages as a consumer. The code 
> runs fine from my local environment, which is on Java 1.7, but the same 
> producer and consumer fail when executed within the application, with the 
> error:
> Caused by: java.lang.UnsupportedClassVersionError: JVMCFRE003 bad major 
> version; class=org/apache/kafka/clients/producer/KafkaProducer, offset=6
> Is there some way I can still do it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3927) kafka broker config docs issue

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3927.
--
Resolution: Later

Yes, these changes were done in KAFKA-615. Please reopen if the issue still 
exists. 


> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values look incorrect. How do these docs get 
> generated? It looks confusing.
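For what it's worth, these values are not typos: the broker config table is generated from the config definitions in the code, and 9223372036854775807 is simply Long.MAX_VALUE, used as an "effectively never" sentinel. A null default generally means the setting falls back to another property (log.flush.interval.ms, for example, falls back to log.flush.scheduler.interval.ms). A one-line check:

{code:java}
public class DefaultCheck {
    public static void main(String[] args) {
        // The "suspicious" default is just the largest 64-bit signed value.
        System.out.println(Long.MAX_VALUE); // prints 9223372036854775807
    }
}
{code}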



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3800) java client can`t poll msg

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3800.
--
Resolution: Cannot Reproduce

 Please reopen if the issue still exists. 


> java client can`t poll msg
> --
>
> Key: KAFKA-3800
> URL: https://issues.apache.org/jira/browse/KAFKA-3800
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: java8,win7 64
>Reporter: frank
>Assignee: Neha Narkhede
>
> I use a mixed-case (camel-case) topic name, e.g. Test_4; after poll, the msg 
> is null. Why? All-lowercase topic names are OK. I also tried Node.js, and 
> kafka-console-consumer.bat is OK.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3653) expose the queue size in ControllerChannelManager

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3653.
--
Resolution: Fixed

Fixed in KAFKA-5135/KIP-143

> expose the queue size in ControllerChannelManager
> -
>
> Key: KAFKA-3653
> URL: https://issues.apache.org/jira/browse/KAFKA-3653
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Gwen Shapira
>
> Currently, ControllerChannelManager maintains a queue per broker. If the 
> queue fills up, metadata propagation to the broker is delayed. It would be 
> useful to expose a metric on the size of the queue for monitoring.
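A queue-depth gauge of the kind requested here is cheap to register. A minimal sketch, using the Dropwizard Metrics API for brevity with made-up metric names (Kafka core itself registers its gauges through the Yammer metrics library):

{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSizeMetric {
    public static void main(String[] args) {
        final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
        MetricRegistry registry = new MetricRegistry();
        // Expose the current queue depth so a monitoring reporter can
        // alert when metadata propagation starts to back up.
        registry.register("controller.channel.queue.size", new Gauge<Integer>() {
            public Integer getValue() {
                return queue.size();
            }
        });
    }
}
{code}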



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3473) Add controller channel manager request queue time metric.

2017-08-22 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137281#comment-16137281
 ] 

Manikumar commented on KAFKA-3473:
--

@ijuma Is this covered in KAFKA-5135/KIP-143?

> Add controller channel manager request queue time metric.
> -
>
> Key: KAFKA-3473
> URL: https://issues.apache.org/jira/browse/KAFKA-3473
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
>
> Currently controller appends the requests to brokers into controller channel 
> manager queue during state transition, i.e. the state transitions are 
> propagated asynchronously. We need to track the request queue time on the 
> controller side to see how long the state propagation is delayed after the 
> state transition finished on the controller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3139) JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3139.
--
Resolution: Fixed

Fixed in KAFKA-4252

> JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.
> ---
>
> Key: KAFKA-3139
> URL: https://issues.apache.org/jira/browse/KAFKA-3139
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> The docs say that there is a JMX metric 
> {noformat}
> kafka.server:type=ProducerRequestPurgatory,name=PurgatorySize
> {noformat}
> But that doesn't seem to work. Using jconsole to inspect our kafka broker, it 
> seems like the right metric is
> {noformat}
> kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce
> {noformat}
> And there are also variants of the above for Fetch, Heartbeat, and Rebalance.
> Is the fix to simply update the docs? From what I can see, the docs for this 
> don't seem auto-generated from code. If it is a simple doc fix, I would like 
> to take this JIRA.
> Btw, what is NumDelayedOperations, and how is it different from PurgatorySize?
> {noformat}
> kafka.server:type=DelayedOperationPurgatory,name=NumDelayedOperations,delayedOperation=Produce
> {noformat}
> And should I also update the docs for that?
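One way to answer questions like this empirically is to list the MBeans the broker actually registers. A sketch (hypothetical host and port; requires JMX to be enabled on the broker, e.g. by setting JMX_PORT before startup):

{code:java}
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListPurgatoryMBeans {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection conn = connector.getMBeanServerConnection();
        // Wildcard query: prints every purgatory MBean the broker registers.
        Set<ObjectName> names = conn.queryNames(
                new ObjectName("kafka.server:type=DelayedOperationPurgatory,*"), null);
        for (ObjectName name : names) {
            System.out.println(name);
        }
        connector.close();
    }
}
{code}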



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2017-08-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138257#comment-16138257
 ] 

Manikumar commented on KAFKA-5590:
--

The current TopicCommand uses ZooKeeper-based deletion. This does not involve 
any Kafka ACLs. In a Kerberos setup, only the broker user/principal can write 
to ZK, so you need the broker principal's credentials to make changes in ZK.


> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>
> Hi:
> Recently I developed some applications for Kafka under Ranger. But when I 
> enable the Ranger Kafka plugin I cannot delete a Kafka topic completely, even 
> though 'delete.topic.enable=true' is set. And I find that with the Ranger 
> Kafka plugin enabled, deletion must be authorized. How can I delete a Kafka 
> topic completely under Ranger? Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2017-08-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138257#comment-16138257
 ] 

Manikumar edited comment on KAFKA-5590 at 8/23/17 11:56 AM:


The current TopicCommand uses ZooKeeper-based deletion. This does not involve 
any Kafka ACLs. In a Kerberos setup, only the broker user/principal can write 
to ZK, so you need the broker principal's credentials to make changes in ZK.



was (Author: omkreddy):
Current TopicCommand uses Zookeeper based deletion.  This does not involve any 
Kafka Acls.  In Kerberos setup,  only broker user/principal can write to the 
ZK.  So, you need boker Pricipal credentials to make changes to ZK. 


> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>
> Hi:
> Recently I developed some applications for Kafka under Ranger. But when I 
> enable the Ranger Kafka plugin I cannot delete a Kafka topic completely, even 
> though 'delete.topic.enable=true' is set. And I find that with the Ranger 
> Kafka plugin enabled, deletion must be authorized. How can I delete a Kafka 
> topic completely under Ranger? Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4425) Topic created with CreateTopic command, does not list partitions in metadata

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4425.
--
Resolution: Not A Problem

Fixed as per [~Fristi]'s comments.

> Topic created with CreateTopic command, does not list partitions in metadata
> 
>
> Key: KAFKA-4425
> URL: https://issues.apache.org/jira/browse/KAFKA-4425
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Mark de Jong
>
> With the release of Kafka 0.10.1.0 there are now commands to delete and 
> create topics via the TCP protocol (see KIP-4 and KAFKA-2945).
> I've implemented this myself in my own driver. 
> When I send the following command to the current controller 
> "TopicDescriptor(topic = topic, nrPartitions = Some(3), replicationFactor = 
> Some(1), replicaAssignment = Seq.empty, config = Map())"
> I'll get back a response with NoError
> The server says this on the command line:
> [2016-11-20 17:54:19,599] INFO Topic creation 
> {"version":1,"partitions":{"2":[1003],"1":[1001],"0":[1002]}} 
> (kafka.admin.AdminUtils$)
> [2016-11-20 17:54:19,765] INFO [ReplicaFetcherManager on broker 1001] Removed 
> fetcher for partitions test212880282004727-1 
> (kafka.server.ReplicaFetcherManager)
> [2016-11-20 17:54:19,789] INFO Completed load of log test212880282004727-1 
> with 1 log segments and log end offset 0 in 17 ms (kafka.log.Log)
> [2016-11-20 17:54:19,791] INFO Created log for partition 
> [test212880282004727,1] in /kafka/kafka-logs-842dcf19f587 with properties 
> {compression.type -> producer, message.format.version -> 0.10.1-IV2, 
> file.delete.delay.ms -> 6, max.message.bytes -> 112, 
> min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, 
> min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, 
> min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, 
> unclean.leader.election.enable -> true, retention.bytes -> -1, 
> delete.retention.ms -> 8640, cleanup.policy -> [delete], flush.ms -> 
> 9223372036854775807, segment.ms -> 60480, segment.bytes -> 1073741824, 
> retention.ms -> 60480, message.timestamp.difference.max.ms -> 
> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 
> 9223372036854775807}. (kafka.log.LogManager)
> However when I immediately fetch metadata (v2 call). I either get;
> - The topic entry, but with no partitions data in it.
> - The topic entry, but with the status NotLeaderForPartition
> Is this a bug, or am I missing something in my client?
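The behavior described is consistent with topic creation being acknowledged by the controller while metadata propagation to the brokers is still in flight, so clients are expected to retry their metadata fetch. A sketch of that pattern using the Java AdminClient added in later releases (broker address and topic name are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

public class CreateAndWait {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        AdminClient admin = AdminClient.create(props);
        admin.createTopics(Collections.singleton(new NewTopic("test", 3, (short) 1)))
             .all().get();
        // The controller has accepted the topic, but brokers may not have the
        // metadata yet: poll until all partitions show up.
        while (true) {
            try {
                TopicDescription d = admin.describeTopics(Collections.singleton("test"))
                                          .all().get().get("test");
                if (d.partitions().size() == 3) {
                    break;
                }
            } catch (ExecutionException e) {
                // metadata not propagated yet; retry
            }
            Thread.sleep(100);
        }
        admin.close();
    }
}
{code}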



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1497) Change producer load-balancing algorithm in MirrorMaker

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1497.
--
Resolution: Fixed

MirrorMaker now uses a single producer instance.

> Change producer load-balancing algorithm in MirrorMaker
> ---
>
> Key: KAFKA-1497
> URL: https://issues.apache.org/jira/browse/KAFKA-1497
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Ivan Kunz
>
> Currently the MirrorMaker uses the following way of spreading the load into 
> configured producers :
> val producerId = 
> Utils.abs(java.util.Arrays.hashCode(msgAndMetadata.key)) % producers.size()
> This way, if the producer side of MM uses a "partitioner.class" different 
> from the default, messages within the same partition can get re-ordered. 
> Also hashCode does not produce the same results on different machines 
> (verified by testing) so cannot be safely used for partitioning between 
> distributed systems connected via MM (for us message order preservation 
> within a partition is a critical feature).
> It would be great if the code above is changed to utilize the configured 
> "partitioner.class". 
> Something along the lines of  :
> At the initialization:
>   mmpartitioner = 
> Utils.createObject[Partitioner](config.partitionerClass, config.props)  
> During the processing:
> val producerId = 
> mmpartitioner.partition(msgAndMetadata.key,producers.size())
> This way the messages consumed and produced by MM can remain in the same 
> order.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1339) Time based offset retrieval seems broken

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1339.
--
Resolution: Fixed

Time-based offset retrieval is improved with the introduction of message 
timestamps. Please reopen if you think the issue still exists.
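For example, newer Java clients can resolve a timestamp to an offset directly (broker address and topic are placeholders):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetsForTime {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        KafkaConsumer<byte[], byte[]> consumer =
                new KafkaConsumer<byte[], byte[]>(props);
        TopicPartition tp = new TopicPartition("test", 0);
        long oneHourAgo = System.currentTimeMillis() - 3600 * 1000L;
        // Returns the earliest offset whose timestamp is >= the target time.
        Map<TopicPartition, OffsetAndTimestamp> result =
                consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
        System.out.println(result.get(tp));
        consumer.close();
    }
}
{code}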


> Time based offset retrieval seems broken
> 
>
> Key: KAFKA-1339
> URL: https://issues.apache.org/jira/browse/KAFKA-1339
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
> Environment: Linux
>Reporter: Frank Varnavas
>Priority: Minor
>
> The kafka PartitionOffsetRequest takes a time parameter.  It seems broken to 
> me.
> There are two magic values
>   -2 returns the oldest  available offset
>   -1 returns the newest available offset
>   Otherwise the value is time since epoch in millisecs 
> (System.currentTimeMillis())
> The granularity is limited to the granularity of the log files
> These are the log segments for the partition I tested
>   Time now is about 17:07
>   Time shown is last modify time
>   File name has the starting offset number
>   You can see that the current one started about 13:40
> 1073742047 Mar 24 02:52 04740823.log
> 1073759588 Mar 24 11:25 04831581.log
> 1073782532 Mar 24 16:31 04916313.log
> 1073741985 Mar 25 09:11 05066939.log
> 1073743756 Mar 25 13:39 05158529.log
>  778424349 Mar 25 17:07 05214225.log
> The below shows the returned offset for an input time = (current time - 
> [0..23] hours)
> Even 1 second less than the current time returns the previous segment, even 
> though that segment ended 2.5 hours earlier.
> I think the result is off by 1 log segment. i.e. offset 1-3 should have been 
> from 5214225, 4-7 should have been from 5158529
> 0 -> 5214225
> 1 -> 5158529
> 2 -> 5158529
> 3 -> 5158529
> 4 -> 5066939
> 5 -> 5066939
> 6 -> 5066939
> 7 -> 5066939
> 8 -> 4973490
> 9 -> 4973490
> 10 -> 4973490
> 11 -> 4973490
> 12 -> 4973490
> 13 -> 4973490
> 14 -> 4973490
> 15 -> 4973490
> 16 -> 4916313
> 17 -> 4916313
> 18 -> 4916313
> 19 -> 4916313
> 20 -> 4916313
> 21 -> 4916313
> 22 -> 4916313
> 23 -> 4916313



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-962) Add list topics to ClientUtils

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-962.
-
Resolution: Fixed

Topic management methods have been added to the new AdminClient.
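For reference, a minimal sketch against that AdminClient API (broker address is a placeholder):

{code:java}
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        AdminClient admin = AdminClient.create(props);
        // Lists topic names without talking to ZooKeeper directly.
        Set<String> topics = admin.listTopics().names().get();
        System.out.println(topics);
        admin.close();
    }
}
{code}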

> Add list topics to ClientUtils
> --
>
> Key: KAFKA-962
> URL: https://issues.apache.org/jira/browse/KAFKA-962
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
>
> Currently there is no programmatic way to get a list of topics supported 
> directly by Kafka (one can talk to ZooKeeper directly).  There is a CLI tool 
> for this ListTopicCommand, but it'd be good to provide this directly to 
> clients as an API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-958) Please compile list of key metrics on the broker and client side and put it on a wiki

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-958.
-
Resolution: Fixed

Key metrics are listed in the monitoring section of the Kafka documentation page.

> Please compile list of key metrics on the broker and client side and put it 
> on a wiki
> -
>
> Key: KAFKA-958
> URL: https://issues.apache.org/jira/browse/KAFKA-958
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.0
>Reporter: Vadim
>Assignee: Joel Koshy
>Priority: Minor
>
> Please compile a list of the important metrics that need to be monitored by 
> companies to ensure healthy operation of the Kafka service.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3332) Consumer can't consume messages from zookeeper chroot

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3332.
--
Resolution: Cannot Reproduce

> Consumer can't consume messages from zookeeper chroot
> -
>
> Key: KAFKA-3332
> URL: https://issues.apache.org/jira/browse/KAFKA-3332
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1
> Environment: RHEL 6.X, OS X
>Reporter: Sergey Vergun
>Assignee: Neha Narkhede
>
> I have faced issue when consumer can't consume messages from zookeeper 
> chroot. It was tested on Kafka 0.8.2.2 and Kafka 0.9.0.1
> My zookeeper options into server.properties:
> $cat config/server.properties | grep zookeeper
> zookeeper.connect=localhost:2181/kafka-cluster/kafka-0.9.0.1
> zookeeper.session.timeout.ms=1
> zookeeper.connection.timeout.ms=1
> zookeeper.sync.time.ms=2000
> I can create successfully a new topic
> $kafka-topics.sh --create --partition 3 --replication-factor 1 --topic 
> __TEST-Topic_1 --zookeeper localhost:2181/kafka-cluster/kafka-0.9.0.1
> Created topic "__TEST-Topic_1".
> and produce messages into it
> $kafka-console-producer.sh --topic __TEST-Topic_1 --broker-list localhost:9092
> 1
> 2
> 3
> 4
> 5
> In Kafka Manager I see that the messages were delivered:
> Sum of partition offsets  5
> But I can't consume the messages via kafka-console-consumer
> $kafka-console-consumer.sh --topic TEST-Topic_1 --zookeeper 
> localhost:2181/kafka-cluster/kafka-0.9.0.1 --from-beginning
> The consumer is present in zookeeper
> [zk: localhost:2181(CONNECTED) 10] ls /kafka-cluster/kafka-0.9.0.1/consumers
> [console-consumer-62895] 
> [zk: localhost:2181(CONNECTED) 12] ls 
> /kafka-cluster/kafka-0.9.0.1/consumers/console-consumer-62895/ids
> [console-consumer-62895_SV-Macbook-1457097451996-64640cc1] 
> If I reconfigure kafka cluster with zookeeper chroot "/" then everything is 
> ok.
> $cat config/server.properties | grep zookeeper
> zookeeper.connect=localhost:2181
> zookeeper.session.timeout.ms=1
> zookeeper.connection.timeout.ms=1
> zookeeper.sync.time.ms=2000
> $kafka-console-producer.sh --topic __TEST-Topic_1 --broker-list localhost:9092
> 1
> 2
> 3
> 4
> 5
> $kafka-console-consumer.sh --topic TEST-Topic_1 --zookeeper localhost:2181 
> --from-beginning
> 1
> 2
> 3
> 4
> 5
> Is it a bug or my mistake?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-627) Make UnknownTopicOrPartitionException a WARN in broker

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-627.
-
Resolution: Fixed

Not observed in the latest versions.

> Make UnknownTopicOrPartitionException a WARN in broker
> --
>
> Key: KAFKA-627
> URL: https://issues.apache.org/jira/browse/KAFKA-627
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
> Environment: Kafka 0.8, RHEL6, Java 1.6
>Reporter: Chris Riccomini
>
> Currently, when sending messages to a topic that doesn't yet exist, the 
> broker spews out these "errors" as it tries to auto-create new topics. I 
> spoke with Neha, and she said that this should be a warning, not an error.
> Could you please change it to something less scary, if, in fact, it's not 
> scary.
> 2012/11/14 22:38:53.238 INFO [LogManager] [kafka-request-handler-6] [kafka] 
> []  [Log Manager on Broker 464] Created log for 'firehoseReads'-5
> 2012/11/14 22:38:53.241 WARN [HighwaterMarkCheckpoint] 
> [kafka-request-handler-6] [kafka] []  No previously checkpointed 
> highwatermark value found for topic firehoseReads partition 5. Returning 0 as 
> the highwatermark
> 2012/11/14 22:38:53.242 INFO [Log] [kafka-request-handler-6] [kafka] []  
> [Kafka Log on Broker 464], Truncated log segment 
> /export/content/kafka/i001_caches/firehoseReads-5/.log to 
> target offset 0
> 2012/11/14 22:38:53.242 INFO [ReplicaFetcherManager] 
> [kafka-request-handler-6] [kafka] []  [ReplicaFetcherManager on broker 464] 
> adding fetcher on topic firehoseReads, partion 5, initOffset 0 to broker 466 
> with fetcherId 0
> 2012/11/14 22:38:53.248 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-466-0-on-broker-464] [kafka] []  
> [ReplicaFetcherThread-466-0-on-broker-464], error for firehoseReads 5 to 
> broker 466
> kafka.common.UnknownTopicOrPartitionException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at java.lang.Class.newInstance0(Class.java:355)
> at java.lang.Class.newInstance(Class.java:308)
> at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:68)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5$$anonfun$apply$3.apply(AbstractFetcherThread.scala:124)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5$$anonfun$apply$3.apply(AbstractFetcherThread.scala:124)
> at kafka.utils.Logging$class.error(Logging.scala:102)
> at kafka.utils.ShutdownableThread.error(ShutdownableThread.scala:23)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5.apply(AbstractFetcherThread.scala:123)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$doWork$5.apply(AbstractFetcherThread.scala:99)
> at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:125)
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:99)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:50)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2966) 0.9.0 docs missing upgrade notes regarding replica lag

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2966.
--
Resolution: Fixed

> 0.9.0 docs missing upgrade notes regarding replica lag
> --
>
> Key: KAFKA-2966
> URL: https://issues.apache.org/jira/browse/KAFKA-2966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> We should document that:
> * replica.lag.max.messages is gone
> * replica.lag.time.max.ms has a new meaning
> In the upgrade section. People can get caught by surprise.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2829) Inconsistent naming in {Producer,Consumer} Callback interfaces

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2829.
--
Resolution: Won't Fix

These are public interfaces heavily used by users. It's not appropriate to 
change them now. Please reopen if you think otherwise.


> Inconsistent naming in {Producer,Consumer} Callback interfaces
> --
>
> Key: KAFKA-2829
> URL: https://issues.apache.org/jira/browse/KAFKA-2829
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Mathias Söderberg
>Assignee: Neha Narkhede
>Priority: Minor
>
> The Callback interface for the "new" producer has a method called 
> "onCompletion" while the OffsetCommitCallback for the new consumer has a 
> method called "onComplete".
> Perhaps they should be using the same naming convention to avoid confusion?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2565) Offset Commit is not working if multiple consumers try to commit the offset

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2565.
--
Resolution: Cannot Reproduce

This may be related to a deployment issue. Please reopen if you think the 
issue still exists.


> Offset Commit is not working if multiple consumers try to commit the offset
> ---
>
> Key: KAFKA-2565
> URL: https://issues.apache.org/jira/browse/KAFKA-2565
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1, 0.8.2.1, 0.8.2.2
>Reporter: Sreenivasulu Nallapati
>Assignee: Neha Narkhede
>
> We are seeing some strange behaviour with the commitOffsets() method of 
> kafka.javaapi.consumer.ConsumerConnector. We are committing the offsets to 
> ZooKeeper at the end of the consumer batch. We are running multiple consumers 
> for the same topic.
> Test details: 
> 1.Created a topic with three partitions
> 2.Started three consumers (cronjob) at the same time. The aim is that 
> each consumer to process one partition.
> 3.Each consumer at the end of the batch, it will call the commitOffsets() 
> method on kafka.javaapi.consumer.ConsumerConnector
> 4.The offsets are getting properly updated in zookeeper if we run the 
> consumers for small set (say 1000 messages) of messages.
> 5.But for larger number of messages, commit offset is not working as 
> expected…sometimes only two offsets are properly committing and other one 
> remains as it was.
> 6.Please see the below example
> Partition: 0 Latest Offset: 1057585
> Partition: 1 Latest Offset: 1057715
> Partition: 2 Latest Offset: 1057590
> Earliest Offset after all consumers completed: {0=1057585, 1=724375, 
> 2=1057590}
> The offset for partition 1 was supposed to be committed as 1057715, but it 
> was not.
> Please check if this is a bug with multiple consumers. When multiple consumers 
> are trying to update the same path in ZooKeeper, is there any synchronization 
> issue?
> Kafka Cluster details
> 1 zookeeper
> 3 brokers



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2560) Fatal error during KafkaServer startup because of Map failed error.

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2560.
--
Resolution: Fixed

This is due to a java.lang.OutOfMemoryError. Please reopen if you think the 
issue still exists.


> Fatal error during KafkaServer startup because of Map failed error.
> ---
>
> Key: KAFKA-2560
> URL: https://issues.apache.org/jira/browse/KAFKA-2560
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1
> Environment: Linux 
>Reporter: Bo Wang
>Assignee: Jay Kreps
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I have 3 Kafka nodes, and 
> create 30 topics; every topic has 100 partitions, and the replication factor is 2.
> Kafka server start failed:
> 2015-09-21 10:28:35,668 | INFO  | pool-2-thread-1 | Recovering unflushed 
> segment 0 in log testTopic_14-34. | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,942 | ERROR | main | There was an error in one of the 
> threads during logs loading: java.io.IOException: Map failed | 
> kafka.utils.Logging$class.error(Logging.scala:97)
> 2015-09-21 10:28:35,943 | INFO  | pool-2-thread-5 | Recovering unflushed 
> segment 0 in log testTopic_17-23. | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,944 | INFO  | pool-2-thread-5 | Completed load of log 
> testTopic_17-23 with log end offset 0 | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,945 | FATAL | main | [Kafka Server 54], Fatal error 
> during KafkaServer startup. Prepare to shutdown | 
> kafka.utils.Logging$class.fatal(Logging.scala:116)
> java.io.IOException: Map failed
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:286)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at kafka.log.Log.loadSegments(Log.scala:179)
> at kafka.log.Log.(Log.scala:67)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$7$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:142)
> at kafka.utils.Utils$$anon$1.run(Utils.scala:54)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> ... 13 more
> 2015-09-21 10:28:35,946 | INFO  | pool-2-thread-5 | Recovering unflushed 
> segment 0 in log testTopic_25-77. | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> 2015-09-21 10:28:35,949 | INFO  | main | [Kafka Server 54], shutting down | 
> kafka.utils.Logging$class.info(Logging.scala:68)
> Kafka server host's top information below:
> top - 17:16:23 up 53 min,  6 users,  load average: 0.42, 0.99, 1.19
> Tasks: 215 total,   1 running, 214 sleeping,   0 stopped,   0 zombie
> Cpu(s):  4.5%us,  2.4%sy,  0.0%ni, 92.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem: 40169M total, 6118M used,34050M free,9M buffers
> Swap:0M total,0M used,0M free,  431M cached



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2577) one node of Kafka cluster cannot process produce request

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2577.
--
Resolution: Cannot Reproduce

This might have been fixed in the latest versions. Please reopen if you think 
the issue still exists.


> one node of Kafka cluster cannot process produce request
> 
>
> Key: KAFKA-2577
> URL: https://issues.apache.org/jira/browse/KAFKA-2577
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , replication
>Affects Versions: 0.8.1.1
>Reporter: Ray
>Assignee: Jun Rao
>
> We had 3 nodes in the Kafka cluster; suddenly one node could not accept 
> produce requests. Here is the log:
> [2015-09-21 04:56:32,413] WARN [KafkaApi-0] Produce request with correlation 
> id 9178992 from client  on partition [topic_name,3] failed due to Leader not 
> local for partition [topic_name,3] on broker 0 (kafka.server.KafkaApis)
> After restarting that node, it still did not work, and I saw a different log:
> [2015-09-21 20:38:16,791] WARN [KafkaApi-0] Produce request with correlation 
> id 9661337 from client  on partition [topic_name,3] failed due to Topic 
> topic_name either doesn't exist or is in the process of being deleted 
> (kafka.server.KafkaApis)
> It got fixed after rolling all the Kafka nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2451) Exception logged but not managed

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2451.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> Exception logged but not managed
> 
>
> Key: KAFKA-2451
> URL: https://issues.apache.org/jira/browse/KAFKA-2451
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1
> Environment: Windows + Java
>Reporter: Gwenhaël PASQUIERS
>Assignee: Jun Rao
>
> We've been having issues with java-snappy and its native DLL.
> To make it short: we get exceptions when serializing the message.
> We are using the Kafka producer in Camel.
> The problem is that Kafka thinks that the message was correctly sent, and 
> returns no error: Camel consumes the files even though Kafka could not send 
> the messages.
> Where the issue lies (if I'm correct):
> In DefaultEventHandler, line 115 on tag 0.8.1, the exception thrown by 
> groupMessageToSet() is caught and logged. The return value of the 
> function dispatchSerializedData() is used to determine if the send was 
> successful (if (outstandingProduceRequest.size > 0) { ... }).
> BUT in this case I suspect that not even one message could be 
> serialized and added to "failedProduceRequests", so the code that called 
> "dispatchSerializedData" thinks everything is OK though it's not.
> The producer could behave better and propagate the error properly, since this 
> could lead to outright data loss.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2414) Running kafka-producer-perf-test.sh with " --messages 10000000 --message-size 1000 --new-producer" will get WARN Error in I/O.

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2414.
--
Resolution: Cannot Reproduce

This might have been fixed in the latest versions. Please reopen if you think 
the issue still exists.


> Running kafka-producer-perf-test.sh with "--messages 10000000 
> --message-size 1000 --new-producer" will get WARN Error in I/O.
> 
>
> Key: KAFKA-2414
> URL: https://issues.apache.org/jira/browse/KAFKA-2414
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>  Labels: performance
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Running kafka-producer-perf-test.sh with "--messages 10000000 
> --message-size 1000 --new-producer" will get WARN Error in I/O:
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3885) Kafka new producer cannot failover

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3885.
--
Resolution: Duplicate

> Kafka new producer cannot failover
> --
>
> Key: KAFKA-3885
> URL: https://issues.apache.org/jira/browse/KAFKA-3885
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.2, 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: wateray
>
> This bug can be reproduced by the following steps.
> The cluster has 2 brokers.
>  a) start a new producer, then send messages, it works well.
>  b) Then kill one broker,  it works well.
>  c) Then restart the broker,  it works well.
>  d) Then kill the other broker,  the producer can't failover.
> Then the producer prints this log indefinitely:
> org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) 
> expired due to timeout while requesting metadata from brokers for 
> lwb_test_p50_r2-29
> 
> When the producer sends a msg, it detects that the metadata should be updated.
> But in this code (class: NetworkClient, method: leastLoadedNode):
> List nodes = this.metadataUpdater.fetchNodes();
> nodes returns only one result, and the returned node is the killed node, so 
> the producer cannot fail over!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4109) kafka client send msg exception

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4109.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> kafka client send msg exception
> ---
>
> Key: KAFKA-4109
> URL: https://issues.apache.org/jira/browse/KAFKA-4109
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.0.0
> Environment: java8
> kafka cluster
>Reporter: frank
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Batch Expired 
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:56)
>  
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:43)
>  
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
>  
>   at 
> com.longsheng.basicCollect.kafka.KafkaProducer.publishMessage(KafkaProducer.java:44)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.ParentBasic.publish(ParentBasic.java:60)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.ParentBasic.parseIncJson(ParentBasic.java:119)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.coke.price.SecondPrice.collectData(SecondPrice.java:41)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.coke.price.SecondPrice.process(SecondPrice.java:49)
>  
>   at 
> com.longsheng.basicCollect.crawler.processor.dzwbigdata.DzwProcessor.process(DzwProcessor.java:33)
>  
>   at 
> com.longsheng.basicCollect.timer.CollectDzwBigData.execute(CollectDzwBigData.java:14)
>  
>   at org.quartz.core.JobRunShell.run(JobRunShell.java:202) 
>   at 
> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) 
> Caused by: org.apache.kafka.common.errors.TimeoutException: Batch Expired
> The exception occurs arbitrarily!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4350) Can't mirror from Kafka 0.9 to Kafka 0.10.1

2017-08-24 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4350.
--
Resolution: Won't Fix

Closing as per comments.

> Can't mirror from Kafka 0.9 to Kafka 0.10.1
> ---
>
> Key: KAFKA-4350
> URL: https://issues.apache.org/jira/browse/KAFKA-4350
> Project: Kafka
>  Issue Type: Bug
>Reporter: Emanuele Cesena
>
> I'm running 2 clusters: K9 with Kafka 0.9 and K10 with Kafka 0.10.1.
> In K10, I've set up mirror maker to clone a topic from K9 to K10.
> Mirror maker immediately fails while starting; any suggestions? The error 
> message and configs follow.
> Error message:
> {code:java} 
> [2016-10-26 23:54:01,663] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'cluster_id': Error reading string of length 418, only 43 bytes available
> at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
> at 
> org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:248)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
> at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.receive(MirrorMaker.scala:582)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:431)
> [2016-10-26 23:54:01,679] FATAL [mirrormaker-thread-0] Mirror maker thread 
> exited abnormally, stopping the whole mirror maker. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> {code} 
> Consumer:
> {code:} 
> group.id=mirrormaker001
> client.id=mirrormaker001
> bootstrap.servers=...K9...
> security.protocol=PLAINTEXT
> auto.offset.reset=earliest
> {code} 
> (note that I first ran without client.id, then tried adding a client.id; 
> same error in both cases)
> Producer:
> {code:}
> bootstrap.servers=...K10...
> security.protocol=PLAINTEXT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4869) 0.10.2.0 release notes incorrectly include KIP-115

2017-08-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4869.
--
Resolution: Fixed
  Assignee: Manikumar

> 0.10.2.0 release notes incorrectly include KIP-115
> --
>
> Key: KAFKA-4869
> URL: https://issues.apache.org/jira/browse/KAFKA-4869
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.2.0
>Reporter: Yeva Byzek
>Assignee: Manikumar
>Priority: Minor
>
> From http://kafka.apache.org/documentation.html :
> bq. The offsets.topic.replication.factor broker config is now enforced upon 
> auto topic creation. Internal auto topic creation will fail with a 
> GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this 
> replication factor requirement.
> Even though this feature 
> [KIP-115|https://cwiki.apache.org/confluence/display/KAFKA/KIP-115%3A+Enforce+offsets.topic.replication.factor+upon+__consumer_offsets+auto+topic+creation]
>  did not make it into 0.10.2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5788) "IllegalArgumentException: long is not a value type" when running ReassignPartitionsCommand

2017-08-25 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16141959#comment-16141959
 ] 

Manikumar commented on KAFKA-5788:
--

This may be related to a joptsimple library version mismatch. Can you try with 
the jopt-simple library corresponding to your Kafka release? Kafka 0.10.2.0 
uses jopt-simple-4.9.jar and Kafka 0.11.0.0 uses jopt-simple-5.0.4.jar.
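A quick way to confirm which jar is actually picked up at runtime (run inside the same JVM/classpath that invokes ReassignPartitionsCommand):

{code:java}
public class WhichJopt {
    public static void main(String[] args) throws Exception {
        // Prints the location of the jar that actually provides joptsimple.
        Class<?> c = Class.forName("joptsimple.OptionParser");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}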

> "IllegalArgumentException: long is not a value type" when running 
> ReassignPartitionsCommand
> ---
>
> Key: KAFKA-5788
> URL: https://issues.apache.org/jira/browse/KAFKA-5788
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.11.0.0
> Environment: Windows 
>Reporter: Ansel Zandegran
>
> *When trying to run ReassignPartitionsCommand with the following statements,*
> String[] reassignCmdArgs = { "--reassignment-json-file=" + 
> Paths.get(reassignmentConfigFileName),
>   "--zookeeper=" + 
> client.getZookeeperClient().getCurrentConnectionString(), "--execute", 
> "--throttle="+1000 };
>   logger.debug("Calling 
> ReassignPartitionsCommand with args:{}", Arrays.toString(reassignCmdArgs));
>   
> ReassignPartitionsCommand.main(reassignCmdArgs);
> *I get the following error*
> 2017-08-22 15:57:28 DEBUG ZookeeperBackedAdoptionLogicImpl:320 - Calling 
> ReassignPartitionsCommand with 
> args:[--reassignment-json-file=partitions-to-move.json.1503417447767, 
> --zookeeper=172.31.14.207:2181, --execute]
> java.lang.IllegalArgumentException: long is not a value type
> at joptsimple.internal.Reflection.findConverter(Reflection.java:66)
> at 
> joptsimple.ArgumentAcceptingOptionSpec.ofType(ArgumentAcceptingOptionSpec.java:111)
> at 
> kafka.admin.ReassignPartitionsCommand$ReassignPartitionsCommandOptions.(ReassignPartitionsCommand.scala:301)
> at 
> kafka.admin.ReassignPartitionsCommand$.validateAndParseArgs(ReassignPartitionsCommand.scala:236)
> at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:34)
> at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> at 
> rebalancer.core.ZookeeperBackedAdoptionLogicImpl.reassignPartitionToLocalBroker(ZookeeperBackedAdoptionLogicImpl.java:321)
> at 
> rebalancer.core.ZookeeperBackedAdoptionLogicImpl.adoptRemotePartition(ZookeeperBackedAdoptionLogicImpl.java:267)
> at 
> rebalancer.core.ZookeeperBackedAdoptionLogicImpl.run(ZookeeperBackedAdoptionLogicImpl.java:118)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5764) KafkaShortnamer should allow for case-insensitive matches

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5764:


Assignee: Manikumar

> KafkaShortnamer should allow for case-insensitive matches 
> --
>
> Key: KAFKA-5764
> URL: https://issues.apache.org/jira/browse/KAFKA-5764
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Ryan P
>Assignee: Manikumar
>
> Currently it does not appear that the KafkaShortnamer allows for case 
> insensitive search and replace rules. 
> It would be good to match the functionality provided by HDFS as operators are 
> familiar with this. This also makes it easier to port auth_to_local rules 
> from your existing hdfs configurations to your new kafka configuration. 
> HWX auth_to_local guide for reference
> https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5764) KafkaShortnamer should allow for case-insensitive matches

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-5764:
-
Summary: KafkaShortnamer should allow for case-insensitive matches   (was: 
KafkaShortnamer should allow for case inensitive matches )

> KafkaShortnamer should allow for case-insensitive matches 
> --
>
> Key: KAFKA-5764
> URL: https://issues.apache.org/jira/browse/KAFKA-5764
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Ryan P
>
> Currently it does not appear that the KafkaShortnamer allows for case 
> insensitive search and replace rules. 
> It would be good to match the functionality provided by HDFS as operators are 
> familiar with this. This also makes it easier to port auth_to_local rules 
> from your existing hdfs configurations to your new kafka configuration. 
> HWX auth_to_local guide for reference
> https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5649) Producer is being closed generating ssl exception

2017-08-29 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16145032#comment-16145032
 ] 

Manikumar commented on KAFKA-5649:
--

Can you enable Kafka debug logs and SSL debug logs (-Djavax.net.debug=all)? 
These logs may reveal some underlying channel exceptions.

> Producer is being closed generating ssl exception
> -
>
> Key: KAFKA-5649
> URL: https://issues.apache.org/jira/browse/KAFKA-5649
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.1
> Environment: Spark 2.2.0 and kafka 0.10.2.0
>Reporter: Pablo Panero
>
> On a streaming job using the built-in Kafka source and sink (over SSL), I am 
> getting the following exception:
> Config of the source:
> {code:java}
> val df = spark.readStream
>   .format("kafka")
>   .option("kafka.bootstrap.servers", config.bootstrapServers)
>   .option("failOnDataLoss", value = false)
>   .option("kafka.connections.max.idle.ms", 360)
>   //SSL: this only applies to communication between Spark and Kafka 
> brokers; you are still responsible for separately securing Spark inter-node 
> communication.
>   .option("kafka.security.protocol", "SASL_SSL")
>   .option("kafka.sasl.mechanism", "GSSAPI")
>   .option("kafka.sasl.kerberos.service.name", "kafka")
>   .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
>   .option("kafka.ssl.truststore.password", "changeit")
>   .option("subscribe", config.topicConfigList.keys.mkString(","))
>   .load()
> {code}
> Config of the sink:
> {code:java}
> .writeStream
> .option("checkpointLocation", 
> s"${config.checkpointDir}/${topicConfig._1}/")
> .format("kafka")
> .option("kafka.bootstrap.servers", config.bootstrapServers)
> .option("kafka.connections.max.idle.ms", 360)
> //SSL: this only applies to communication between Spark and Kafka 
> brokers; you are still responsible for separately securing Spark inter-node 
> communication.
> .option("kafka.security.protocol", "SASL_SSL")
> .option("kafka.sasl.mechanism", "GSSAPI")
> .option("kafka.sasl.kerberos.service.name", "kafka")
> .option("kafka.ssl.truststore.location", "/etc/pki/java/cacerts")
> .option("kafka.ssl.truststore.password", "changeit")
> .start()
> {code}
> And in some cases it throws the exception, making the Spark job stuck at 
> that step. The exception stack trace is the following:
> {code:java}
> 17/07/18 10:11:58 WARN SslTransportLayer: Failed to send SSL Close message 
> java.io.IOException: Broken pipe
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:163)
>   at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:731)
>   at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:54)
>   at org.apache.kafka.common.network.Selector.doClose(Selector.java:540)
>   at org.apache.kafka.common.network.Selector.close(Selector.java:531)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:378)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1047)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:298)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer.org$apache$spark$sql$kafka010$CachedKafkaConsumer$$fetchData(CachedKafkaConsumer.scala:206)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer$$anonfun$get$1.apply(CachedKafkaConsumer.scala:117)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer$$anonfun$get$1.apply(CachedKafkaConsumer.scala:106)
>   at 
> org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:85)
>   at 
> org.apache.spark.sql.kafka010.CachedKafkaConsumer.runUninte

[jira] [Updated] (KAFKA-5800) FileAreadyExist when writing tmp Shutdown Server Suddenly

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-5800:
-
Labels: windows  (was: )

> FileAreadyExist when writing tmp Shutdown Server Suddenly
> -
>
> Key: KAFKA-5800
> URL: https://issues.apache.org/jira/browse/KAFKA-5800
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.1
> Environment: Windows Server 2012
>Reporter: Ahmed Mohsen
>  Labels: windows
>
> The server shut down suddenly with this exception:
> {code:java}
> [2017-08-28 11:41:09,975] FATAL [Replica Manager on Broker 0]: Error writing 
> to highwatermark file:  (kafka.server.ReplicaManager)
> java.nio.file.FileAlreadyExistsException: 
> D:\tmp\kafka-logs\replication-offset-checkpoint.tmp -> 
> D:\tmp\kafka-logs\replication-offset-checkpoint
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:81)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>   at java.nio.file.Files.move(Files.java:1395)
>   at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:711)
>   at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:76)
>   at 
> kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$5(ReplicaManager.scala:946)
>   at 
> kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$5$adapted(ReplicaManager.scala:943)
>   at kafka.server.ReplicaManager$$Lambda$862/1600997378.apply(Unknown 
> Source)
>   at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>   at 
> scala.collection.TraversableLike$WithFilter$$Lambda$234/1781493632.apply(Unknown
>  Source)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:120)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>   at 
> kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:943)
>   at 
> kafka.server.ReplicaManager.$anonfun$startHighWaterMarksCheckPointThread$1(ReplicaManager.scala:163)
>   at 
> kafka.server.ReplicaManager$$Lambda$854/1298788908.apply$mcV$sp(Unknown 
> Source)
>   at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>   at 
> kafka.utils.KafkaScheduler$$Lambda$269/137460818.apply$mcV$sp(Unknown Source)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>   Suppressed: java.nio.file.AccessDeniedException: 
> D:\tmp\kafka-logs\replication-offset-checkpoint.tmp -> 
> D:\tmp\kafka-logs\replication-offset-checkpoint
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>   at java.nio.file.Files.move(Files.java:1395)
>   at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:708)
>   ... 21 more
> {code}
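For context, a simplified sketch of the move-with-fallback pattern visible in the stack trace (not Kafka's exact code): an atomic rename is attempted first, and if that fails the code retries with a replacing move. On Windows, even the fallback can fail when another handle still holds the target file open.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {
    static void moveWithFallback(Path source, Path target) throws IOException {
        try {
            // Preferred: atomic rename, so readers never see a partial file.
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            // Fallback: non-atomic replacing move for filesystems that
            // cannot rename atomically over an existing target.
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        moveWithFallback(Paths.get("checkpoint.tmp"), Paths.get("checkpoint"));
    }
}
{code}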



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4695) Can't start Kafka server after Blue Screen

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4695:
-
Labels: windows  (was: )

> Can't start Kafka server after Blue Screen
> --
>
> Key: KAFKA-4695
> URL: https://issues.apache.org/jira/browse/KAFKA-4695
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.1.1
> Environment: Windows 10
>Reporter: Kevin Tippenhauer
>Priority: Critical
>  Labels: windows
>
> When I get a blue screen (my laptop does that quite often) I can't start my 
> Kafka servers anymore. Removing the *.index files as mentioned on 
> [KAFKA-1554|https://issues.apache.org/jira/browse/KAFKA-1554] didn't work.
> My only solution until now is removing all data under log.dirs.
> The most likely important part of the log:
> (NOTE: "Das Argument ist ungültig" is german and means "illegal argument")
> {code:log|borderStyle=solid}
> [2017-01-25 06:54:34,597] ERROR There was an error in one of the threads 
> during logs loading: java.io.IOException: Das Argument ist ungültig 
> (kafka.log.LogManager)
> [2017-01-25 06:54:34,598] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> java.io.IOException: Das Argument ist ungültig
> {code}
> Normal debug information:
> {code:title=server.log|borderStyle=solid}
> kafka_2.11-0.10.1.1# cat logs/server.log
> [2017-01-25 06:54:33,819] INFO KafkaConfig values:
> advertised.host.name = null
> advertised.listeners = null
> advertised.port = null
> authorizer.class.name =
> auto.create.topics.enable = true
> auto.leader.rebalance.enable = true
> background.threads = 10
> broker.id = 0
> broker.id.generation.enable = true
> broker.rack = null
> compression.type = producer
> connections.max.idle.ms = 600000
> controlled.shutdown.enable = true
> controlled.shutdown.max.retries = 3
> controlled.shutdown.retry.backoff.ms = 5000
> controller.socket.timeout.ms = 30000
> default.replication.factor = 1
> delete.topic.enable = false
> fetch.purgatory.purge.interval.requests = 1000
> group.max.session.timeout.ms = 300000
> group.min.session.timeout.ms = 6000
> host.name =
> inter.broker.protocol.version = 0.10.1-IV2
> leader.imbalance.check.interval.seconds = 300
> leader.imbalance.per.broker.percentage = 10
> listeners = PLAINTEXT://127.0.0.1:9092
> log.cleaner.backoff.ms = 15000
> log.cleaner.dedupe.buffer.size = 134217728
> log.cleaner.delete.retention.ms = 86400000
> log.cleaner.enable = true
> log.cleaner.io.buffer.load.factor = 0.9
> log.cleaner.io.buffer.size = 524288
> log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> log.cleaner.min.cleanable.ratio = 0.5
> log.cleaner.min.compaction.lag.ms = 0
> log.cleaner.threads = 1
> log.cleanup.policy = [delete]
> log.dir = /tmp/kafka-logs
> log.dirs = /mnt/d/Programme/kafka_2.11-0.10.1.1/data/kafka-logs
> log.flush.interval.messages = 9223372036854775807
> log.flush.interval.ms = null
> log.flush.offset.checkpoint.interval.ms = 60000
> log.flush.scheduler.interval.ms = 9223372036854775807
> log.index.interval.bytes = 4096
> log.index.size.max.bytes = 10485760
> log.message.format.version = 0.10.1-IV2
> log.message.timestamp.difference.max.ms = 9223372036854775807
> log.message.timestamp.type = CreateTime
> log.preallocate = false
> log.retention.bytes = -1
> log.retention.check.interval.ms = 300000
> log.retention.hours = 168
> log.retention.minutes = null
> log.retention.ms = null
> log.roll.hours = 168
> log.roll.jitter.hours = 0
> log.roll.jitter.ms = null
> log.roll.ms = null
> log.segment.bytes = 1073741824
> log.segment.delete.delay.ms = 60000
> max.connections.per.ip = 2147483647
> max.connections.per.ip.overrides =
> message.max.bytes = 1000012
> metric.reporters = []
> metrics.num.samples = 2
> metrics.sample.window.ms = 30000
> min.insync.replicas = 1
> num.io.threads = 8
> num.network.threads = 3
> num.partitions = 1
> num.recovery.threads.per.data.dir = 1
> num.replica.fetchers = 1
> offset.metadata.max.bytes = 4096
> offsets.commit.required.acks = -1
> offsets.commit.timeout.ms = 5000
> offsets.load.buffer.size = 5242880
> offsets.retention.check.interval.ms = 600000
> offsets.retentio
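
Returning to the reporter's note that deleting *.index files did not help: below 
is a minimal, destructive sketch of that KAFKA-1554-style workaround, assuming 
the log.dirs path from the config dump above and the 0.10-era .index/.timeindex 
suffixes. It is not an official tool; back up the data directory before trying 
anything like it.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class PurgeIndexFiles {
    public static void main(String[] args) throws IOException {
        // Assumed location, taken from the log.dirs value in the config dump.
        Path logDir = Paths.get("/mnt/d/Programme/kafka_2.11-0.10.1.1/data/kafka-logs");
        try (Stream<Path> paths = Files.walk(logDir)) {
            paths.filter(p -> p.toString().endsWith(".index")
                           || p.toString().endsWith(".timeindex"))
                 .forEach(p -> p.toFile().delete()); // broker rebuilds these on startup
        }
    }
}
{code}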

[jira] [Resolved] (KAFKA-4634) Issue of one kafka brokers not listed in zookeeper

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4634.
--
Resolution: Fixed

A similar issue was fixed in KAFKA-1387 and newer versions. Please reopen if you 
think the issue still exists


> Issue of one kafka brokers not listed in zookeeper
> --
>
> Key: KAFKA-4634
> URL: https://issues.apache.org/jira/browse/KAFKA-4634
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: Zookeeper version: 3.4.6-1569965, built on 02/20/2014 
> 09:09 GMT
> kafka_2.10-0.8.2.1
>Reporter: Maharajan Shunmuga Sundaram
>
> Hi,
> We had an incident where one of our 10 brokers was not listed in ZooKeeper's 
> broker list.
> This was verified by running the following command:
> >> echo dump | nc cz2 2181
> SessionTracker dump:
> Session Sets (4):
> 0 expire at Fri Jan 13 22:32:14 EST 2017:
> 0 expire at Fri Jan 13 22:32:16 EST 2017:
> 7 expire at Fri Jan 13 22:32:18 EST 2017:
> 0x259968e41e3
> 0x35996670d5d0001
> 0x35996670d5d
> 0x159966708470004
> 0x159966e4776
> 0x159966708470003
> 0x2599672df26
> 3 expire at Fri Jan 13 22:32:20 EST 2017:
> 0x159968e41dd
> 0x259966708550001
> 0x25996670855
> ephemeral nodes dump:
> Sessions with Ephemerals (9):
> 0x25996670855:
> /brokers/ids/112
> 0x259968e41e3:
> /brokers/ids/213
> 0x159968e41dd:
> /brokers/ids/19
> 0x159966708470003:
> /brokers/ids/110
> 0x35996670d5d:
> /brokers/ids/113
> /controller
> 0x259966708550001:
> /brokers/ids/111
> 0x159966708470004:
> /brokers/ids/212
> 0x2599672df26:
> /brokers/ids/29
> 0x35996670d5d0001:
> /brokers/ids/210
> --
> There are 10 sessions, but only 9 sessions are listed with brokers.
> The broker with id 211 is not listed; session 0x159966e4776 is not shown with 
> broker id 211.
> In the broker-side log, I do see that it connected:
> >> zgrep "0x159966e4776" *log*
>  
> zk.log:[2017-01-13 01:05:28,513] INFO Session establishment complete on 
> server cz1/10.254.2.19:2181, sessionid = 0x159966e4776, negotiated 
> timeout = 6000 (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:38,163] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:39,101] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:40,121] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:41,770] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:42,439] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:43,235] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:44,950] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:45,837] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> .
> .
> .
> .
> zk.log:[2017-01-13 01:40:14,818] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:40:15,916] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:40:19,692] INFO Client session timed out, have not 
> heard from server in 3676ms for sessionid 0x159966e4776, closing socket 
> connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:40:20,632] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> soc
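
As a cross-check for dumps like the one above, broker registrations can also be 
listed directly with the ZooKeeper client API. A small sketch, with the connect 
string and session timeout assumed from the logs in this report:

{code:java}
import org.apache.zookeeper.ZooKeeper;

public class ListBrokers {
    public static void main(String[] args) throws Exception {
        // Connect string assumed from the "echo dump | nc cz2 2181" check above.
        ZooKeeper zk = new ZooKeeper("cz2:2181", 6000, event -> { });
        for (String id : zk.getChildren("/brokers/ids", false)) {
            System.out.println("registered broker: " + id);
        }
        zk.close();
    }
}
{code}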

[jira] [Updated] (KAFKA-4391) On Windows, Kafka server stops with uncaught exception after coming back from sleep

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4391:
-
Labels: windows  (was: )

> On Windows, Kafka server stops with uncaught exception after coming back from 
> sleep
> ---
>
> Key: KAFKA-4391
> URL: https://issues.apache.org/jira/browse/KAFKA-4391
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: Windows 10, jdk1.8.0_111
>Reporter: Yiquan Zhou
>  Labels: windows
>
> Steps to reproduce:
> 1. start the zookeeper
> $ bin\windows\zookeeper-server-start.bat config/zookeeper.properties
> 2. start the Kafka server with the default properties
> $ bin\windows\kafka-server-start.bat config/server.properties
> 3. put Windows into sleep mode for 1-2 hours
> 4. wake Windows again; an exception occurs in the Kafka server console and 
> the server stops:
> {code:title=kafka console log}
> [2016-11-08 21:45:35,185] INFO Client session timed out, have not heard from 
> server in 10081379ms for sessionid 0x1584514da470000, closing socket 
> connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:40,698] INFO zookeeper state changed (Disconnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-11-08 21:45:43,029] INFO Opening socket connection to server 
> 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL 
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,044] INFO Socket connection established to 
> 127.0.0.1/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,158] INFO Unable to reconnect to ZooKeeper service, 
> session 0x1584514da470000 has expired, closing socket connection 
> (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,158] INFO zookeeper state changed (Expired) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-11-08 21:45:43,236] INFO Initiating client connection, 
> connectString=localhost:2181 sessionTimeout=6000 
> watcher=org.I0Itec.zkclient.ZkClient@11ca437b (org.apache.zookeeper.ZooKeeper)
> [2016-11-08 21:45:43,280] INFO EventThread shut down 
> (org.apache.zookeeper.ClientCnxn)
> log4j:ERROR Failed to rename [/controller.log] to 
> [/controller.log.2016-11-08-18].
> [2016-11-08 21:45:43,421] INFO Opening socket connection to server 
> 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL 
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,483] INFO Socket connection established to 
> 127.0.0.1/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,811] INFO Session establishment complete on server 
> 127.0.0.1/127.0.0.1:2181, sessionid = 0x1584514da470001, negotiated timeout = 
> 6000 (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,827] INFO zookeeper state changed (SyncConnected) 
> (org.I0Itec.zkclient.ZkClient)
> log4j:ERROR Failed to rename [/server.log] to [/server.log.2016-11-08-18].
> [2016-11-08 21:45:43,827] INFO Creating /controller (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,014] INFO Result of znode creation is: OK 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,014] INFO 0 successfully elected as leader 
> (kafka.server.ZookeeperLeaderElector)
> log4j:ERROR Failed to rename [/state-change.log] to 
> [/state-change.log.2016-11-08-18].
> [2016-11-08 21:45:44,421] INFO re-registering broker info in ZK for broker 0 
> (kafka.server.KafkaHealthcheck)
> [2016-11-08 21:45:44,436] INFO Creating /brokers/ids/0 (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,686] INFO Result of znode creation is: OK 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,686] INFO Registered broker 0 at path /brokers/ids/0 
> with addresses: PLAINTEXT -> EndPoint(192.168.0.15,9092,PLAINTEXT) 
> (kafka.utils.ZkUtils)
> [2016-11-08 21:45:44,686] INFO done re-registering broker 
> (kafka.server.KafkaHealthcheck)
> [2016-11-08 21:45:44,686] INFO Subscribing to /brokers/topics path to watch 
> for new topics (kafka.server.KafkaHealthcheck)
> [2016-11-08 21:45:45,046] INFO [ReplicaFetcherManager on broker 0] Removed 
> fetcher for partitions [test,0] (kafka.server.ReplicaFetcherManager)
> [2016-11-08 21:45:45,061] INFO New leader is 0 
> (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
> [2016-11-08 21:45:47,325] ERROR Uncaught exception in scheduled task 
> 'kafka-recovery-point-checkpoint' (kafka.utils.KafkaScheduler)
> java.io.IOException: File rename from 
> D:\tmp\kafka-logs\recovery-point-offset-checkpoint.tmp to 
> D:\tmp\kafka-logs\recovery-point-offset-checkpoint failed.
> at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:66)
> at 
> kafka.log.LogManager.kafka$log$

[jira] [Resolved] (KAFKA-1636) High CPU in very active environment

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1636.
--
Resolution: Won't Fix

ConsumerIterator blocks waiting for data from the underlying stream, so the 
parked threads are idle rather than burning CPU. Please reopen if you think the 
issue still exists


> High CPU in very active environment
> ---
>
> Key: KAFKA-1636
> URL: https://issues.apache.org/jira/browse/KAFKA-1636
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Redhat 64 bit
>  2.6.32-431.23.3.el6.x86_64 #1 SMP Wed Jul 16 06:12:23 EDT 2014 x86_64 x86_64 
> x86_64 GNU/Linux
>Reporter: Laurie Turner
>Assignee: Neha Narkhede
>
> Found the same issue on StackOverFlow below:
> http://stackoverflow.com/questions/22983435/kafka-consumer-threads-are-in-waiting-state-and-lag-is-getting-increased
> This is a very busy environment, and the majority of the CPU time seems to be 
> spent in the await method. 
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:237(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2093(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478(Compiled
>  Code))
> 4XESTACKTRACEat 
> kafka/consumer/ConsumerIterator.makeNext(ConsumerIterator.scala:65(Compiled 
> Code))
> 4XESTACKTRACEat 
> kafka/consumer/ConsumerIterator.makeNext(ConsumerIterator.scala:33(Compiled 
> Code))
> 4XESTACKTRACEat 
> kafka/utils/IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:61(Compiled
>  Code))
> 4XESTACKTRACEat 
> kafka/utils/IteratorTemplate.hasNext(IteratorTemplate.scala:53(Compiled Code))
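
The resolution comment above rests on the fact that threads parked in 
LinkedBlockingQueue.poll are idle, not burning CPU. For callers who would rather 
not block indefinitely, the old high-level consumer exposes consumer.timeout.ms; 
a hedged sketch against the 0.8-era API (topic and group names are placeholders):

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class BoundedWaitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "cpu-test");
        props.put("consumer.timeout.ms", "1000"); // iterator gives up after 1s idle

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("test", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(topicCount);
        ConsumerIterator<byte[], byte[]> it = streams.get("test").get(0).iterator();
        try {
            while (it.hasNext()) it.next(); // a parked hasNext() is idle, not busy
        } catch (ConsumerTimeoutException e) {
            // no data within consumer.timeout.ms; back off or exit
        }
        consumer.shutdown();
    }
}
{code}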



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2173) Kafka died after throw more error

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2173.
--
Resolution: Cannot Reproduce

This might have been fixed in later versions. Please reopen if you think the 
issue still exists


> Kafka died after throw more error
> -
>
> Key: KAFKA-2173
> URL: https://issues.apache.org/jira/browse/KAFKA-2173
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: VPS Server CentOs 6.6 4G Ram
>Reporter: Truyet Nguyen
>
> Kafka died after server.log threw repeated errors: 
> [2015-05-05 16:08:34,616] ERROR Closing socket for /127.0.0.1 because of 
> error (kafka.network.Processor)
> java.io.IOException: Broken pipe
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
>   at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
>   at kafka.network.MultiSend.writeTo(Transmission.scala:101)
>   at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
>   at kafka.network.Processor.write(SocketServer.scala:472)
>   at kafka.network.Processor.run(SocketServer.scala:342)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1675) bootstrapping tidy-up

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1675.
--
Resolution: Fixed

The gradlew and gradlew.bat scripts were removed from the repo. Please reopen if 
you think the issue still exists


> bootstrapping tidy-up
> -
>
> Key: KAFKA-1675
> URL: https://issues.apache.org/jira/browse/KAFKA-1675
> Project: Kafka
>  Issue Type: Bug
>Reporter: Szczepan Faber
>Assignee: Ivan Lyutov
> Attachments: KAFKA-1675.patch
>
>
> I'd like to suggest following changes:
> 1. remove the 'gradlew' and 'gradlew.bat' scripts from the source tree. Those 
> scripts don't work, e.g. they fail with an exception when invoked. I just got a 
> user report where those scripts were invoked and it led to an exception that 
> was not easy to grasp. The bootstrapping step will generate those files anyway.
> 2. move the 'gradleVersion' extra property from the 'build.gradle' into 
> 'gradle.properties'. Otherwise it is hard to automate the bootstrapping 
> process - in order to find out the gradle version, I need to evaluate the 
> build script, and for that I need gradle with the correct version (kind of a 
> vicious circle). Project properties declared in the gradle.properties file 
> can be accessed exactly the same as the 'ext' properties, for example: 
> 'project.gradleVersion'.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1980.
--
Resolution: Fixed

Closing as per the comments. Please reopen if you think the issue still exists


> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?
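
To make the reporter's guess concrete, here is a self-contained sketch (plain 
Java, not Kafka's code) contrasting an eager slice over an Iterable, which 
materializes everything it walks, with a lazy slice over an Iterator, which runs 
in constant memory:

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.stream.IntStream;

public class SliceDemo {
    // Eager: copies the whole source into memory before slicing; can OOM.
    static List<Integer> sliceEager(Iterable<Integer> source, int n) {
        List<Integer> all = new ArrayList<Integer>();
        for (int v : source) all.add(v); // materializes every element
        return all.subList(0, Math.min(n, all.size()));
    }

    // Lazy: wraps the iterator and stops after n elements, O(1) memory.
    static Iterator<Integer> sliceLazy(final Iterator<Integer> source, final int n) {
        return new Iterator<Integer>() {
            int taken = 0;
            public boolean hasNext() { return taken < n && source.hasNext(); }
            public Integer next() { taken++; return source.next(); }
        };
    }

    public static void main(String[] args) {
        // Lazily take 40 of a billion values without allocating them all.
        Iterator<Integer> lazy =
                sliceLazy(IntStream.range(0, 1_000_000_000).iterator(), 40);
        while (lazy.hasNext()) System.out.println(lazy.next());
    }
}
{code}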



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3570) It'd be nice to line up display output in columns in ConsumerGroupCommand

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3570.
--
Resolution: Fixed

> It'd be nice to line up display output in columns in ConsumerGroupCommand
> -
>
> Key: KAFKA-3570
> URL: https://issues.apache.org/jira/browse/KAFKA-3570
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Greg Zoller
>Priority: Trivial
>
> Not a huge deal but the output for ConsumerGroupCommand is pretty messy.  
> It'd be cool to line up the columns on the display output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4297.
--
Resolution: Duplicate

Closing this, as there is a more recent PR for KAFKA-4931.

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Assignee: Tom Bentley
>Priority: Critical
>  Labels: easyfix
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command shows this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this (a sketch of 
> an alternative PID lookup follows the listing):
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> :/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/argparse4j-0.5.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-api-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-file-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-json-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/connect-runtime-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/guava-18.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-core-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-databind-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javassist-3.18.2-GA.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-client-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-common-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-guava-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jersey-server-2.22.2.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/jopt-simple-4.9.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10.0.1-sources.jar:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../libs/kafka_2.11-0.10
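
One possible workaround, sketched below on the assumption of a Java 9+ runtime: 
enumerate processes via ProcessHandle, whose reported command line is not subject 
to the column truncation that defeats the ps-and-grep approach in 
kafka-server-stop.sh. The class name here is ours, not part of Kafka.

{code:java}
public class FindKafkaPid {
    public static void main(String[] args) {
        ProcessHandle.allProcesses()
            // info().commandLine() may be empty for processes we cannot inspect
            .filter(ph -> ph.info().commandLine()
                            .map(cl -> cl.contains("kafka.Kafka"))
                            .orElse(false))
            .forEach(ph -> System.out.println("kafka broker pid: " + ph.pid()));
    }
}
{code}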

[jira] [Resolved] (KAFKA-4389) kafka-server.stop.sh not work

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4389.
--
Resolution: Duplicate

> kafka-server.stop.sh not work
> -
>
> Key: KAFKA-4389
> URL: https://issues.apache.org/jira/browse/KAFKA-4389
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.0
> Environment: centos7
>Reporter: JianwenSun
>
> The /proc/<pid>/cmdline file is limited to 4096 bytes, so ps ax | grep 
> 'kafka\.Kafka' does not work. I also don't want to use jps. Any other ways? 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16147595#comment-16147595
 ] 

Manikumar commented on KAFKA-1980:
--

[~ndimiduk] Apologies for not giving a proper explanation. The problematic code 
is in ReplayLogProducer. ReplayLogProducer is a rarely used tool, and it uses the 
deprecated older consumer API. The tool may be deprecated or updated to the new 
API in KAFKA-5523.

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reopened KAFKA-1980:
--

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1980.
--
Resolution: Won't Fix

[~ndimiduk] Agree. Updated the JIRA.

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-888) problems when shutting down the java consumer .

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-888.
-
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists


> problems when shutting down the java consumer .
> ---
>
> Key: KAFKA-888
> URL: https://issues.apache.org/jira/browse/KAFKA-888
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Linux kacper-pc 3.2.0-40-generic #64-Ubuntu SMP 2013 
> x86_64 x86_64 x86_64 GNU/Linux, scala 2.9.2 
>Reporter: kacper chwialkowski
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: bug, consumer, exception
>
> I got the following error when shutting down the consumer:
> ConsumerFetcherThread-test-consumer-group_kacper-pc-1367268338957-12cb5a0b-0-0]
>  INFO  kafka.consumer.SimpleConsumer - Reconnect due to socket error: 
> java.nio.channels.ClosedByInterruptException: null
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  ~[na:1.7.0_21]
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:386) 
> ~[na:1.7.0_21]
>   at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:220) 
> ~[na:1.7.0_21]
>   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_21]
>   at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) 
> ~[na:1.7.0_21]
>   at kafka.utils.Utils$.read(Utils.scala:394) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>  ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>  ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:75) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:73)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
> and this is how I create my consumer:
>   public Boolean call() throws Exception {
> Map<String, Integer> topicCountMap = new HashMap<>();
> topicCountMap.put(topic, new Integer(1));
> Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = 
> consumer.createMessageStreams(topicCountMap);
> KafkaStream<byte[], byte[]> stream = 
> consumerMap.get(topic).get(0);
> ConsumerIterator<byte[], byte[]> it = stream.iterator();
> it.next();
> LOGGER.info("Received the message. Shutting down");
> consumer.commitOffsets();
> consumer.shutdown();
> return true;
> }



--
This message was sent by Atlassian JIRA
(v

[jira] [Resolved] (KAFKA-1463) producer fails with scala.tuple error

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1463.
--
Resolution: Won't Fix

Please reopen if you think the issue still exists


> producer fails with scala.tuple error
> -
>
> Key: KAFKA-1463
> URL: https://issues.apache.org/jira/browse/KAFKA-1463
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
> Environment: java, springsource
>Reporter: Joe
>Assignee: Jun Rao
>
> Running on a Windows machine, trying to debug my first Kafka program. The 
> program fails on the following line:
> producer = new kafka.javaapi.producer.Producer(
>   new ProducerConfig(props)); 
> ERROR:
> Exception in thread "main" java.lang.VerifyError: class scala.Tuple2$mcLL$sp 
> overrides final method _1.()Ljava/lang/Object;
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)...
> Unable to find a solution online.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1632) No such method error on KafkaStream.head

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1632.
--
Resolution: Cannot Reproduce

This is most likely caused by a Kafka version mismatch. Please reopen if you 
think the issue still exists


> No such method error on KafkaStream.head
> 
>
> Key: KAFKA-1632
> URL: https://issues.apache.org/jira/browse/KAFKA-1632
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: aarti gupta
>
> The following error is thrown when I call KafkaStream.head(), as shown in 
> the code snippet below:
>  WARN -  java.lang.NoSuchMethodError: 
> kafka.consumer.KafkaStream.head()Lkafka/message/MessageAndMetadata;
> My use case is that I want to block on the receive() method; when a 
> message is published on the topic, I 'return the head' of the queue to the 
> calling method, which processes it.
> I do not use partitioning and have a single stream.
> import com.google.common.collect.Maps;
> import x.x.x.Task;
> import kafka.consumer.ConsumerConfig;
> import kafka.consumer.KafkaStream;
> import kafka.javaapi.consumer.ConsumerConnector;
> import kafka.javaapi.consumer.ZookeeperConsumerConnector;
> import kafka.message.MessageAndMetadata;
> import org.codehaus.jettison.json.JSONException;
> import org.codehaus.jettison.json.JSONObject;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import java.io.IOException;
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executors;
> /**
>  * @author agupta
>  */
> public class KafkaConsumerDelegate implements ConsumerDelegate {
> private ConsumerConnector consumerConnector;
> private String topicName;
> private static Logger LOG = 
> LoggerFactory.getLogger(KafkaProducerDelegate.class.getName());
> private final Map<String, Integer> topicCount = Maps.newHashMap();
> private Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams;
> private List<KafkaStream<byte[], byte[]>> kafkaStreams;
> @Override
> public Task receive(final boolean consumerConfirms) {
> try {
> LOG.info("Kafka consumer delegate listening on topic " + 
> getTopicName());
> kafkaStreams = messageStreams.get(getTopicName());
> final KafkaStream<byte[], byte[]> kafkaStream = 
> kafkaStreams.get(0);
> return Executors.newSingleThreadExecutor().submit(new 
> Callable<Task>() {
> @Override
> public Task call() throws Exception {
>  final MessageAndMetadata<byte[], byte[]> 
> messageAndMetadata = kafkaStream.head();
> final Task message = new Task() {
> @Override
> public byte[] getBytes() {
> return messageAndMetadata.message();
> }
> };
> return message;
> }
> }).get();
> } catch (Exception e) {
> LOG.warn("Error in consumer " + e.getMessage());
> }
> return null;
> }
> @Override
> public void initialize(JSONObject configData, boolean publisherAckMode) 
> throws IOException {
> try {
> this.topicName = configData.getString("topicName");
> LOG.info("Topic name is " + topicName);
> } catch (JSONException e) {
> e.printStackTrace();
> LOG.error("Error parsing configuration", e);
> }
> Properties properties = new Properties();
> properties.put("zookeeper.connect", "localhost:2181");
> properties.put("group.id", "testgroup");
> ConsumerConfig consumerConfig = new ConsumerConfig(properties);
> //only one stream, and one topic, (Since we are not supporting 
> partitioning)
> topicCount.put(getTopicName(), 1);
> consumerConnector = new ZookeeperConsumerConnector(consumerConfig);
> messageStreams = consumerConnector.createMessageStreams(topicCount);
> }
> @Override
> public void stop() throws IOException {
> //TODO
> throw new UnsupportedOperationException("Method Not Implemented");
> }
> public String getTopicName() {
> return this.topicName;
> }
> }
> Lastly, I am using the following binary 
> kafka_2.8.0-0.8.1.1  
> and the following maven dependency
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.10</artifactId>
>   <version>0.8.1.1</version>
> </dependency>
> Any suggestions?
> Thanks
> aarti



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1492) Getting error when sending producer request at the broker end with a single broker

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1492.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists


> Getting error when sending producer request at the broker end with a single 
> broker
> --
>
> Key: KAFKA-1492
> URL: https://issues.apache.org/jira/browse/KAFKA-1492
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
>Reporter: sriram
>Assignee: Jun Rao
>
> Tried to run a simple example, sending a message to a single broker. 
> Getting this error: 
> [2014-06-13 08:35:45,402] INFO Closing socket connection to /127.0.0.1. 
> (kafka.network.Processor)
> [2014-06-13 08:35:45,440] WARN [KafkaApi-1] Produce request with correlation 
> id 2 from client  on partition [samsung,0] failed due to Leader not local for 
> partition [samsung,0] on broker 1 (kafka.server.KafkaApis)
> [2014-06-13 08:35:45,440] INFO [KafkaApi-1] Send the close connection 
> response due to error handling produce request [clientId = , correlationId = 
> 2, topicAndPartition = [samsung,0]] with Ack=0 (kafka.server.KafkaApis)
> OS- Windows 7 , JDK 1.7 , Scala 2.10



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-270) sync producer / consumer test producing lot of kafka server exceptions & not getting the throughput mentioned here http://incubator.apache.org/kafka/performance.html

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-270.
-
Resolution: Won't Fix

Closing due to inactivity. Please reopen if you think the issue still exists


>  sync producer / consumer test producing lot of kafka server exceptions & not 
> getting the throughput mentioned here 
> http://incubator.apache.org/kafka/performance.html
> --
>
> Key: KAFKA-270
> URL: https://issues.apache.org/jira/browse/KAFKA-270
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.7
> Environment: Linux 2.6.18-238.1.1.el5 , x86_64 x86_64 x86_64 
> GNU/Linux 
> ext3 file system with raid10
>Reporter: Praveen Ramachandra
>  Labels: clients, core, newdev, performance
>
> I am getting ridiculously low producer and consumer throughput.
> I am using default config values for producer, consumer and broker
> which are very good starting points, as they should yield sufficient
> throughput.
> I would appreciate it if you could point out what settings or code changes are 
> needed to get higher throughput.
> I changed num.partitions in the server.config to 10.
> Please look below for exception and error messages from the server
> BTW: I am running server, zookeeper, producer, consumer on the same host.
> Consumer Code=
>long startTime = System.currentTimeMillis();
>long endTime = startTime + runDuration*1000l;
>Properties props = new Properties();
>props.put("zk.connect", "localhost:2181");
>props.put("groupid", subscriptionName); // to support multiple
> subscribers
>props.put("zk.sessiontimeout.ms", "400");
>props.put("zk.synctime.ms", "200");
>props.put("autocommit.interval.ms", "1000");
>consConfig =  new ConsumerConfig(props);
>consumer =
> kafka.consumer.Consumer.createJavaConsumerConnector(consConfig);
>Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
>topicCountMap.put(topicName, new Integer(1)); // has the topic
> to which to subscribe to
>Map<String, List<KafkaMessageStream<Message>>> consumerMap =
> consumer.createMessageStreams(topicCountMap);
>KafkaMessageStream<Message> stream =  
> consumerMap.get(topicName).get(0);
>ConsumerIterator<Message> it = stream.iterator();
>while(System.currentTimeMillis() <= endTime )
>{
>it.next(); // discard data
>consumeMsgCount.incrementAndGet();
>}
> End consumer CODE
> =Producer CODE
>props.put("serializer.class", "kafka.serializer.StringEncoder");
>props.put("zk.connect", "localhost:2181");
>// Use random partitioner. Don't need the key type. Just
> set it to Integer.
>// The message is of type String.
>producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
>long endTime = startTime + runDuration*1000l; // run duration
> is in seconds
>while(System.currentTimeMillis() <= endTime )
>{
>String msg =
> org.apache.commons.lang.RandomStringUtils.random(msgSizes.get(0));
>producer.send(new ProducerData(topicName, msg));
>pc.incrementAndGet();
>}
>java.util.Date date = new java.util.Date(System.currentTimeMillis());
>System.out.println(date+" :: stopped producer for topic"+topicName);
> =END Producer CODE
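
One knob worth trying for the throughput question, sketched below with 0.7-era 
property names that should be verified against the docs for this exact build: 
switch the producer to async mode so sends are batched instead of issued one 
synchronous request per message. Co-locating the broker, ZooKeeper, producer and 
consumer on one host will also cap throughput.

{code:java}
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

public class AsyncProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("zk.connect", "localhost:2181");
        // Assumed 0.7-era async settings; verify the names for your build.
        props.put("producer.type", "async"); // buffer and batch in the background
        props.put("batch.size", "200");      // messages per batch
        props.put("queue.time", "5000");     // max ms to hold a batch before sending
        Producer<Integer, String> producer =
                new Producer<Integer, String>(new ProducerConfig(props));
        // ... send in a loop as in the reporter's code above, then:
        producer.close();
    }
}
{code}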
> I see a bunch of exceptions like this
> [2012-02-11 02:44:11,945] ERROR Closing socket for /188.125.88.145 because of 
> error (kafka.network.Processor)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>   at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:405)
>   at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:506)
>   at kafka.message.FileMessageSet.writeTo(FileMessageSet.scala:107)
>   at kafka.server.MessageSetSend.writeTo(MessageSetSend.scala:51)
>   at kafka.network.MultiSend.writeTo(Transmission.scala:95)
>   at kafka.network.Processor.write(SocketServer.scala:332)
>   at kafka.network.Processor.run(SocketServer.scala:209)
>   at java.lang.Thread.run(Thread.java:662)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcher.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:171)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
>   at kafka.utils.Utils$.read(Utils.scala:485)
>   at 
> kafka.network.

[jira] [Resolved] (KAFKA-4520) Kafka broker fails with not so user-friendly error msg when log.dirs is not set

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4520.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

> Kafka broker fails with not so user-friendly error msg when log.dirs is not 
> set
> ---
>
> Key: KAFKA-4520
> URL: https://issues.apache.org/jira/browse/KAFKA-4520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Buchi Reddy B
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> I tried to bring up a Kafka broker without setting the log.dirs property, and 
> it failed with the following error.
> {code:java}
> [2016-12-07 23:41:08,020] INFO KafkaConfig values:
>  advertised.host.name = 100.96.7.10
>  advertised.listeners = null
>  advertised.port = null
>  authorizer.class.name =
>  auto.create.topics.enable = true
>  auto.leader.rebalance.enable = true
>  background.threads = 10
>  broker.id = 0
>  broker.id.generation.enable = false
>  broker.rack = null
>  compression.type = producer
>  connections.max.idle.ms = 600000
>  controlled.shutdown.enable = true
>  controlled.shutdown.max.retries = 3
>  controlled.shutdown.retry.backoff.ms = 5000
>  controller.socket.timeout.ms = 30000
>  default.replication.factor = 1
>  delete.topic.enable = false
>  fetch.purgatory.purge.interval.requests = 1000
>  group.max.session.timeout.ms = 300000
>  group.min.session.timeout.ms = 6000
>  host.name =
>  inter.broker.protocol.version = 0.10.1-IV2
>  leader.imbalance.check.interval.seconds = 300
>  leader.imbalance.per.broker.percentage = 10
>  listeners = PLAINTEXT://0.0.0.0:9092
>  log.cleaner.backoff.ms = 15000
>  log.cleaner.dedupe.buffer.size = 134217728
>  log.cleaner.delete.retention.ms = 86400000
>  log.cleaner.enable = true
>  log.cleaner.io.buffer.load.factor = 0.9
>  log.cleaner.io.buffer.size = 524288
>  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>  log.cleaner.min.cleanable.ratio = 0.5
>  log.cleaner.min.compaction.lag.ms = 0
>  log.cleaner.threads = 1
>  log.cleanup.policy = [delete]
>  log.dir = /tmp/kafka-logs
>  log.dirs =
>  log.flush.interval.messages = 9223372036854775807
>  log.flush.interval.ms = null
>  log.flush.offset.checkpoint.interval.ms = 60000
>  log.flush.scheduler.interval.ms = 9223372036854775807
>  log.index.interval.bytes = 4096
>  log.index.size.max.bytes = 10485760
>  log.message.format.version = 0.10.1-IV2
>  log.message.timestamp.difference.max.ms = 9223372036854775807
>  log.message.timestamp.type = CreateTime
>  log.preallocate = false
>  log.retention.bytes = -1
>  log.retention.check.interval.ms = 300000
>  log.retention.hours = 168
>  log.retention.minutes = null
>  log.retention.ms = null
>  log.roll.hours = 168
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 60000
>  max.connections.per.ip = 2147483647
>  max.connections.per.ip.overrides =
>  message.max.bytes = 1000012
>  metric.reporters = []
>  metrics.num.samples = 2
>  metrics.sample.window.ms = 30000
>  min.insync.replicas = 1
>  num.io.threads = 8
>  num.network.threads = 3
>  num.partitions = 1
>  num.recovery.threads.per.data.dir = 1
>  num.replica.fetchers = 1
>  offset.metadata.max.bytes = 4096
>  offsets.commit.required.acks = -1
>  offsets.commit.timeout.ms = 5000
>  offsets.load.buffer.size = 5242880
>  offsets.retention.check.interval.ms = 60
>  offsets.retention.minutes = 1440
>  offsets.topic.compression.codec = 0
>  offsets.topic.num.partitions = 50
>  offsets.topic.replication.factor = 3
>  offsets.topic.segment.bytes = 104857600
>  port = 9092
>  principal.builder.class = class 
> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>  producer.purgatory.purge.interval.requests = 1000
>  queued.max.requests = 500
>  quota.consumer.default = 9223372036854775807
>  quota.producer.default = 9223372036854775807
>  quota.window.num = 11
>  quota.window.size.seconds = 1
>  replica.fetch.backoff.ms = 1000
>  replica.fetch.max.bytes = 1048576
>  replica.fetch.min.bytes = 1
>  replica.fetch.response.max.bytes = 10485760
>  replica.fetch.wait.max.ms = 500
>  replica.high.watermark.checkpoint.interval.ms = 5000
>  replica.lag.time.max.ms = 1
>  replica.socket.receive.buffer.bytes = 65536
>  replica.socket.timeout.ms = 3
>  replication.quota.window.num = 11
>  replication.quota.window.si

[jira] [Resolved] (KAFKA-1347) Create a system test for network partitions

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1347.
--
Resolution: Duplicate

This is being fixed in KAFKA-5476.

> Create a system test for network partitions
> ---
>
> Key: KAFKA-1347
> URL: https://issues.apache.org/jira/browse/KAFKA-1347
> Project: Kafka
>  Issue Type: Test
>Reporter: Jay Kreps
>
> We got some free and rather public QA here:
> http://aphyr.com/posts/293-call-me-maybe-kafka
> We have since added a configuration to disable unclean leader election which 
> allows you to prefer consistency over availability when all brokers fail.
> This has some unit tests, but ultimately there is no reason to believe this 
> works unless we have a fairly aggressive system test case for it.
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests
> It would be good to add support for network partitions. I don't think we 
> actually need to try to use the jepsen stuff directly; we can just use the 
> underlying tools it uses--iptables and tc. These are Linux specific, but that 
> is probably okay. You can see these at work here:
> https://github.com/aphyr/jepsen/blob/master/src/jepsen/control/net.clj
> Having this would help provide better evidence that this works now, and would 
> keep it working in the future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-818) Request failure metrics on the Kafka brokers are not instrumented for all requests

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-818.
-
Resolution: Duplicate

Similar metrics are proposed in KAFKA-5746.

> Request failure metrics on the Kafka brokers are not instrumented for all 
> requests
> -
>
> Key: KAFKA-818
> URL: https://issues.apache.org/jira/browse/KAFKA-818
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Neha Narkhede
>
> We have a metric for failed produce requests, a metric for failed fetch 
> requests, but no metrics for failed offset requests and failed topic metadata 
> requests. It will be good to have consistent metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1447) Controlled shutdown deadlock when trying to send state updates

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1447.
--
Resolution: Fixed

 Please reopen if you think the issue still exists.


> Controlled shutdown deadlock when trying to send state updates
> --
>
> Key: KAFKA-1447
> URL: https://issues.apache.org/jira/browse/KAFKA-1447
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Sam Meder
>Priority: Critical
>  Labels: newbie++
>
> We're seeing controlled shutdown indefinitely stuck on trying to send out 
> state change messages to the other brokers:
> [2014-05-03 04:01:30,580] INFO [Socket Server on Broker 4], Shutdown 
> completed (kafka.network.SocketServer)
> [2014-05-03 04:01:30,581] INFO [Kafka Request Handler on Broker 4], shutting 
> down (kafka.server.KafkaRequestHandlerPool)
> and stuck on:
> "kafka-request-handler-12" daemon prio=10 tid=0x7f1f04a66800 nid=0x6e79 
> waiting on condition [0x7f1ad5767000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for <0x00078e91dc20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:349)
> at 
> kafka.controller.ControllerChannelManager.sendRequest(ControllerChannelManager.scala:57)
> locked <0x00078e91dc38> (a java.lang.Object)
> at kafka.controller.KafkaController.sendRequest(KafkaController.scala:655)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:298)
> at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2.apply(ControllerChannelManager.scala:290)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
> at 
> kafka.controller.ControllerBrokerRequestBatch.sendRequestsToBrokers(ControllerChannelManager.scala:290)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:97)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:269)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$3.apply(KafkaController.scala:253)
> at scala.Option.foreach(Option.scala:197)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply$mcV$sp(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3$$anonfun$apply$1.apply(KafkaController.scala:253)
> at kafka.utils.Utils$.inLock(Utils.scala:538)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:252)
> at 
> kafka.controller.KafkaController$$anonfun$shutdownBroker$3.apply(KafkaController.scala:249)
> at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:130)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:275)
> at kafka.controller.KafkaController.shutdownBroker(KafkaController.scala:249)
> locked <0x00078b495af0> (a java.lang.Object)
> at kafka.server.KafkaApis.handleControlledShutdownRequest(KafkaApis.scala:264)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:192)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
> at java.lang.Thread.run(Thread.java:722)
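
Not Kafka's actual controller code, just a minimal, self-contained sketch of the blocking call at the top of that stack: put() on a full bounded LinkedBlockingQueue parks the calling thread, and it stays parked forever if nothing ever drains the queue.

{code:java}
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingPutDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue with capacity 1, standing in for the controller's
        // per-broker message queue.
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        queue.put("first"); // fills the queue
        System.out.println("queue full; the next put() will park this thread");
        // With no consumer draining the queue, this call parks indefinitely,
        // just like the kafka-request-handler thread in the stack trace.
        queue.put("second");
    }
}
{code}

Running this prints the message and then hangs, mirroring the parked request-handler thread that still holds its lock during controlled shutdown.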



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1402) Create unit test helper that stops and starts a cluster

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1402.
--
Resolution: Fixed

We have KafkaServerTestHarness, EmbeddedKafkaCluster, EmbeddedZookeeper helper 
classes.

> Create unit test helper that stops and starts a cluster
> ---
>
> Key: KAFKA-1402
> URL: https://issues.apache.org/jira/browse/KAFKA-1402
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>
> We have a zookeeper test harness. We used to have a kafka server test 
> harness, but it looks like it has been deleted.
> Each test that wants to manage a server starts and stops it itself. This is a 
> little problematic as it may not do a good job of cleaning up and there are 
> lots of details to get right.
> We should have an EmbeddedKafkaCluster class like the EmbeddedZookeeper we 
> have that starts N brokers and convert the existing full server classes to 
> use this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1920) Add a metric to count client side errors in BrokerTopicMetrics

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1920.
--
Resolution: Duplicate

Similar metrics are proposed in KAFKA-5746.

> Add a metric to count client side errors in BrokerTopicMetrics
> --
>
> Key: KAFKA-1920
> URL: https://issues.apache.org/jira/browse/KAFKA-1920
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> Currently the BrokerTopicMetrics count only "failures" across all topics and 
> for individual topics. Should we consider adding a metric to count the number 
> of client side errors?
> This essentially counts the number of bad requests per topic.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1138) Remote producer uses the hostname defined in broker

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1138.
--
Resolution: Fixed

> Remote producer uses the hostname defined in broker
> ---
>
> Key: KAFKA-1138
> URL: https://issues.apache.org/jira/browse/KAFKA-1138
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
>Reporter: Hyun-Gul Roh
>Assignee: Jun Rao
>
> When the producer API on a node other than the broker sends a message to a 
> broker, only a TopicMetadataRequest is sent; observing "kafka-request.log" 
> shows that no ProducerRequest follows.
> According to my analysis, when the producer API sends the ProducerRequest, it 
> seems to use the hostname defined in the broker. So, if that hostname is not 
> registered in DNS, the producer cannot send the ProducerRequest.
> I am attaching the log:
> [2013-11-21 15:28:49,464] ERROR Failed to collate messages by topic, 
> partition due to: fetching topic metadata for topics [Set(test)] from broker 
> [ArrayBuffer(id:0,host:111.111.111.111,port:9092)] failed 
> (kafka.producer.async.DefaultEventHandler)
> [2013-11-21 15:28:49,465] INFO Back off for 100 ms before retrying send. 
> Remaining retries = 1 (kafka.producer.async.DefaultEventHandler)
> [2013-11-21 15:28:49,566] INFO Fetching metadata from broker 
> id:0,host:111.111.111.111,port:9092 with correlation id 6 for 1 topic(s) 
> Set(test) (kafka.client.ClientUtils$)
> [2013-11-21 15:28:49,819] ERROR Producer connection to 111.111.111.111:9092 
> unsuccessful (kafka.producer.SyncProducer)
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.Net.connect0(Native Method)
>   at sun.nio.ch.Net.connect(Net.java:465)
>   at sun.nio.ch.Net.connect(Net.java:457)
>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
>   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>   at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>   at 
> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>   at 
> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:79)
>   at kafka.utils.Utils$.swallow(Utils.scala:186)
>   at kafka.utils.Logging$class.swallowError(Logging.scala:105)
>   at kafka.utils.Utils$.swallowError(Utils.scala:45)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:79)
>   at 
> kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
>   at 
> kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
>   at 
> kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
>   at scala.collection.immutable.Stream.foreach(Stream.scala:254)
>   at 
> kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
>   at 
> kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> [2013-11-21 15:28:49,821] WARN Fetching topic metadata with correlation id 6 
> for topics [Set(test)] from broker [id:0,host:111.111.111.111,port:9092] 
> failed (kafka.client.ClientUtils$)
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.Net.connect0(Native Method)
>   at sun.nio.ch.Net.connect(Net.java:465)
>   at sun.nio.ch.Net.connect(Net.java:457)
>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
>   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>   at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>   at 
> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>   at 
> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:79)
>   at kafka.utils.Utils$.swallow(Utils.scala:186)
>   at kafka.utils.Logging$class.swallowError(Logging.scala:105)
>   at kafka.utils.Utils$.swallowError(Utils.scala:45)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:79)
>   at 

[jira] [Resolved] (KAFKA-908) Write duplicate messages during broker failure

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-908.
-
Resolution: Fixed

Closing inactive issue. Please reopen if you think the issue still exists.

> Write duplicate messages during broker failure
> --
>
> Key: KAFKA-908
> URL: https://issues.apache.org/jira/browse/KAFKA-908
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Tudor Scurtu
>Assignee: Neha Narkhede
>Priority: Minor
>
> Reproduction steps:
> 1. Start a multi-broker quorum (e.g. 3 brokers)
> 2. Create a multi-replica topic (e.g. 3 replicas)
> 3. Start an async performance producer with a fixed number of messages to 
> produce
> 4. Force kill a partition leader broker using SIGKILL (no clean shutdown) - 
> make sure you kill it during actual writes
> 5. Wait for the producer to stop
> 6. Read from the topic from the beginning - there will be a small number of 
> duplicate messages
> Reproduction rate: sometimes
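
For context, duplicates caused by producer retries around a killed leader are what the idempotent producer (brokers and clients 0.11+) later addressed. A minimal sketch, with placeholder bootstrap server and topic names:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Brokers de-duplicate retried batches by producer id and sequence
        // number, so a SIGKILLed leader no longer yields duplicate writes.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value")); // placeholder topic
            producer.flush();
        }
    }
}
{code}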



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2028) Unable to start the ZK instance after myid file was missing and had to recreate it.

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2028.
--
Resolution: Fixed

This is related to a configuration issue. Please reopen if you think the issue still exists.


> Unable to start the ZK instance after myid file was missing and had to 
> recreate it.
> ---
>
> Key: KAFKA-2028
> URL: https://issues.apache.org/jira/browse/KAFKA-2028
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.1.1
> Environment: Non Prod
>Reporter: InduR
>
> Created a Dev 3-node cluster environment in Jan, and the environment had been 
> up and running without any issues until a few days ago.
>  The Kafka server stopped running but the ZK listener was up. Noticed that the 
> myid file was missing in all 3 servers.
> Recreating the file while ZK was still running did not help.
> Stopped all of the ZK/Kafka server instances and see the following error 
> when starting ZK.
> kafka_2.10-0.8.1.1
> OS : RHEL
> [root@lablx0025 bin]# ./zookeeper-server-start.sh 
> ../config/zookeeper.properties &
> [1] 31053
> [* bin]# [2015-03-17 15:04:33,876] INFO Reading configuration from: 
> ../config/zookeeper.properties 
> (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
> [2015-03-17 15:04:33,885] INFO Defaulting to majority quorums 
> (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
> [2015-03-17 15:04:33,911] DEBUG preRegister called. 
> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@4891d863, 
> name=log4j:logger=kafka (kafka)
> [2015-03-17 15:04:33,915] INFO Starting quorum peer 
> (org.apache.zookeeper.server.quorum.QuorumPeerMain)
> [2015-03-17 15:04:33,940] INFO binding to port 0.0.0.0/0.0.0.0:2181 
> (org.apache.zookeeper.server.NIOServerCnxn)
> [2015-03-17 15:04:33,966] INFO tickTime set to 3000 
> (org.apache.zookeeper.server.quorum.QuorumPeer)
> [2015-03-17 15:04:33,966] INFO minSessionTimeout set to -1 
> (org.apache.zookeeper.server.quorum.QuorumPeer)
> [2015-03-17 15:04:33,966] INFO maxSessionTimeout set to -1 
> (org.apache.zookeeper.server.quorum.QuorumPeer)
> [2015-03-17 15:04:33,966] INFO initLimit set to 5 
> (org.apache.zookeeper.server.quorum.QuorumPeer)
> [2015-03-17 15:04:34,023] ERROR Failed to increment parent cversion for: 
> /consumers/console-consumer-6249/offsets/test 
> (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /consumers/console-consumer-6249/offsets/test
> at 
> org.apache.zookeeper.server.DataTree.incrementCversion(DataTree.java:1218)
> at 
> org.apache.zookeeper.server.persistence.FileTxnSnapLog.processTransaction(FileTxnSnapLog.java:222)
> at 
> org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:150)
> at 
> org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:222)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:398)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:143)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:103)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
> [2015-03-17 15:04:34,027] FATAL Unable to load database on disk 
> (org.apache.zookeeper.server.quorum.QuorumPeer)
> java.io.IOException: Failed to process transaction type: 2 error: 
> KeeperErrorCode = NoNode for /consumers/console-consumer-6249/offsets/test
> at 
> org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:152)
> at 
> org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:222)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:398)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:143)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:103)
> at 
> org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
> [2015-03-17 15:04:34,027] FATAL Unexpected exception, exiting abnormally 
> (org.apache.zookeeper.server.quorum.QuorumPeerMain)
> java.lang.RuntimeException: Unable to run quorum server
> at 
> org.apache.zookeeper.

[jira] [Resolved] (KAFKA-2343) Clarify KafkaConsumer.poll rebalance behavior

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2343.
--
Resolution: Fixed

Javadocs updated in newer versions.

> Clarify KafkaConsumer.poll rebalance behavior
> -
>
> Key: KAFKA-2343
> URL: https://issues.apache.org/jira/browse/KAFKA-2343
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>
> The current javadoc for KafkaConsumer.poll says the following:
> {code}
>  * The offset used for fetching the data is governed by whether or not 
> {@link #seek(TopicPartition, long)} is used.
>  * If {@link #seek(TopicPartition, long)} is used, it will use the 
> specified offsets on startup and on every
>  * rebalance, to consume data from that offset sequentially on every 
> poll. If not, it will use the last checkpointed
>  * offset using {@link #commit(Map, CommitType) commit(offsets, sync)} 
> for the subscribed list of partitions.
> {code}
> Unless I am misreading, this suggests that rebalance should reset to the 
> seeked position (if one was set). The consumer definitely doesn't do this 
> currently, so we should either fix the javadoc if that is not the desired 
> behavior or fix the code.
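
A minimal sketch of the pattern the javadoc should arguably describe (connection, group, topic, and offset values are placeholders): poll() does not re-apply a seek() position after a rebalance, so code that wants one must redo the seek in a ConsumerRebalanceListener.

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekAfterRebalance {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "demo-group");              // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final long startOffset = 42L; // placeholder position to restore

        consumer.subscribe(Arrays.asList("test-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Re-apply the desired position every time partitions are
                // (re)assigned; a plain seek() before the rebalance is forgotten.
                for (TopicPartition tp : partitions) {
                    consumer.seek(tp, startOffset);
                }
            }
        });

        while (true) {
            consumer.poll(100); // process records...
        }
    }
}
{code}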



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2750) Sender.java: handleProduceResponse does not check protocol version

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2750.
--
Resolution: Fixed

This was fixed in KAFKA-4462 / KIP-97 for newer clients.

> Sender.java: handleProduceResponse does not check protocol version
> --
>
> Key: KAFKA-2750
> URL: https://issues.apache.org/jira/browse/KAFKA-2750
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Geoff Anderson
>
> If you try to run a 0.9 producer against an 0.8.2.2 Kafka broker, you get a 
> fairly cryptic error message:
> [2015-11-04 18:55:43,583] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'throttle_time_ms': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:462)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:279)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:141)
> Although we shouldn't expect an 0.9 producer to work against an 0.8.X broker 
> since the protocol version has been increased, perhaps the error could be 
> clearer.
> The cause seems to be that in Sender.java, handleProduceResponse does not 
> have any mechanism for checking the protocol version of the received produce 
> response - it just calls a constructor which blindly tries to grab the 
> throttle time field which in this case fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2842) BrokerEndPoint regex doesn't support hostname with _

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2842.
--
Resolution: Fixed

This was fixed in KAFKA-3719

> BrokerEndPoint regex doesn't support hostname with _
> ---
>
> Key: KAFKA-2842
> URL: https://issues.apache.org/jira/browse/KAFKA-2842
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sachin Pasalkar
>Priority: Minor
>
> If you look at the code of BrokerEndPoint.scala, it has the regex uriParseExp. 
> This regex is used to validate broker endpoints. However, it fails to validate 
> hostnames with _ in them, e.g. adfs_212:9092.
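
An illustrative demonstration (these are not the exact patterns from BrokerEndPoint.scala): a host character class that omits the underscore rejects adfs_212:9092, while one that includes `_` accepts it.

{code:java}
import java.util.regex.Pattern;

public class HostPortRegexDemo {
    // Illustrative host:port patterns only, not Kafka's actual uriParseExp.
    private static final Pattern WITHOUT_UNDERSCORE =
            Pattern.compile("([0-9a-zA-Z\\-.]*):([0-9]+)");
    private static final Pattern WITH_UNDERSCORE =
            Pattern.compile("([0-9a-zA-Z\\-._]*):([0-9]+)");

    public static void main(String[] args) {
        String endpoint = "adfs_212:9092";
        System.out.println(WITHOUT_UNDERSCORE.matcher(endpoint).matches()); // false
        System.out.println(WITH_UNDERSCORE.matcher(endpoint).matches());    // true
    }
}
{code}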



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2897) Class NIOServerCnxn$Factory not found due to mismatch in dependencies

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2897.
--
Resolution: Won't Fix

Closing inactive issue. Please reopen if you think the issue still exists.

> Class NIOServerCnxn$Factory not found due to mismatch in dependencies
> -
>
> Key: KAFKA-2897
> URL: https://issues.apache.org/jira/browse/KAFKA-2897
> Project: Kafka
>  Issue Type: Bug
>Reporter: Yuval Steinberg
>Priority: Trivial
>
> It seems that a mismatch between the zookeeper & zkclient versions in the 
> Kafka pom.xml causes this error upon initialization:
> The pom file (e.g. of kafka_2.10:0.8.2.1) requires zookeeper:3.4.6 & 
> zkclient:0.3. However:
> * While zookeeper:3.3.x had a class NIOServerCnxn$Factory, in 3.4.x it became 
> an independent class (NIOServerCnxnFactory)
> * The ZkServer class of zkclient:0.3 still uses (imports) 
> NIOServerCnxn.Factory and only changes to NIOServerCnxnFactory in 0.5.
> It seems that the version of zkclient in the pom file should be updated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2986) Consumer group doesn't lend itself well for slow consumers with varying message size

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2986.
--
Resolution: Fixed

Fixed in newer versions. Please reopen if you think the issue still exists.


> Consumer group doesn't lend itself well for slow consumers with varying 
> message size
> 
>
> Key: KAFKA-2986
> URL: https://issues.apache.org/jira/browse/KAFKA-2986
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Java consumer API 0.9.0.0
>Reporter: Jens Rantil
>Assignee: Neha Narkhede
>
> I sent a related post to the Kafka mailing list, but haven't received any 
> response: 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201512.mbox/%3CCAL%2BArfWNfkpymkNDuf6UJ06CJJ63XC1bPHeT4TSYXKjSsOpu-Q%40mail.gmail.com%3E
>  So far, I think this is a design issue in Kafka so I'm taking the liberty of 
> creating an issue.
> *Use case:*
>  - Slow consumption. Maybe around 20 seconds per record.
>  - Large variation in message size: Serialized tasks are in the range of ~300 
> bytes up to ~3 MB.
>  - Consumption latency (20 seconds) is independent of message size.
> *Code example:*
> {noformat}
> while (isRunning()) {
>   ConsumerRecords records = consumer.poll(100);
>   for (final ConsumerRecord record : records) {
> // Handle record...
>   }
> }
> {noformat}
> *Problem:* Kafka doesn't have any issues with large messages (as long as you 
> bump some configuration flags). However, the problem is two-fold:
> - KafkaConsumer#poll is the only call that sends healthchecks.
> - There is no limit as to how many messages KafkaConsumer#poll will return. 
> The limit is only set to the total number of bytes to be prefetched. This is 
> problematic for varying message sizes as the session timeout becomes 
> extremely hard to tune:
> -- delay until next KafkaConsumer#poll call is proportional to the number of 
> records returned by previous KafkaConsumer#poll call.
> -- KafkaConsumer#poll will return many small records or just a few larger 
> records. For many small messages, the risk of the session timeout kicking in 
> is very large. Raising the session timeout by the orders of magnitude 
> required to handle the smaller messages increases the latency until a dead 
> consumer is discovered a thousandfold.
> *Proposed fixes:* I do not claim to be a Kafka expert, but two ideas are to 
> either
>  - add a `KafkaConsumer#healthy` call to let the broker know we are still 
> processing records; or
>  - add an upper number of message limit to `KafkaConsumer#poll`. I am 
> thinking of something like `KafkaConsumer#poll(timeout, nMaxMessages)`. This 
> could obviously be set as a configuration property instead. To avoid the broker 
> having to look at the messages it sends, I suggest the KafkaConsumer decides 
> how many messages it returns from poll.
> *Workarounds:*
>  - Have different topics for different message sizes. Makes tuning of 
> partition prefetch easier.
>  - Use another tool :)
> *Questions:* Should Kafka be able to handle this case? Maybe I am using the 
> wrong tool for this and Kafka is simply designed for high-throughput/low 
> latency?
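
The second proposed fix landed later as the max.poll.records consumer setting (KIP-41, Kafka 0.10.0). A minimal sketch, with placeholder connection, group, and topic names, that bounds the time between polls regardless of message size:

{code:java}
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "slow-workers");            // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Cap how many records one poll() returns, so the gap between polls
        // is at most one record's processing time (~20 seconds in this report).
        props.put("max.poll.records", "1");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("tasks")); // placeholder topic
        while (true) {
            ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
            for (ConsumerRecord<byte[], byte[]> record : records) {
                // ~20 seconds of work per record in the scenario above.
            }
        }
    }
}
{code}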



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3079) org.apache.kafka.common.KafkaException: java.lang.SecurityException: Configuration Error:

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3079.
--
Resolution: Cannot Reproduce

Mostly related to a configuration error. Please reopen if you think the issue still exists.


> org.apache.kafka.common.KafkaException: java.lang.SecurityException: 
> Configuration Error:
> -
>
> Key: KAFKA-3079
> URL: https://issues.apache.org/jira/browse/KAFKA-3079
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
> Environment: RHEL 6
>Reporter: Mohit Anchlia
> Attachments: kafka_server_jaas.conf
>
>
> After enabling security I am seeing the following error even though the JAAS 
> file has no mention of "Zookeeper". I used the following steps:
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> [2016-01-07 19:05:15,329] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: java.lang.SecurityException: 
> Configuration Error:
> Line 8: expected [{], found [Zookeeper]
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:102)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:262)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.lang.SecurityException: Configuration Error:
> Line 8: expected [{], found [Zookeeper]
> at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:110)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at java.lang.Class.newInstance(Class.java:374)
> at 
> javax.security.auth.login.Configuration$2.run(Configuration.java:258)
> at 
> javax.security.auth.login.Configuration$2.run(Configuration.java:250)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> javax.security.auth.login.Configuration.getConfiguration(Configuration.java:249)
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:99)
> ... 5 more
> Caused by: java.io.IOException: Configuration Error:
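
A minimal sketch (the file path is a placeholder) for surfacing such JAAS syntax errors before starting the broker: loading the file the same way the stack trace above does raises the SecurityException with the offending line number.

{code:java}
import javax.security.auth.login.Configuration;

public class JaasSyntaxCheck {
    public static void main(String[] args) {
        // Placeholder path: point this at the JAAS file the broker will use.
        System.setProperty("java.security.auth.login.config",
                "/etc/kafka/kafka_server_jaas.conf");
        // Parses the file eagerly; malformed sections throw SecurityException
        // ("Configuration Error: Line N: expected [{] ..."), as seen above.
        Configuration.getConfiguration();
        System.out.println("JAAS file parsed without errors");
    }
}
{code}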



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3647) Unable to set a ssl provider

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3647.
--
Resolution: Fixed

Closing as per above comments.

> Unable to set a ssl provider
> 
>
> Key: KAFKA-3647
> URL: https://issues.apache.org/jira/browse/KAFKA-3647
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.1
> Environment: Centos, OracleJRE 8, Vagrant
>Reporter: Elvar
>Priority: Minor
>
> When defining an SSL provider, Kafka does not start because the provider was 
> not found.
> {code}
> [2016-05-02 13:48:48,252] FATAL [Kafka Server 11], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.KafkaException: 
> java.security.NoSuchProviderException: no such provider: sun.security.ec.SunEC
> at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:44)
> {code}
> To test
> {code}
> /bin/kafka-server-start /etc/kafka/server.properties --override 
> ssl.provider=sun.security.ec.SunEC
> {code}
> This is stopping us from talking to Kafka with SSL from Go programs because 
> no common cipher suites are available.
> Using sslscan, these are the ciphers available from Kafka:
> {code}
>  Supported Server Cipher(s):
>Accepted  TLSv1  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLSv1  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLSv1  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS11  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS11  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS11  128 bits  EDH-DSS-DES-CBC3-SHA
>Accepted  TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA256
>Accepted  TLS12  256 bits  DHE-DSS-AES256-SHA
>Accepted  TLS12  128 bits  DHE-DSS-AES128-GCM-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA256
>Accepted  TLS12  128 bits  DHE-DSS-AES128-SHA
>Accepted  TLS12  128 bits  EDH-DSS-DES-CBC3-SHA
>  Preferred Server Cipher(s):
>SSLv2  0 bits(NONE)
>TLSv1  256 bits  DHE-DSS-AES256-SHA
>TLS11  256 bits  DHE-DSS-AES256-SHA
>TLS12  256 bits  DHE-DSS-AES256-GCM-SHA384
> {code}
> From the Golang documentation, these are available there:
> {code}
> TLS_RSA_WITH_RC4_128_SHAuint16 = 0x0005
> TLS_RSA_WITH_3DES_EDE_CBC_SHA   uint16 = 0x000a
> TLS_RSA_WITH_AES_128_CBC_SHAuint16 = 0x002f
> TLS_RSA_WITH_AES_256_CBC_SHAuint16 = 0x0035
> TLS_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009c
> TLS_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009d
> TLS_ECDHE_ECDSA_WITH_RC4_128_SHAuint16 = 0xc007
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHAuint16 = 0xc009
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHAuint16 = 0xc00a
> TLS_ECDHE_RSA_WITH_RC4_128_SHA  uint16 = 0xc011
> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xc012
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA  uint16 = 0xc013
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA  uint16 = 0xc014
> TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   uint16 = 0xc02f
> TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xc02b
> TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   uint16 = 0xc030
> TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xc02c
> {code}
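
One likely cause (an assumption, not confirmed in this report): ssl.provider takes a JCA provider name such as SunEC, not the implementing class name sun.security.ec.SunEC. A quick sketch to list the provider names the running JVM actually registers:

{code:java}
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Prints the registered JCA provider names; the value given to
        // ssl.provider must match one of these names exactly.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersion());
        }
    }
}
{code}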



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4188) compilation issues with org.apache.kafka.clients.consumer.internals.Fetcher.java

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4188.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.

> compilation issues with 
> org.apache.kafka.clients.consumer.internals.Fetcher.java
> 
>
> Key: KAFKA-4188
> URL: https://issues.apache.org/jira/browse/KAFKA-4188
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
> Environment: Maven home: /maven325
> Java version: 1.8.0_40, vendor: Oracle Corporation
>Reporter: Martin Gainty
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> from client module
> org.apache.kafka.clients.consumer.internals.Fetcher.java won't compile; here is 
> *one* of the errors:
> private PartitionRecords parseFetchedData(CompletedFetch 
> completedFetch) {
> //later on highWatermark is referenced in partition and produces ERROR
> this.sensors.recordsFetchLag.record(partition.highWatermark - 
> record.offset());
> /kafka/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java:[590,66]
>  cannot find symbol
> [ERROR] symbol:   variable highWatermark
> // assuming partition is a TopicPartition, I can correct this by inserting:
> public long highWatermark = 0L; // into TopicPartition
> Is Fetcher.java producing correct behaviour?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4411) broker don't have access to kafka zookeeper nodes

2017-08-31 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4411.
--
Resolution: Not A Problem

 It is necessary to have the same principal name across all brokers for ZK 
Authentication.

> broker don't have access to kafka zookeeper nodes
> -
>
> Key: KAFKA-4411
> URL: https://issues.apache.org/jira/browse/KAFKA-4411
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, config
>Affects Versions: 0.9.0.1
> Environment: Red Hat Enterprise Linux Server release 7.0 
> Java 1.8.0_66-b17 
> Kafka 0.9.0.1
>Reporter: Mohammed amine GARMES
>Priority: Critical
>  Labels: security
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> I have 2 Kafka servers configured to start with Kafka security. I try to 
> start the Kafka servers with the JAAS below ==>
> server 1
>  KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafka.keytab"
> principal="kafka/kafka1.test@test.net";
> };
> // ZooKeeper client authentication
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafka.keytab"
> principal="kafka/kafka1.test@test.net";
> };
> server 2 :
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafka.keytab"
> principal="kafka/kafka2.test@test.net";
> };
> // ZooKeeper client authentication
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafka.keytab"
> principal="kafka/kafka2.test@test.net";
> };
> The problem:
> When I start Kafka server 1 all is fine, but when I try to start the 
> second server I have an issue because it does not have access to the ZooKeeper 
> node (/brokers) for Kafka. The whole ZooKeeper path /brokers is locked by the 
> first server, so the second server does not have the right access to write to 
> this path.
> The ACL on /brokers is the FQDN of the first server; normally /brokers should 
> be open for all, with a closed ACL only on /brokers/ids/1. In that case the 
> second server could write to /brokers and close /brokers/ids/2 for itself.
> I found a solution but I am not sure it is the right one: I created a new 
> Kafka Kerberos user, so all servers use the same user:
> Server1
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafka.keytab"
> principal="kafka/kafka1.test@test.net";
> };
> // ZooKeeper client authentication
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafkaZk.keytab"
> principal="kafka/kafkazk.test@test.net";
> };
> 
> Server2
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafka.keytab"
> principal="kafka/kafka2.test@test.net";
> };
> // ZooKeeper client authentication
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/opt/kafka/config/kafkaZk.keytab"
> principal="kafka/kafkazk.test@test.net";
> };
> Can you help me, or clarify how I can use Kafka security correctly?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4337) Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the specified zookeeper)

2017-09-01 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16150144#comment-16150144
 ] 

Manikumar commented on KAFKA-4337:
--

As mentioned by [~ijuma], the create command creates a topic using the ZooKeeper 
cluster of a particular Kafka cluster, so we should pass host:port pairs from the 
same ZooKeeper cluster.

> Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA 
> API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the 
> specified zookeeper)
> ---
>
> Key: KAFKA-4337
> URL: https://issues.apache.org/jira/browse/KAFKA-4337
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Bharat Patel
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  Want to use the below code snippet to create a topic in multiple ZooKeepers 
> with the Kafka Java APIs. When I specify 2 ZooKeeper IPs in the 
> zookeeperConnect variable, it only creates the topic in one of the ZooKeepers. 
> The two ZooKeepers belong to 2 different Kafka clusters.
>  String zookeeperConnect = zookeeperIPs; // Multiple zookeeper IPs
>int sessionTimeoutMs = 10 * 1000;
>int connectionTimeoutMs = 8 * 1000;
>try {
> ZkClient zkClient = new ZkClient(
> zookeeperConnect,
> sessionTimeoutMs,
> connectionTimeoutMs,
> ZKStringSerializer$.MODULE$);
> boolean isSecureKafkaCluster = false;
> ZkUtils zkUtils = new ZkUtils(zkClient, 
> new ZkConnection(zookeeperConnect), isSecureKafkaCluster);
>  String topic1 = "nameofTopictobeCreated";
>  int partitions = 1;
>  int replication = 1;
>  Properties topicConfig = new Properties(); // add per-topic 
> configurations settings here
>  AdminUtils.createTopic(zkUtils, topic1, partitions, replication, 
> topicConfig);
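
A minimal sketch of that advice, built from the snippet above: one ZkClient/ZkUtils per cluster's own ZooKeeper connect string (the connect strings are placeholders), creating the topic once per cluster rather than mixing both clusters' hosts into one connect string.

{code:java}
import java.util.Properties;

import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;

public class CreateTopicPerCluster {
    public static void main(String[] args) {
        // One ZooKeeper connect string per Kafka cluster (placeholders).
        String[] clusters = { "zk-cluster-a:2181", "zk-cluster-b:2181" };
        for (String zookeeperConnect : clusters) {
            ZkClient zkClient = new ZkClient(zookeeperConnect, 10 * 1000, 8 * 1000,
                    ZKStringSerializer$.MODULE$);
            ZkUtils zkUtils = new ZkUtils(zkClient,
                    new ZkConnection(zookeeperConnect), false);
            try {
                // Same call as in the report, issued once against each cluster.
                AdminUtils.createTopic(zkUtils, "nameofTopictobeCreated", 1, 1,
                        new Properties());
            } finally {
                zkUtils.close();
            }
        }
    }
}
{code}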



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4337) Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the specified zookeeper)

2017-09-01 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4337.
--
Resolution: Won't Fix

> Create topic in multiple zookeepers with Kafka AdminUtils.CreateTopic JAVA 
> API in Kafka 0.9.0.1 gives Error (Topic gets created in only one of the 
> specified zookeeper)
> ---
>
> Key: KAFKA-4337
> URL: https://issues.apache.org/jira/browse/KAFKA-4337
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Bharat Patel
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  Want to use the below code snippet to create a topic in multiple ZooKeepers 
> with the Kafka Java APIs. When I specify 2 ZooKeeper IPs in the 
> zookeeperConnect variable, it only creates the topic in one of the ZooKeepers. 
> The two ZooKeepers belong to 2 different Kafka clusters.
>  String zookeeperConnect = zookeeperIPs; // Multiple zookeeper IPs
>int sessionTimeoutMs = 10 * 1000;
>int connectionTimeoutMs = 8 * 1000;
>try {
> ZkClient zkClient = new ZkClient(
> zookeeperConnect,
> sessionTimeoutMs,
> connectionTimeoutMs,
> ZKStringSerializer$.MODULE$);
> boolean isSecureKafkaCluster = false;
> ZkUtils zkUtils = new ZkUtils(zkClient, 
> new ZkConnection(zookeeperConnect), isSecureKafkaCluster);
>  String topic1 = "nameofTopictobeCreated";
>  int partitions = 1;
>  int replication = 1;
>  Properties topicConfig = new Properties(); // add per-topic 
> configurations settings here
>  AdminUtils.createTopic(zkUtils, topic1, partitions, replication, 
> topicConfig);



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4264) kafka-server-stop.sh fails if Kafka is launched via kafka-server-start.sh

2017-09-01 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4264.
--
Resolution: Duplicate

This is related to KAFKA-4931. A PR is available for KAFKA-4931. 

> kafka-server-stop.sh fails if Kafka is launched via kafka-server-start.sh
> --
>
> Key: KAFKA-4264
> URL: https://issues.apache.org/jira/browse/KAFKA-4264
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.10.0.1
> Environment: Tested in Debian Jessy
>Reporter: Alex Schmitz
>Priority: Trivial
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> kafka-server-stop.sh greps for the process ID to kill with the following: 
> bq. PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $1}')
> However, if Kafka is launched via the kafka-server-start.sh script, the 
> process doesn't include kafka.Kafka, so the grep fails to find the process 
> and the script returns the failure message: No Kafka server to stop.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3059) ConsumerGroupCommand should allow resetting offsets for consumer groups

2017-09-04 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3059.
--
Resolution: Duplicate

This functionality is implemented in KAFKA-4743

> ConsumerGroupCommand should allow resetting offsets for consumer groups
> ---
>
> Key: KAFKA-3059
> URL: https://issues.apache.org/jira/browse/KAFKA-3059
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Jason Gustafson
>
> As discussed here:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3CCA%2BndhHpf3ib%3Ddsh9zvtfVjRiUjSz%2B%3D8umXm4myW%2BpBsbTYATAQ%40mail.gmail.com%3E
> * Given a consumer group, remove all stored offsets
> * Given a group and a topic, remove offset for group  and topic
> * Given a group, topic, partition and offset - set the offset for the 
> specified partition and group with the given value



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1505) Broker failed to start because of corrupted replication-offset-checkpoint file

2017-09-04 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1505.
--
Resolution: Fixed

This was fixed in newer versions. Please reopen if you think the issue still exists.


> Broker failed to start because of corrupted replication-offset-checkpoint file
> --
>
> Key: KAFKA-1505
> URL: https://issues.apache.org/jira/browse/KAFKA-1505
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.1
> Environment: Window Server 2012;JDK1.7.0
>Reporter: Jie Tong
>Assignee: Neha Narkhede
>
> The replication-offset-checkpoint file seems to have some blank entries where 
> the topic, partition, offset information is expected.
> Broker service cannot be started with the following error:
> [2014-06-18 11:09:51,272] ERROR [KafkaApi-5] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:2;ControllerEpoch:35;CorrelationId:11;ClientId:id_2-host_10.65.127.89-port_9092;PartitionState:(x,4)
>  -> 
> (LeaderAndIsrInfo:(Leader:14,ISR:14,LeaderEpoch:33,ControllerEpoch:35),ReplicationFactor:3),AllReplicas:14,5,6),(xm,20)
>  -> 
> (LeaderAndIsrInfo:(Leader:11,ISR:11,4,LeaderEpoch:11,ControllerEpoch:32),ReplicationFactor:3),AllReplicas:11,4,5),(xm,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:11,ISR:11,4,LeaderEpoch:28,ControllerEpoch:32),ReplicationFactor:3),AllReplicas:11,4,5),(xm_int,3)
>  -> 
> (LeaderAndIsrInfo:(Leader:17,ISR:17,18,LeaderEpoch:26,ControllerEpoch:30),ReplicationFactor:3),AllReplicas:5,17,18),(x_int,9)
>  -> 
> (LeaderAndIsrInfo:(Leader:11,ISR:11,LeaderEpoch:38,ControllerEpoch:35),ReplicationFactor:3),AllReplicas:11,5,6),(xm_int,11)
>  -> 
> (LeaderAndIsrInfo:(Leader:13,ISR:13,LeaderEpoch:29,ControllerEpoch:35),ReplicationFactor:3),AllReplicas:13,5,6),(x_int,3)
>  -> 
> (LeaderAndIsrInfo:(Leader:19,ISR:19,0,LeaderEpoch:25,ControllerEpoch:32),ReplicationFactor:3),AllReplicas:5,19,0),(xm,1)
>  -> 
> (LeaderAndIsrInfo:(Leader:12,ISR:12,LeaderEpoch:30,ControllerEpoch:35),ReplicationFactor:3),AllReplicas:12,5,6),(x,3)
>  -> 
> (LeaderAndIsrInfo:(Leader:13,ISR:13,4,LeaderEpoch:22,ControllerEpoch:32),ReplicationFactor:3),AllReplicas:13,4,5),(xm,14)
>  -> 
> (LeaderAndIsrInfo:(Leader:18,ISR:18,19,LeaderEpoch:24,ControllerEpoch:31),ReplicationFactor:3),AllReplicas:5,18,19),(xm_int,10)
>  -> 
> (LeaderAndIsrInfo:(Leader:12,ISR:12,4,LeaderEpoch:25,ControllerEpoch:32),ReplicationFactor:3),AllReplicas:12,4,5),(x,15)
>  -> 
> (LeaderAndIsrInfo:(Leader:17,ISR:17,16,LeaderEpoch:24,ControllerEpoch:30),ReplicationFactor:3),AllReplicas:5,16,17),(x_int,8)
>  -> 
> (LeaderAndIsrInfo:(Leader:10,ISR:10,4,LeaderEpoch:26,ControllerEpoch:32),ReplicationFactor:3),AllReplicas:10,4,5),(xm,21)
>  -> 
> (LeaderAndIsrInfo:(Leader:12,ISR:12,LeaderEpoch:8,ControllerEpoch:35),ReplicationFactor:3),AllReplicas:12,5,6);Leaders:id:11,host:25.126.81.157,port:9092,id:14,host:25.126.81.159,port:9092,id:12,host:25.126.81.158,port:9092,id:17,host:10.153.63.196,port:9092,id:18,host:10.153.63.214,port:9092,id:19,host:10.65.127.95,port:9092,id:13,host:10.65.127.93,port:9092,id:10,host:10.65.127.92,port:9092
>  (kafka.server.KafkaApis)
> java.lang.NumberFormatException: For input string: "  
>   
>   
>   "
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:481)
> at java.lang.Integer.parseInt(Integer.java:527)
> at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:231)
> at 
> scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> at 
> kafka.server.OffsetCheckpoint.liftedTree2$1(OffsetCheckpoint.scala:77)
> at 
> kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:73)
> at 
> kafka.cluster.Partition.getOrCreateReplica(Partition.scala:91)
> at 
> kafka.server.ReplicaManager$$anonfun$makeFollowers$5.apply(ReplicaManager.scala:347)
> at 
> kafka.server.ReplicaManager$$anonfun$makeFollowers$5.apply(ReplicaManager.scala:346)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)

<    1   2   3   4   5   6   7   8   9   10   >