[jira] [Commented] (KAFKA-1521) Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)

2016-12-14 Thread Anish Khanzode (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15749639#comment-15749639
 ] 

Anish Khanzode commented on KAFKA-1521:
---

This is really a problem when the Kafka consumer is used in an embedded 
environment. I would love an API that lets me pass in a pluggable metrics 
system, or skip metrics entirely if I don't care about them.
Does the new API suffer from the same issue?
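
For what it's worth, the newer Java producer does expose a pluggable reporter 
hook via the metric.reporters config. A minimal sketch, with the broker 
address and reporter class name as placeholders (not from this thread):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class PluggableMetricsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Comma-separated list of classes implementing
        // org.apache.kafka.common.metrics.MetricsReporter; this class
        // name is hypothetical.
        props.put("metric.reporters", "com.example.MyMetricsReporter");
        KafkaProducer<String, String> producer =
                new KafkaProducer<String, String>(props);
        producer.close(); // shuts down the client's own metrics and reporters
    }
}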

> Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)
> -
>
> Key: KAFKA-1521
> URL: https://issues.apache.org/jira/browse/KAFKA-1521
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0, 0.8.1.1
> Environment: Tomcat Container or Any other J2EE container
>Reporter: Bhavesh Mistry
>Assignee: Jun Rao
>Priority: Minor
>
> Hi Kafka Team,
> We are running multiple webapps in a Tomcat container, and we have producers 
> managed by a ServletContextListener (lifecycle). On contextInitialized we 
> create the producer, and on contextDestroyed we call producer.close(), but 
> the underlying metrics library does not shut down, so we have a thread leak. 
> I had to call Metrics.defaultRegistry().shutdown() to resolve this. Is this 
> a known issue? I know the metrics library has a JVM shutdown hook, but it 
> will not be invoked, since the container thread is un-deploying the webapp: 
> the class loader goes away and the leaking thread can no longer find the 
> underlying Kafka classes. Because of this, Tomcat does not shut down 
> gracefully.
> Are you planning to un-register metrics when Producer.close() is called, or 
> to shut down the metrics pool for the client.id?
> Here are the logs:
> SEVERE: The web application [  ] appears to have started a thread named 
> [metrics-meter-tick-thread-1] but has failed to stop it. This is very likely 
> to create a memory leak.
> SEVERE: The web application [] appears to have started a thread named 
> [metrics-meter-tick-thread-2] but has failed to stop it. This is very likely 
> to create a memory leak.
> Thanks,
> Bhavesh
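
A minimal sketch of the lifecycle and workaround described above, assuming 
the 0.8.x producer and Yammer metrics-core 2.x on the classpath (the broker 
address is a placeholder):

import java.util.Properties;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import com.yammer.metrics.Metrics;
import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

public class KafkaProducerLifecycle implements ServletContextListener {
    private Producer<String, String> producer;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        producer.close();
        // Workaround from the report above: close() alone leaves the shared
        // Yammer metrics threads (metrics-meter-tick-thread-*) running, so
        // shut the default registry down explicitly before the webapp's
        // class loader is discarded.
        Metrics.defaultRegistry().shutdown();
    }
}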



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3071) Kafka Server 0.8.2 ERROR OOME with siz

2016-12-09 Thread Anish Khanzode (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735600#comment-15735600
 ] 

Anish Khanzode commented on KAFKA-3071:
---

Is this an issue that needs attention? I see my consumer JVM sometimes dies.
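
For context, a hedged illustration (simplified names, not Kafka's actual 
BoundedByteBufferReceive source) of the receive pattern the stack trace 
quoted below implies: the 0.8.x network layer reads a four-byte size prefix 
and then allocates a heap buffer of that size, so a corrupt or 
foreign-protocol prefix such as 743364196 (~700 MB) can throw the 
OutOfMemoryError:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class SizePrefixedReceive {
    // Reads one size-prefixed response from a broker socket stream.
    static ByteBuffer readResponse(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int size = din.readInt(); // 4-byte big-endian length prefix
        // The allocation trusts whatever the prefix claims, so a bogus
        // value forces a huge heap allocation and can OOME the JVM.
        ByteBuffer buffer = ByteBuffer.allocate(size);
        din.readFully(buffer.array(), 0, size);
        return buffer;
    }
}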

> Kafka Server 0.8.2 ERROR OOME with siz
> --
>
> Key: KAFKA-3071
> URL: https://issues.apache.org/jira/browse/KAFKA-3071
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.0
> Environment: Linux * 2.6.32-431.23.3.el6.x86_64 #1 SMP Wed 
> Jul 16 06:12:23 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: vijay bhaskar
>  Labels: build
> Fix For: 0.8.2.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> [2016-01-06 12:34:18.819-0700] INFO Truncating log hughes-order-status-73 to offset 18. (kafka.log.Log)
> [2016-01-06 12:34:18.819-0700] INFO Truncating log troubleshoot-completed-125 to offset 3. (kafka.log.Log)
> [2016-01-06 12:34:19.064-0700] DEBUG Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. (kafka.utils.KafkaScheduler)
> [2016-01-06 12:34:19.106-0700] DEBUG Scheduling task [__consumer_offsets,0] with initial delay 0 ms and period -1 ms. (kafka.utils.KafkaScheduler)
> [2016-01-06 12:34:19.106-0700] INFO Loading offsets from [__consumer_offsets,0] (kafka.server.OffsetManager)
> [2016-01-06 12:34:19.108-0700] INFO Finished loading offsets from [__consumer_offsets,0] in 2 milliseconds. (kafka.server.OffsetManager)
> [2016-01-06 12:34:27.023-0700] ERROR OOME with size 743364196 (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
> at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
> at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
> at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
> at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
> at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
> at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:72)
> at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:113)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:113)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:113)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:112)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:112)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:112)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:111)
> at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:97)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:89)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> [2016-01-06 12:34:32.003-0700] ERROR OOME with size 743364196 (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
> at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
> at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
> at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
> at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
> at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
> at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:80)
> at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:113)
> at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:113)
> at 

[jira] [Commented] (KAFKA-1041) Number of file handles increases indefinitely in producer if broker host is unresolvable

2014-02-28 Thread Anish Khanzode (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13916100#comment-13916100
 ] 

Anish Khanzode commented on KAFKA-1041:
---

Is 0.8.1 released? Can I get this applied on a released stable branch?

> Number of file handles increases indefinitely in producer if broker host is 
> unresolvable
> 
>
> Key: KAFKA-1041
> URL: https://issues.apache.org/jira/browse/KAFKA-1041
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
> Environment: *unix*
>Reporter: Rajasekar Elango
>Assignee: Rajasekar Elango
>  Labels: features, newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1041-patch.diff
>
>
> We found an issue: if the broker host is unresolvable, the number of file 
> handles keeps increasing for every message we produce, and eventually all 
> available file handles in the operating system are used up. If the broker 
> itself is not running but the broker host name is resolvable, the open file 
> handle count stays flat.
> lsof output shows that the number of these open file handles continues to 
> grow for every message we produce:
> java  19631  relango  81u  sock  0,6  0t0  196966526  can't identify protocol
> I can easily reproduce this with the console producer. If I run the console 
> producer with the right hostname while the broker is not running, it exits 
> after three tries. But if I run the console producer with an unresolvable 
> broker, it throws the exception below and continues to wait for user input; 
> every time I enter a new message, it opens a socket, and the file handle 
> count keeps increasing.
> Here is the exception in the producer:
> ERROR fetching topic metadata for topics [Set(test-1378245487417)] from broker [ArrayBuffer(id:0,host:localhost1,port:6667)] failed (kafka.utils.Utils$)
> kafka.common.KafkaException: fetching topic metadata for topics [Set(test-1378245487417)] from broker [ArrayBuffer(id:0,host:localhost1,port:6667)] failed
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:51)
> at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
> at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:79)
> at kafka.utils.Utils$.swallow(Utils.scala:186)
> at kafka.utils.Logging$class.swallowError(Logging.scala:105)
> at kafka.utils.Utils$.swallowError(Utils.scala:45)
> at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:79)
> at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
> at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
> at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
> at scala.collection.immutable.Stream.foreach(Stream.scala:526)
> at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
> at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> Caused by: java.nio.channels.UnresolvedAddressException
> at sun.nio.ch.Net.checkAddress(Net.java:30)
> at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:487)
> at kafka.network.BlockingChannel.connect(BlockingChannel.scala:59)
> at kafka.producer.SyncProducer.connect(SyncProducer.scala:151)
> at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:166)
> at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:73)
> at kafka.producer.SyncProducer.send(SyncProducer.scala:117)
> at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:37)
> ... 12 more
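
A hedged sketch of the leak pattern the trace suggests (simplified, not 
Kafka's actual BlockingChannel source): SocketChannel.connect throws 
java.nio.channels.UnresolvedAddressException, which is a RuntimeException, 
so error handling that only catches IOException never closes the freshly 
opened channel, and one descriptor leaks per send attempt. Closing on any 
failure releases the handle:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class LeakFreeConnect {
    static SocketChannel connect(String host, int port) throws IOException {
        SocketChannel channel = SocketChannel.open(); // consumes a file handle
        try {
            channel.connect(new InetSocketAddress(host, port));
            return channel;
        } catch (IOException | RuntimeException e) {
            // Close on *any* failure, including UnresolvedAddressException,
            // so the descriptor is released before rethrowing; the lsof
            // growth described above comes from skipping this close.
            channel.close();
            throw e;
        }
    }
}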



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (KAFKA-1041) Number of file handles increases indefinitely in producer if broker host is unresolvable

2014-02-28 Thread Anish Khanzode (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anish Khanzode updated KAFKA-1041:
--

Attachment: (was: KAFKA-1041-patch.diff)




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (KAFKA-1041) Number of file handles increases indefinitely in producer if broker host is unresolvable

2014-02-28 Thread Anish Khanzode (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anish Khanzode updated KAFKA-1041:
--

Attachment: KAFKA-1041-patch.diff

Here is the updated patch for 0.8





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (KAFKA-1041) Number of file handles increases indefinitely in producer if broker host is unresolvable

2014-02-28 Thread Anish Khanzode (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13915861#comment-13915861
 ] 

Anish Khanzode edited comment on KAFKA-1041 at 2/28/14 3:22 PM:


Here is the updated patch for 0.8
Thanks for looking into it.


was (Author: akhanzode):
Here is the updated patch for 0.8





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (KAFKA-1041) Number of file handles increases indefinitely in producer if broker host is unresolvable

2014-02-27 Thread Anish Khanzode (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anish Khanzode updated KAFKA-1041:
--

Attachment: KAFKA-1041-patch.diff

Is this patch good enough to fix this?




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (KAFKA-1041) Number of file handles increases indefinitely in producer if broker host is unresolvable

2014-02-27 Thread Anish Khanzode (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anish Khanzode updated KAFKA-1041:
--

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)