[jira] [Created] (KAFKA-4555) Using Hamcrest for easy intent expression in tests

2016-12-18 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-4555:
--

 Summary: Using Hamcrest for easy intent expression in tests
 Key: KAFKA-4555
 URL: https://issues.apache.org/jira/browse/KAFKA-4555
 Project: Kafka
  Issue Type: Test
  Components: unit tests
Affects Versions: 0.9.0.1
Reporter: Rekha Joshi
Assignee: Rekha Joshi
Priority: Minor


Using Hamcrest for easy intent expression in tests
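For illustration (not from the ticket): the appeal of Hamcrest is that assertThat(actual, matcher) reads as a statement of intent and yields descriptive failure messages. A minimal, dependency-free sketch of the matcher idea follows; all names here are hypothetical, not the real Hamcrest API.

```java
import java.util.Arrays;
import java.util.List;

public class MatcherSketch {
    // A matcher pairs a predicate with a human-readable description.
    public interface Matcher<T> {
        boolean matches(T actual);
        String describe();
    }

    public static <T> Matcher<T> is(T expected) {
        return new Matcher<T>() {
            public boolean matches(T actual) { return expected.equals(actual); }
            public String describe() { return "is " + expected; }
        };
    }

    public static Matcher<List<?>> hasSize(int size) {
        return new Matcher<List<?>>() {
            public boolean matches(List<?> actual) { return actual.size() == size; }
            public String describe() { return "has size " + size; }
        };
    }

    // On failure, the report states the expectation in words.
    public static <T> void assertThat(T actual, Matcher<T> matcher) {
        if (!matcher.matches(actual)) {
            throw new AssertionError("Expected: " + matcher.describe() + " but was: " + actual);
        }
    }

    public static void main(String[] args) {
        assertThat("kafka", is("kafka"));                     // reads as intent
        assertThat(Arrays.asList("a", "b", "c"), hasSize(3));
        System.out.println("all matcher assertions passed");
    }
}
```

With the real Hamcrest library the equivalents are org.hamcrest.MatcherAssert.assertThat together with matchers such as org.hamcrest.Matchers.is and hasSize.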



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4500) Kafka Code Improvements

2016-12-07 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-4500:
---
Summary: Kafka Code Improvements  (was: Code Improvements)

> Kafka Code Improvements
> ---
>
> Key: KAFKA-4500
> URL: https://issues.apache.org/jira/browse/KAFKA-4500
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.2
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
>
> Code Corrections on clients module





[jira] [Updated] (KAFKA-4500) Code Improvements

2016-12-07 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-4500:
---
Priority: Minor  (was: Trivial)

> Code Improvements
> -
>
> Key: KAFKA-4500
> URL: https://issues.apache.org/jira/browse/KAFKA-4500
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.2
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
>
> Code Corrections on clients module





[jira] [Updated] (KAFKA-4500) Code Improvements

2016-12-07 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-4500:
---
Summary: Code Improvements  (was: Code Corrections)

> Code Improvements
> -
>
> Key: KAFKA-4500
> URL: https://issues.apache.org/jira/browse/KAFKA-4500
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.2
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Trivial
>
> Code Corrections on clients module





[jira] [Updated] (KAFKA-3796) SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk

2016-12-07 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-3796:
---
  Priority: Trivial  (was: Major)
Issue Type: Improvement  (was: Bug)

> SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk
> --
>
> Key: KAFKA-3796
> URL: https://issues.apache.org/jira/browse/KAFKA-3796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, security
>Affects Versions: 0.10.0.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Trivial
>
> org.apache.kafka.common.network.SslTransportLayerTest > testEndpointIdentificationDisabled FAILED
> java.net.BindException: Can't assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:48)
> at org.apache.kafka.common.network.SslTransportLayerTest.testEndpointIdentificationDisabled(SslTransportLayerTest.java:120)





[jira] [Created] (KAFKA-4500) Code Corrections

2016-12-06 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-4500:
--

 Summary: Code Corrections
 Key: KAFKA-4500
 URL: https://issues.apache.org/jira/browse/KAFKA-4500
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.9.0.2
Reporter: Rekha Joshi
Assignee: Rekha Joshi
Priority: Trivial


Code Corrections on clients module





[jira] [Updated] (KAFKA-3893) Kafka Broker ID disappears from /brokers/ids

2016-11-29 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-3893:
---
Summary: Kafka Broker ID disappears from /brokers/ids  (was: Kafka Broker 
ID disappears from /borkers/ids)

> Kafka Broker ID disappears from /brokers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the ZooKeeper path 
> /brokers/ids just disappears.
> We see the ZooKeeper connection active and no network issue.
> The ZooKeeper connection timeout is set to 6000ms in server.properties.
> Hence Kafka stops participating in the cluster.





[jira] [Updated] (KAFKA-3893) Kafka Broker ID disappears from /borkers/ids

2016-11-29 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-3893:
---
Summary: Kafka Broker ID disappears from /borkers/ids  (was: Kafka Borker 
ID disappears from /borkers/ids)

> Kafka Broker ID disappears from /borkers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the ZooKeeper path 
> /brokers/ids just disappears.
> We see the ZooKeeper connection active and no network issue.
> The ZooKeeper connection timeout is set to 6000ms in server.properties.
> Hence Kafka stops participating in the cluster.





[jira] [Comment Edited] (KAFKA-3893) Kafka Borker ID disappears from /borkers/ids

2016-11-29 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616914#comment-15616914
 ] 

Rekha Joshi edited comment on KAFKA-3893 at 11/30/16 2:58 AM:
--

We saw this too on Kafka 0.8.x, and after upgrading to Kafka 0.9.x we continued 
seeing the issue, especially in AWS. It occurred particularly when the zookeeper 
connect property was configured with the ZooKeeper ELB instead of an explicit 
set of IP:port entries. When we switched zookeeper connect to individual 
ZooKeeper IP:port entries instead of the ELB, it worked great.
It would be great (and would help others) if the Apache Kafka/Confluent FAQ 
could be updated to note this.
Thanks
Rekha



was (Author: rekhajoshm):
We saw this too on Kafka 0.8.x, and after upgrading to Kafka 0.9.x we continued 
seeing the issue. We are going with workarounds, but it would be great (and 
would help others) if this were documented clearly in the Apache Kafka/Confluent FAQ.
Thanks
Rekha


> Kafka Borker ID disappears from /borkers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the ZooKeeper path 
> /brokers/ids just disappears.
> We see the ZooKeeper connection active and no network issue.
> The ZooKeeper connection timeout is set to 6000ms in server.properties.
> Hence Kafka stops participating in the cluster.





[jira] [Commented] (KAFKA-3893) Kafka Borker ID disappears from /borkers/ids

2016-10-28 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616914#comment-15616914
 ] 

Rekha Joshi commented on KAFKA-3893:


We saw this too on Kafka 0.8.x, and after upgrading to Kafka 0.9.x we continued 
seeing the issue. We are going with workarounds, but it would be great (and 
would help others) if this were documented clearly in the Apache Kafka/Confluent FAQ.
Thanks
Rekha


> Kafka Borker ID disappears from /borkers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the ZooKeeper path 
> /brokers/ids just disappears.
> We see the ZooKeeper connection active and no network issue.
> The ZooKeeper connection timeout is set to 6000ms in server.properties.
> Hence Kafka stops participating in the cluster.





[jira] [Updated] (KAFKA-4014) Use Collections.singletonList instead of Arrays.asList for single arguments

2016-08-02 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-4014:
---
Summary: Use Collections.singletonList instead of Arrays.asList for single 
arguments  (was: Use Collections.singletonList instead of Arrays.asList for 
unmodified single arguments)

> Use Collections.singletonList instead of Arrays.asList for single arguments
> ---
>
> Key: KAFKA-4014
> URL: https://issues.apache.org/jira/browse/KAFKA-4014
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
>
> Using Collections.singletonList instead of Arrays.asList for single 
> arguments is better.





[jira] [Updated] (KAFKA-4014) Use Collections.singletonList instead of Arrays.asList for single arguments

2016-08-02 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-4014:
---
Description: Usage of Collections.singletonList instead of Arrays.asList 
for single arguments better for performance  (was: Usage of 
Collections.singletonList instead of Arrays.asList for single arguments better)

> Use Collections.singletonList instead of Arrays.asList for single arguments
> ---
>
> Key: KAFKA-4014
> URL: https://issues.apache.org/jira/browse/KAFKA-4014
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
>
> Using Collections.singletonList instead of Arrays.asList for single 
> arguments is better for performance.
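As a sketch of the reasoning (illustrative, not ticket code): Arrays.asList allocates and wraps a varargs array even for one element, while Collections.singletonList returns one small immutable object, so for a single unmodified argument the latter is marginally cheaper and clearer in intent.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SingletonListDemo {
    public static void main(String[] args) {
        // Wraps a freshly allocated String[1] behind the list.
        List<String> viaArrays = Arrays.asList("my-topic");
        // One small immutable object, no backing array needed.
        List<String> viaSingleton = Collections.singletonList("my-topic");
        // Both are equal as lists; only the allocation profile differs.
        System.out.println(viaArrays.equals(viaSingleton)); // prints "true"
    }
}
```

Both lists are unmodifiable in the relevant sense (Arrays.asList is fixed-size), so call sites passing a single element through unchanged lose nothing by the swap.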





[jira] [Created] (KAFKA-4014) Use Collections.singletonList instead of Arrays.asList for unmodified single arguments

2016-08-02 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-4014:
--

 Summary: Use Collections.singletonList instead of Arrays.asList 
for unmodified single arguments
 Key: KAFKA-4014
 URL: https://issues.apache.org/jira/browse/KAFKA-4014
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.1
Reporter: Rekha Joshi
Priority: Minor


Using Collections.singletonList instead of Arrays.asList for single 
arguments is better.





[jira] [Updated] (KAFKA-4014) Use Collections.singletonList instead of Arrays.asList for unmodified single arguments

2016-08-02 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-4014:
---
Issue Type: Improvement  (was: Bug)

> Use Collections.singletonList instead of Arrays.asList for unmodified single 
> arguments
> --
>
> Key: KAFKA-4014
> URL: https://issues.apache.org/jira/browse/KAFKA-4014
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Rekha Joshi
>Priority: Minor
>
> Using Collections.singletonList instead of Arrays.asList for single 
> arguments is better.





[jira] [Assigned] (KAFKA-4014) Use Collections.singletonList instead of Arrays.asList for unmodified single arguments

2016-08-02 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-4014:
--

Assignee: Rekha Joshi

> Use Collections.singletonList instead of Arrays.asList for unmodified single 
> arguments
> --
>
> Key: KAFKA-4014
> URL: https://issues.apache.org/jira/browse/KAFKA-4014
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
>
> Using Collections.singletonList instead of Arrays.asList for single 
> arguments is better.





[jira] [Assigned] (KAFKA-4010) ConfigDef.toRst() should create sections for each group

2016-08-02 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-4010:
--

Assignee: Rekha Joshi

> ConfigDef.toRst() should create sections for each group
> ---
>
> Key: KAFKA-4010
> URL: https://issues.apache.org/jira/browse/KAFKA-4010
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Shikhar Bhushan
>Assignee: Rekha Joshi
>Priority: Minor
>
> Currently the ordering seems a bit arbitrary. There is a logical grouping 
> that connectors are now able to specify with the 'group' field, which we 
> should use as section headers. Also it would be good to generate {{:ref:}} 
> for each section.





[jira] [Assigned] (KAFKA-3905) remove null from subscribed topics in KafkaConsumer#subscribe

2016-06-27 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-3905:
--

Assignee: Rekha Joshi

> remove null from subscribed topics  in KafkaConsumer#subscribe
> --
>
> Key: KAFKA-3905
> URL: https://issues.apache.org/jira/browse/KAFKA-3905
> Project: Kafka
>  Issue Type: Wish
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Xing Huang
>Assignee: Rekha Joshi
>Priority: Minor
>
> Currently, KafkaConsumer's subscribe methods accept a Collection<String> of 
> topics to be subscribed, but a Collection may have null as an element. For 
> example:
> {code}
> String topic = null;
> Collection<String> topics = Arrays.asList(topic);
> consumer.subscribe(topics);
> {code}
> When this happens, consumer will throw a puzzling NullPointerException:
> {code}
>   at org.apache.kafka.common.utils.Utils.utf8Length(Utils.java:245)
>   at org.apache.kafka.common.protocol.types.Type$6.sizeOf(Type.java:248)
>   at 
> org.apache.kafka.common.protocol.types.ArrayOf.sizeOf(ArrayOf.java:85)
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:89)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:244)
>   at 
> org.apache.kafka.common.requests.RequestSend.serialize(RequestSend.java:35)
> at org.apache.kafka.common.requests.RequestSend.<init>(RequestSend.java:29)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.request(NetworkClient.java:616)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:639)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:552)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:258)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:179)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:970)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:934)
> {code}
> Maybe it's better to remove null when doing subscription.
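A defensive sketch of the suggestion (the helper below is hypothetical, not KafkaConsumer code): filtering out nulls before calling subscribe avoids the opaque NullPointerException raised later during request serialization.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class SubscriptionSanitizer {
    // Hypothetical helper: drop null entries before handing topics to
    // consumer.subscribe(...), so the failure surfaces at the call site
    // (or not at all) instead of deep inside protocol serialization.
    public static List<String> withoutNulls(Collection<String> topics) {
        List<String> clean = new ArrayList<>();
        for (String t : topics) {
            if (t != null) {
                clean.add(t);
            }
        }
        return clean;
    }

    public static void main(String[] args) {
        String topic = null;
        Collection<String> topics = Arrays.asList(topic, "events");
        System.out.println(withoutNulls(topics)); // prints "[events]"
    }
}
```

An alternative design, arguably friendlier to callers, is to throw an IllegalArgumentException naming the offending element rather than silently dropping it.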





[jira] [Assigned] (KAFKA-3844) Sort configuration items in log

2016-06-27 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-3844:
--

Assignee: Rekha Joshi

> Sort configuration items in log
> ---
>
> Key: KAFKA-3844
> URL: https://issues.apache.org/jira/browse/KAFKA-3844
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Xing Huang
>Assignee: Rekha Joshi
>Priority: Trivial
>
> Currently, the output of 
> org.apache.kafka.common.config.AbstractConfig#logAll() is unsorted, so it's 
> not convenient to check related configurations. The configuration items in 
> log could be sorted, so that related items be adjacent. For example, all 
> "log.*" configuration items would be adjacent.
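A sketch of the suggested fix (the map shape is assumed; this is not the actual AbstractConfig code): iterating a TreeMap copy of the config values yields lexicographically sorted output, so related items such as "log.*" end up adjacent.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class SortedConfigLog {
    // Copying into a TreeMap sorts keys lexicographically, so related
    // configuration items ("log.*", "socket.*", ...) come out adjacent.
    public static String logAllSorted(Map<String, Object> values) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : new TreeMap<>(values).entrySet()) {
            sb.append('\t').append(e.getKey()).append(" = ").append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("log.retention.hours", 168);
        config.put("broker.id", 0);
        config.put("log.dirs", "/tmp/kafka-logs");
        System.out.print(logAllSorted(config)); // broker.id first, then the log.* pair
    }
}
```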





[jira] [Commented] (KAFKA-3796) SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk

2016-06-07 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319236#comment-15319236
 ] 

Rekha Joshi commented on KAFKA-3796:


Strangely, I was not able to reproduce this the next day, either with the same 
build or with the latest trunk build. So it is of course inconsistent; maybe it 
had something to do with my machine at the time. Anyhow, I will keep it open for 
a couple of days, and if I never see it again I will close it then. Thanks [~ijuma]

> SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk
> --
>
> Key: KAFKA-3796
> URL: https://issues.apache.org/jira/browse/KAFKA-3796
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, security
>Affects Versions: 0.10.0.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>
> org.apache.kafka.common.network.SslTransportLayerTest > testEndpointIdentificationDisabled FAILED
> java.net.BindException: Can't assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:48)
> at org.apache.kafka.common.network.SslTransportLayerTest.testEndpointIdentificationDisabled(SslTransportLayerTest.java:120)





[jira] [Assigned] (KAFKA-3061) Get rid of Guava dependency

2016-06-07 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-3061:
--

Assignee: Rekha Joshi

> Get rid of Guava dependency
> ---
>
> Key: KAFKA-3061
> URL: https://issues.apache.org/jira/browse/KAFKA-3061
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Rekha Joshi
>
> KAFKA-2422 adds Reflections library to KafkaConnect, which depends on Guava.
> Since lots of people want to use Guavas, having it in the framework will lead 
> to conflicts.





[jira] [Created] (KAFKA-3796) SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk

2016-06-06 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-3796:
--

 Summary: SslTransportLayerTest.testInvalidEndpointIdentification 
fails on trunk
 Key: KAFKA-3796
 URL: https://issues.apache.org/jira/browse/KAFKA-3796
 Project: Kafka
  Issue Type: Bug
  Components: clients, security
Affects Versions: 0.10.0.0
Reporter: Rekha Joshi
Assignee: Rekha Joshi


org.apache.kafka.common.network.SslTransportLayerTest > testEndpointIdentificationDisabled FAILED
java.net.BindException: Can't assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:48)
at org.apache.kafka.common.network.SslTransportLayerTest.testEndpointIdentificationDisabled(SslTransportLayerTest.java:120)





[jira] [Assigned] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2016-06-04 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-724:
-

Assignee: Rekha Joshi

> Allow automatic socket.send.buffer from operating system
> 
>
> Key: KAFKA-724
> URL: https://issues.apache.org/jira/browse/KAFKA-724
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Pablo Barrera
>Assignee: Rekha Joshi
>  Labels: newbie
>
> To do this, don't call socket().setXXXBufferSize(). This can be 
> controlled by a configuration parameter: if socket.send.buffer (or the 
> other buffer settings) is set to -1, skip the socket().setXXXBufferSize() call.
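A sketch of the proposed convention (the helper and constant names are hypothetical): the explicit setter is skipped when the configured size is -1, leaving the operating system's automatic buffer sizing in effect.

```java
import java.net.Socket;
import java.net.SocketException;

public class SocketBufferConfig {
    // Sentinel meaning "let the OS pick the buffer size" (assumed convention).
    public static final int USE_OS_DEFAULT = -1;

    // Only call the explicit setter when a concrete size was configured;
    // otherwise the kernel's automatic sizing stays in effect.
    public static void applySendBuffer(Socket socket, int configuredBytes) throws SocketException {
        if (configuredBytes != USE_OS_DEFAULT) {
            socket.setSendBufferSize(configuredBytes);
        }
    }

    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            applySendBuffer(s, USE_OS_DEFAULT); // OS default retained
            System.out.println("default send buffer: " + s.getSendBufferSize() + " bytes");
            applySendBuffer(s, 64 * 1024);      // explicit override requested
            System.out.println("explicit send buffer: " + s.getSendBufferSize() + " bytes");
        }
    }
}
```

Note that the kernel may clamp or round a requested size, so getSendBufferSize() after an explicit set is a hint outcome, not a guarantee of the exact value.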





[jira] [Created] (KAFKA-3771) Improving Kafka code

2016-05-30 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-3771:
--

 Summary: Improving Kafka code
 Key: KAFKA-3771
 URL: https://issues.apache.org/jira/browse/KAFKA-3771
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.10.0.0
Reporter: Rekha Joshi
Assignee: Rekha Joshi


Improve Kafka core code:

Remove the redundant val modifier on case class constructor parameters
Use flatMap instead of map followed by flatten
Use isEmpty, nonEmpty and isDefined where appropriate
Use head, keys and keySet where appropriate
Use contains, diff and find where appropriate
Call toString consistently without parentheses, since it takes no parameters and has no side effects
Remove unnecessary return statements, semicolons and parentheses





[jira] [Created] (KAFKA-3765) Code style issues in Kafka

2016-05-27 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-3765:
--

 Summary: Code style issues in Kafka
 Key: KAFKA-3765
 URL: https://issues.apache.org/jira/browse/KAFKA-3765
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.10.0.0
Reporter: Rekha Joshi
Assignee: Rekha Joshi
Priority: Minor


Code style issues in Kafka trunk





[jira] [Created] (KAFKA-3756) javadoc has issues with incorrect @param, @throws, @return

2016-05-25 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-3756:
--

 Summary: javadoc has issues with incorrect @param, @throws, 
@return 
 Key: KAFKA-3756
 URL: https://issues.apache.org/jira/browse/KAFKA-3756
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Affects Versions: 0.8.2.0
Reporter: Rekha Joshi
Assignee: Rekha Joshi
Priority: Minor


javadoc has issues with incorrect @param, @throws, @return 





[jira] [Comment Edited] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149068#comment-15149068
 ] 

Rekha Joshi edited comment on KAFKA-3238 at 2/17/16 3:46 AM:
-

Anyone home? :) This is similar to KAFKA-2048, but in my investigation I did 
not see the IllegalMonitorStateException in the logs. The trace here suggests 
the underlying fetcher thread is stopped. Thankfully the logging issue is 
corrected: KAFKA-1891 is fixed in Kafka 0.9. Maybe, with MirrorMaker refactored 
in 0.9 and a few other fixes, Kafka 0.9 is one option? However, that requires 
an overhaul of a production system at my end, so I am wondering if there are 
any other suggestions.
Would be great to have your inputs! Thanks [~nehanarkhede] [~junrao] [~jkreps]


was (Author: rekhajoshm):
Anyone home? :) 
This is similar to KAFKA-2048, but in my investigation I did not see the 
IllegalMonitorStateException in the logs. The underlying fetcher thread appears 
to be completely stopped; the trace substantiates that. It is not clearly 
mentioned, but was KAFKA-2048 fixed in 0.9?
Would be great to have your inputs! Thanks [~nehanarkhede] [~junrao] [~jkreps]

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>
> Hi,
> We have been seeing a consistent issue with mirroring between our data 
> centers, happening randomly. Below are the details.
> Thanks
> Rekha
> {code}
> Source: AWS (13 Brokers)
> Destination: OTHER-DC (20 Brokers)
> Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
> Connectivity: AWS Direct Connect (max 6Gbps)
> Data details: Source is receiving 40,000 msg/sec, each message is around
> 5KB
> Mirroring
> 
> Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
> JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35
> Launch script: kafka.tools.MirrorMaker --consumer.config
> consumer.properties --producer.config producer.properties --num.producers
> 1 --whitelist mirrortest --num.streams 1 --queue.size 10
> consumer.properties
> ---
> zookeeper.connect=
> group.id=KafkaMirror
> auto.offset.reset=smallest
> fetch.message.max.bytes=900
> zookeeper.connection.timeout.ms=6
> rebalance.max.retries=4
> rebalance.backoff.ms=5000
> producer.properties
> --
> metadata.broker.list=
> partitioner.class=
> producer.type=async
> When we start the mirroring job, everything works fine as expected. 
> Eventually we hit an issue where the job stops consuming.
> At this stage:
> 1. No Error seen in the mirrormaker logs
> 2. consumer threads are not fetching any messages and we see thread dumps
> as follows:
> "ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
> at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Locked ownable synchronizers:
> - locked 

[jira] [Comment Edited] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149068#comment-15149068
 ] 

Rekha Joshi edited comment on KAFKA-3238 at 2/16/16 9:29 PM:
-

Anyone home? :) 
This is similar to KAFKA-2048, but in my investigation I did not see the 
IllegalMonitorStateException in the logs. The underlying fetcher thread appears 
to be completely stopped; the trace substantiates that. It is not clearly 
mentioned, but was KAFKA-2048 fixed in 0.9?
Would be great to have your inputs! Thanks [~nehanarkhede] [~junrao] [~jkreps]


was (Author: rekhajoshm):
anyone home? :) would be great to have your inputs! thanks [~nehanarkhede] 
[~junrao] [~jkreps]

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>
> Hi,
> We have been seeing a consistent issue with mirroring between our data 
> centers, happening randomly. Below are the details.
> Thanks
> Rekha
> {code}
> Source: AWS (13 Brokers)
> Destination: OTHER-DC (20 Brokers)
> Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
> Connectivity: AWS Direct Connect (max 6Gbps)
> Data details: Source is receiving 40,000 msg/sec, each message is around
> 5KB
> Mirroring
> 
> Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
> JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35
> Launch script: kafka.tools.MirrorMaker --consumer.config
> consumer.properties --producer.config producer.properties --num.producers
> 1 --whitelist mirrortest --num.streams 1 --queue.size 10
> consumer.properties
> ---
> zookeeper.connect=
> group.id=KafkaMirror
> auto.offset.reset=smallest
> fetch.message.max.bytes=900
> zookeeper.connection.timeout.ms=6
> rebalance.max.retries=4
> rebalance.backoff.ms=5000
> producer.properties
> --
> metadata.broker.list=
> partitioner.class=
> producer.type=async
> When we start the mirroring job, everything works fine as expected. 
> Eventually we hit an issue where the job stops consuming.
> At this stage:
> 1. No Error seen in the mirrormaker logs
> 2. consumer threads are not fetching any messages and we see thread dumps
> as follows:
> "ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
> at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Locked ownable synchronizers:
> - locked <199dc92d> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> 3. Producer stops producing, in trace mode we notice it's handling 0
> events and Thread dump as follows:
> "ProducerSendThread--0" - Thread t@53
>   java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
> at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
> at sun.nio.ch.IOUtil.write(IOUtil.java:148)
> at 
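The WAITING dump above shows the fetcher parked inside `LinkedBlockingQueue.put`: the bounded in-memory channel between the consumer fetcher and the producer (sized by `--queue.size`) is full, and nothing drains it because the producer send thread is itself stuck writing to the socket. A minimal, self-contained sketch of that stall pattern (hypothetical class and thread names, not MirrorMaker's actual code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueStallDemo {
    // Returns the state of a "fetcher" thread once the bounded channel is
    // full and nothing drains it: the thread parks inside put(), i.e. WAITING,
    // matching the ConsumerFetcherThread dump above.
    public static Thread.State stalledFetcherState() throws InterruptedException {
        // Small bound, analogous to MirrorMaker's --queue.size
        LinkedBlockingQueue<String> channel = new LinkedBlockingQueue<>(2);

        Thread fetcher = new Thread(() -> {
            try {
                for (int i = 0; ; i++) {
                    channel.put("msg-" + i); // blocks once the queue holds 2 items
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // unblock for shutdown
            }
        }, "fetcher");
        fetcher.start();

        // Give the fetcher time to fill the queue and park in put().
        TimeUnit.MILLISECONDS.sleep(300);
        Thread.State state = fetcher.getState();

        fetcher.interrupt();
        fetcher.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("fetcher state: " + stalledFetcherState());
    }
}
```

If the drain side never resumes, the fetcher stays parked indefinitely, which is consistent with the job "stopping consuming" with no errors in the logs.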

[jira] [Commented] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149068#comment-15149068
 ] 

Rekha Joshi commented on KAFKA-3238:


Anyone home? :) Would be great to have your inputs! Thanks [~nehanarkhede] 
[~junrao] [~jkreps]

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>

[jira] [Updated] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-3238:
---
Priority: Critical  (was: Blocker)

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>

[jira] [Comment Edited] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145786#comment-15145786
 ] 

Rekha Joshi edited comment on KAFKA-3238 at 2/13/16 7:29 AM:
-

Hi,
 [~nehanarkhede] [~junrao] [~jkreps]:
I have seen a few similar deadlock-related issues on Kafka (KAFKA-914, 
KAFKA-702) and a few open issues in a similar vein. Looking forward to hearing 
your analysis/thoughts on this MirrorMaker issue soon. Thanks!
Thanks
Rekha


was (Author: rekhajoshm):
Hi,
 [~nehanarkhede] [~junrao] [~jkreps] :
I have seen few similar deadlock related issues on kafka - KAFKA-914 , 
KAFKA-702 and was wondering if it relates to datastructures used/locking 
pattern/JDK related. For eg: http://bugs.java.com/view_bug.do?bug_id=7011862 
Please let me know your analysis/thoughts on the mirrormaker issue here? thanks!
Thanks
Rekha

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>

[jira] [Updated] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-3238:
---
Priority: Blocker  (was: Critical)

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Blocker
>

[jira] [Comment Edited] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145786#comment-15145786
 ] 

Rekha Joshi edited comment on KAFKA-3238 at 2/13/16 4:24 AM:
-

Hi,
 [~nehanarkhede] [~junrao] [~jkreps]:
I have seen a few similar deadlock-related issues on Kafka (KAFKA-914, 
KAFKA-702) and was wondering whether this relates to the data structures used, 
the locking pattern, or the JDK. For example: 
http://bugs.java.com/view_bug.do?bug_id=7011862 
Please let me know your analysis/thoughts on the MirrorMaker issue here. Thanks!
Thanks
Rekha


was (Author: rekhajoshm):
Hi,
 [~nehanarkhede] [~junrao] [~jkreps] :
I have seen few deadlock related issues on kafka - KAFKA-914 , KAFKA-702 and 
was wondering if it relates to datastructures used/locking pattern/JDK related. 
For eg: http://bugs.java.com/view_bug.do?bug_id=7011862 Please let me know your 
analysis/thoughts on the mirrormaker issue here? thanks!
Thanks
Rekha

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>

[jira] [Commented] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145786#comment-15145786
 ] 

Rekha Joshi commented on KAFKA-3238:


Hi,
 [~nehanarkhede] [~junrao] [~jkreps]:
I have seen a few deadlock-related issues on Kafka (KAFKA-914, KAFKA-702) and 
was wondering whether this relates to the data structures used, the locking 
pattern, or the JDK. For example: 
http://bugs.java.com/view_bug.do?bug_id=7011862 
Please let me know your analysis/thoughts on the MirrorMaker issue here. Thanks!
Thanks
Rekha

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>

[jira] [Comment Edited] (KAFKA-914) Deadlock between initial rebalance and watcher-triggered rebalances

2016-02-12 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145519#comment-15145519
 ] 

Rekha Joshi edited comment on KAFKA-914 at 2/12/16 11:16 PM:
-

Hi,

We have been seeing a consistent issue with mirroring between our data 
centers, and the same issue seems to resurface. Below are the details. Is this 
concern really resolved?

Thanks
Rekha

{code}
Source: AWS (13 Brokers)
Destination: OTHER-DC (20 Brokers)
Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
Connectivity: AWS Direct Connect (max 6Gbps)
Data details: Source is receiving 40,000 msg/sec, each message is around
5KB

Mirroring


Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
Launch script: kafka.tools.MirrorMaker --consumer.config
consumer.properties --producer.config producer.properties --num.producers
1 --whitelist mirrortest --num.streams 1 --queue.size 10

consumer.properties
---
zookeeper.connect=
group.id=KafkaMirror
auto.offset.reset=smallest
fetch.message.max.bytes=900
zookeeper.connection.timeout.ms=6
rebalance.max.retries=4
rebalance.backoff.ms=5000

producer.properties
--
metadata.broker.list=
partitioner.class=
producer.type=async
When we start the mirroring job everything works fine as expected.
Eventually we hit an issue where the job stops consuming.
At this stage:

1. No Error seen in the mirrormaker logs

2. consumer threads are not fetching any messages and we see thread dumps
as follows:

"ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

Locked ownable synchronizers:
- locked <199dc92d> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

3. Producer stops producing; in trace mode we notice it is handling 0 events. Thread dump as follows:

"ProducerSendThread--0" - Thread t@53
  java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.write(IOUtil.java:148)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
- locked <5ae2fc40> (a java.lang.Object)
at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
- locked <8489cd8> (a java.lang.Object)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at
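Taken together, the two dumps describe a bounded-queue stall: the fetcher thread is parked in LinkedBlockingQueue.put() because the in-memory hand-off queue (bounded by --queue.size) is full, and nothing drains it while the producer send thread is wedged in a blocking socket write. A minimal, self-contained Java sketch of that failure mode (class and thread names are hypothetical, not Kafka code):

```java
import java.util.concurrent.LinkedBlockingQueue;

// Sketch (hypothetical names, not Kafka code): a bounded queue fills up and
// the filling thread parks indefinitely in put() because the draining side
// has stopped making progress.
public class QueueStallSketch {
    public static void main(String[] args) throws InterruptedException {
        // Small capacity, standing in for MirrorMaker's --queue.size bound.
        LinkedBlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(2);

        Thread fetcher = new Thread(() -> {
            try {
                while (true) {
                    queue.put(new byte[5 * 1024]); // blocks once the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // allow shutdown
            }
        }, "ConsumerFetcherThread-sketch");

        // Nothing ever takes from the queue here, standing in for a producer
        // send thread stuck in a blocking socket write.
        fetcher.start();
        Thread.sleep(500);
        System.out.println(fetcher.getState()); // typically WAITING, parked in put()
        fetcher.interrupt();                    // unstick so the JVM can exit
    }
}
```

The fetcher's state settles in WAITING, matching the ConsumerFetcherThread dump above; put() has no timeout, so the stall persists until the draining side recovers or the process is restarted.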

[jira] [Commented] (KAFKA-914) Deadlock between initial rebalance and watcher-triggered rebalances

2016-02-12 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145519#comment-15145519
 ] 

Rekha Joshi commented on KAFKA-914:
---

Hi,

We have been seeing a consistent issue mirroring between our data centers, and the 
same issue seems to resurface.

Below are the setup details.


Source: AWS (13 Brokers)
Destination: OTHER-DC (20 Brokers)
Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
Connectivity: AWS Direct Connect (max 6Gbps)
Data details: Source is receiving 40,000 msg/sec, each message is around
5KB

Mirroring


Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
Launch script: kafka.tools.MirrorMaker --consumer.config
consumer.properties --producer.config producer.properties --num.producers
1 --whitelist mirrortest --num.streams 1 --queue.size 10

consumer.properties
---
zookeeper.connect=
group.id=KafkaMirror
auto.offset.reset=smallest
fetch.message.max.bytes=900
zookeeper.connection.timeout.ms=6
rebalance.max.retries=4
rebalance.backoff.ms=5000

producer.properties
--
metadata.broker.list=
partitioner.class=
producer.type=async
When we start the mirroring job everything works fine as expected.
Eventually we hit an issue where the job stops consuming.
At this stage:

{code}
1. No Error seen in the mirrormaker logs

2. consumer threads are not fetching any messages and we see thread dumps
as follows:

"ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

Locked ownable synchronizers:
- locked <199dc92d> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

3. Producer stops producing; in trace mode we notice it is handling 0 events. Thread dump as follows:

"ProducerSendThread--0" - Thread t@53
  java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.write(IOUtil.java:148)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
- locked <5ae2fc40> (a java.lang.Object)
at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
- locked <8489cd8> (a java.lang.Object)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at

[jira] [Updated] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-3238:
---
Priority: Critical  (was: Major)

> Deadlock Mirrormaker consumer not fetching any messages
> ---
>
> Key: KAFKA-3238
> URL: https://issues.apache.org/jira/browse/KAFKA-3238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Rekha Joshi
>Priority: Critical
>
> Hi,
> We have been seeing a consistent issue mirroring between our data centers, 
> happening randomly. Below are the details.
> Thanks
> Rekha
> {code}
> Source: AWS (13 Brokers)
> Destination: OTHER-DC (20 Brokers)
> Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
> Connectivity: AWS Direct Connect (max 6Gbps)
> Data details: Source is receiving 40,000 msg/sec, each message is around
> 5KB
> Mirroring
> 
> Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
> JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35
> Launch script: kafka.tools.MirrorMaker --consumer.config
> consumer.properties --producer.config producer.properties --num.producers
> 1 --whitelist mirrortest --num.streams 1 --queue.size 10
> consumer.properties
> ---
> zookeeper.connect=
> group.id=KafkaMirror
> auto.offset.reset=smallest
> fetch.message.max.bytes=900
> zookeeper.connection.timeout.ms=6
> rebalance.max.retries=4
> rebalance.backoff.ms=5000
> producer.properties
> --
> metadata.broker.list=
> partitioner.class=
> producer.type=async
> When we start the mirroring job everything works fine as expected.
> Eventually we hit an issue where the job stops consuming.
> At this stage:
> 1. No Error seen in the mirrormaker logs
> 2. consumer threads are not fetching any messages and we see thread dumps
> as follows:
> "ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
> at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
> at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
> at kafka.utils.Utils$.inLock(Utils.scala:535)
> at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> Locked ownable synchronizers:
> - locked <199dc92d> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> 3. Producer stops producing; in trace mode we notice it is handling 0 events. Thread dump as follows:
> "ProducerSendThread--0" - Thread t@53
>   java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
> at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
> at sun.nio.ch.IOUtil.write(IOUtil.java:148)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
> - locked <5ae2fc40> (a java.lang.Object)
> at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
> at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
> at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
> at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
> at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
> at
> 

[jira] [Comment Edited] (KAFKA-914) Deadlock between initial rebalance and watcher-triggered rebalances

2016-02-12 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15145519#comment-15145519
 ] 

Rekha Joshi edited comment on KAFKA-914 at 2/13/16 1:20 AM:


Hi,
Facing a similar issue; raised in https://issues.apache.org/jira/browse/KAFKA-3238
Thanks
Rekha



was (Author: rekhajoshm):
Hi,

We have been seeing a consistent issue mirroring between our data centers, and the 
same issue seems to resurface. Below are the details. Is this concern really 
resolved? 

Thanks
Rekha

{code}
Source: AWS (13 Brokers)
Destination: OTHER-DC (20 Brokers)
Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
Connectivity: AWS Direct Connect (max 6Gbps)
Data details: Source is receiving 40,000 msg/sec, each message is around
5KB

Mirroring


Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
Launch script: kafka.tools.MirrorMaker --consumer.config
consumer.properties --producer.config producer.properties --num.producers
1 --whitelist mirrortest --num.streams 1 --queue.size 10

consumer.properties
---
zookeeper.connect=
group.id=KafkaMirror
auto.offset.reset=smallest
fetch.message.max.bytes=900
zookeeper.connection.timeout.ms=6
rebalance.max.retries=4
rebalance.backoff.ms=5000

producer.properties
--
metadata.broker.list=
partitioner.class=
producer.type=async
When we start the mirroring job everything works fine as expected.
Eventually we hit an issue where the job stops consuming.
At this stage:

1. No Error seen in the mirrormaker logs

2. consumer threads are not fetching any messages and we see thread dumps
as follows:

"ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

Locked ownable synchronizers:
- locked <199dc92d> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

3. Producer stops producing; in trace mode we notice it is handling 0 events. Thread dump as follows:

"ProducerSendThread--0" - Thread t@53
  java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.write(IOUtil.java:148)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
- locked <5ae2fc40> (a java.lang.Object)
at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
- locked <8489cd8> (a java.lang.Object)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at

[jira] [Created] (KAFKA-3238) Deadlock Mirrormaker consumer not fetching any messages

2016-02-12 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-3238:
--

 Summary: Deadlock Mirrormaker consumer not fetching any messages
 Key: KAFKA-3238
 URL: https://issues.apache.org/jira/browse/KAFKA-3238
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Rekha Joshi


Hi,

We have been seeing a consistent issue mirroring between our data centers, 
happening randomly. Below are the details.

Thanks
Rekha

{code}
Source: AWS (13 Brokers)
Destination: OTHER-DC (20 Brokers)
Topic: mirrortest (source: 260 partitions, destination: 200 partitions)
Connectivity: AWS Direct Connect (max 6Gbps)
Data details: Source is receiving 40,000 msg/sec, each message is around
5KB

Mirroring


Node: is at our OTHER-DC (24 Cores, 48 GB Memory)
JVM Details: 1.8.0_51, -Xmx8G -Xms8G -XX:PermSize=96m -XX:MaxPermSize=96m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
Launch script: kafka.tools.MirrorMaker --consumer.config
consumer.properties --producer.config producer.properties --num.producers
1 --whitelist mirrortest --num.streams 1 --queue.size 10

consumer.properties
---
zookeeper.connect=
group.id=KafkaMirror
auto.offset.reset=smallest
fetch.message.max.bytes=900
zookeeper.connection.timeout.ms=6
rebalance.max.retries=4
rebalance.backoff.ms=5000

producer.properties
--
metadata.broker.list=
partitioner.class=
producer.type=async
When we start the mirroring job everything works fine as expected.
Eventually we hit an issue where the job stops consuming.
At this stage:

1. No Error seen in the mirrormaker logs

2. consumer threads are not fetching any messages and we see thread dumps
as follows:

"ConsumerFetcherThread-KafkaMirror_1454375262613-7b760d87-0-26" - Thread t@73
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <79b6d3ce> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:128)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:109)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:109)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:108)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

Locked ownable synchronizers:
- locked <199dc92d> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

3. Producer stops producing; in trace mode we notice it is handling 0 events. Thread dump as follows:

"ProducerSendThread--0" - Thread t@53
  java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.write(IOUtil.java:148)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
- locked <5ae2fc40> (a java.lang.Object)
at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
- locked <8489cd8> (a java.lang.Object)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at 

[jira] [Resolved] (KAFKA-1284) Received -1 when reading from channel, socket has likely been closed.

2015-04-27 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi resolved KAFKA-1284.

Resolution: Cannot Reproduce

 Received -1 when reading from channel, socket has likely been closed.
 -

 Key: KAFKA-1284
 URL: https://issues.apache.org/jira/browse/KAFKA-1284
 Project: Kafka
  Issue Type: Bug
Reporter: darion yaphets
Assignee: Rekha Joshi
Priority: Trivial

 I use Kafka 0.7.2 as a Storm (0.8.1) data source, and storm-kafka may be a 
 useful spout to feed data into Storm, 
 but Storm reports Received -1 when reading from channel, socket has likely 
 been closed. at the Storm spout in open(). It's confusing; could anyone help 
 me? Thanks a lot ~~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1284) Received -1 when reading from channel, socket has likely been closed.

2015-04-27 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14515069#comment-14515069
 ] 

Rekha Joshi commented on KAFKA-1284:


Hi [~darion]
The -1 error will be thrown only if the channel is closed or has reached 
end-of-stream - 
http://docs.oracle.com/javase/7/docs/api/java/nio/channels/ReadableByteChannel.html
Please check that your configurations are correct, that nimbus and zk are good at 
the Storm end, and that there are no firewall issues. I use later versions of 
Kafka and Storm for a real-time data ingestion pipeline and they work fine. Could 
you provide the complete stack trace after confirming all the configs/processes 
are good? Maybe it is an older-versions issue. It might also help you to check 
https://github.com/miguno/kafka-storm-starter Thanks.
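For reference, the end-of-stream contract described above can be seen in a few lines of Java (a self-contained sketch against the standard NIO API, not Storm or Kafka code): read() keeps returning byte counts until the channel is exhausted, then returns -1.

```java
import java.io.ByteArrayInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

// Sketch of the ReadableByteChannel contract: read() returns -1 only at
// end-of-stream, e.g. when the peer has closed the connection.
public class EndOfStreamSketch {
    public static void main(String[] args) throws Exception {
        // An in-memory channel standing in for a socket that the peer closes.
        ReadableByteChannel ch = Channels.newChannel(new ByteArrayInputStream(new byte[4]));
        ByteBuffer buf = ByteBuffer.allocate(8);
        int n;
        while ((n = ch.read(buf)) != -1) {
            buf.clear(); // keep reading until the channel reports end-of-stream
        }
        System.out.println(n); // -1 once the stream is exhausted
    }
}
```

So a spout that sees -1 in open() is reading from a connection the broker side has already closed, which is why the advice above focuses on configuration and connectivity rather than the read call itself.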

 Received -1 when reading from channel, socket has likely been closed.
 -

 Key: KAFKA-1284
 URL: https://issues.apache.org/jira/browse/KAFKA-1284
 Project: Kafka
  Issue Type: Bug
Reporter: darion yaphets
Assignee: Rekha Joshi
Priority: Trivial

 I use Kafka 0.7.2 as a Storm (0.8.1) data source, and storm-kafka may be a 
 useful spout to feed data into Storm, 
 but Storm reports Received -1 when reading from channel, socket has likely 
 been closed. at the Storm spout in open(). It's confusing; could anyone help 
 me? Thanks a lot ~~





[jira] [Work started] (KAFKA-1284) Received -1 when reading from channel, socket has likely been closed.

2015-04-27 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1284 started by Rekha Joshi.
--
 Received -1 when reading from channel, socket has likely been closed.
 -

 Key: KAFKA-1284
 URL: https://issues.apache.org/jira/browse/KAFKA-1284
 Project: Kafka
  Issue Type: Bug
Reporter: darion yaphets
Assignee: Rekha Joshi
Priority: Trivial

 I use Kafka 0.7.2 as a Storm (0.8.1) data source, and storm-kafka may be a 
 useful spout to feed data into Storm, 
 but Storm reports Received -1 when reading from channel, socket has likely 
 been closed. at the Storm spout in open(). It's confusing; could anyone help 
 me? Thanks a lot ~~





[jira] [Commented] (KAFKA-1621) Standardize --messages option in perf scripts

2015-04-27 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14515003#comment-14515003
 ] 

Rekha Joshi commented on KAFKA-1621:


No worries [~nehanarkhede]. Thanks for your review. PerfConfig was found already 
updated in trunk for REQUIRED; the rest was applied on pull #58. Thanks.

 Standardize --messages option in perf scripts
 -

 Key: KAFKA-1621
 URL: https://issues.apache.org/jira/browse/KAFKA-1621
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Jay Kreps
  Labels: newbie

 This option is specified in PerfConfig and is used by the producer, consumer 
 and simple consumer perf commands. The docstring on the argument does not 
 list it as required, but the producer performance test requires it and the 
 others don't.
 We should standardize this so that either all the commands require the option 
 and it is marked as required in the docstring or none of them list it as 
 required.





[jira] [Assigned] (KAFKA-1284) Received -1 when reading from channel, socket has likely been closed.

2015-04-27 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-1284:
--

Assignee: Rekha Joshi

 Received -1 when reading from channel, socket has likely been closed.
 -

 Key: KAFKA-1284
 URL: https://issues.apache.org/jira/browse/KAFKA-1284
 Project: Kafka
  Issue Type: Bug
Reporter: darion yaphets
Assignee: Rekha Joshi
Priority: Trivial

 I use Kafka 0.7.2 as a Storm (0.8.1) data source, and storm-kafka may be a 
 useful spout to feed data into Storm, 
 but Storm reports Received -1 when reading from channel, socket has likely 
 been closed. at the Storm spout in open(). It's confusing; could anyone help 
 me? Thanks a lot ~~





[jira] [Assigned] (KAFKA-1508) Scripts Break When Path Has Spaces

2015-04-27 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-1508:
--

Assignee: Rekha Joshi

 Scripts Break When Path Has Spaces
 --

 Key: KAFKA-1508
 URL: https://issues.apache.org/jira/browse/KAFKA-1508
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.8.1.1
 Environment: Any *nix flavor where the full path name to the Kafka 
 deployment contains spaces
Reporter: Tim Olson
Assignee: Rekha Joshi
Priority: Minor

 All the shell scripts in {{bin}} use the idiom
 {{$(dirname $0)}}
 but this produces the error
 {{usage: dirname path}}
 if the path contains spaces.  The correct way to get the dirname is to use:
 {{$(dirname "$0")}}
 and subsequently wrap the result in quotes when it is used.  For example, the 
 file {{bin/kafka-run-class.sh}} should look like this starting line 23:
 {code}
 # BUGFIX: quotes added
 base_dir="$(dirname "$0")/.."
 # create logs directory
 # BUGFIX: quotes added
 LOG_DIR="$base_dir/logs"
 if [ ! -d "$LOG_DIR" ]; then
 # BUGFIX: quotes added around $LOG_DIR
 mkdir "$LOG_DIR"
 fi
 # ...
 # BUGFIX: quotes added
 for file in "$base_dir"/core/build/dependant-libs-${SCALA_VERSION}/*.jar;
 do
   CLASSPATH=$CLASSPATH:$file
 done
 {code}





[jira] [Commented] (KAFKA-2098) Gradle Wrapper Jar gone missing in 0.8.2.1

2015-04-16 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14497644#comment-14497644
 ] 

Rekha Joshi commented on KAFKA-2098:


Thanks for the prompt reply [~jkreps]. What works for you works for me :-) But 
just out of curiosity, can you please share a link to this Apache requirement? 
Samza is also a top-level Apache project and has 
https://github.com/apache/samza/tree/master/gradle/wrapper 
Thanks!

 Gradle Wrapper Jar gone missing in 0.8.2.1
 --

 Key: KAFKA-2098
 URL: https://issues.apache.org/jira/browse/KAFKA-2098
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.2.1
Reporter: Rekha Joshi
Assignee: Rekha Joshi

 ./gradlew idea
 Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain
 This was working in 0.8.2. Attaching patch. Thanks





[jira] [Assigned] (KAFKA-2098) Gradle Wrapper Jar gone missing in 0.8.2.1

2015-04-15 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi reassigned KAFKA-2098:
--

Assignee: Rekha Joshi

 Gradle Wrapper Jar gone missing in 0.8.2.1
 --

 Key: KAFKA-2098
 URL: https://issues.apache.org/jira/browse/KAFKA-2098
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.2.1
Reporter: Rekha Joshi
Assignee: Rekha Joshi

 ./gradlew idea
 Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain
 This was working in 0.8.2. Attaching a patch. Thanks.





[jira] [Commented] (KAFKA-2098) Gradle Wrapper Jar gone missing in 0.8.2.1

2015-04-15 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14497350#comment-14497350
 ] 

Rekha Joshi commented on KAFKA-2098:


Hi. I agree, [~alexcb], people all over the world are losing time on it :), though 
I also understand where [~gwenshap] is coming from.

From the users' point of view it is better to have it in, especially as many 
standard open-source projects based on Gradle ship the wrapper; one that comes to 
mind immediately is Samza: 
https://github.com/apache/samza/tree/master/gradle/wrapper

Anyhow, I will let the Kafka committers and [~jkreps] decide what works best.

Thanks
Rekha

 Gradle Wrapper Jar gone missing in 0.8.2.1
 --

 Key: KAFKA-2098
 URL: https://issues.apache.org/jira/browse/KAFKA-2098
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.2.1
Reporter: Rekha Joshi
Assignee: Rekha Joshi

 ./gradlew idea
 Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain
 This was working in 0.8.2. Attaching a patch. Thanks.





[jira] [Commented] (KAFKA-1621) Standardize --messages option in perf scripts

2015-04-06 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481865#comment-14481865
 ] 

Rekha Joshi commented on KAFKA-1621:


Hi. I missed updating this earlier. Everything else was set up as described in 
https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review 
but the jira client could not be installed (Mac 10.9.5, Python 2.7.5, pip 
1.5.6), hence I could not update the review board.
Suggestion: why not review directly on git, as the rest of the Apache projects do?

python kafka-patch-review.py --help
Traceback (most recent call last):
  File "kafka-patch-review.py", line 10, in <module>
    from jira.client import JIRA
ImportError: No module named jira.client

pip install jira-python
Downloading/unpacking jira-python
  Could not find any downloads that satisfy the requirement jira-python
Cleaning up...
No distributions at all found for jira-python
Storing debug log for failure in /usr/local/.pip/pip.log

sudo easy_install jira-python
Searching for jira-python
Reading http://pypi.python.org/simple/jira-python/
Couldn't find index page for 'jira-python' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for jira-python
error: Could not find suitable distribution for Requirement.parse('jira-python')

I think it could be related to security enabled on pip, but I would prefer not 
to downgrade pip, as it is used for my other projects as well.

 Standardize --messages option in perf scripts
 -

 Key: KAFKA-1621
 URL: https://issues.apache.org/jira/browse/KAFKA-1621
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Jay Kreps
  Labels: newbie

 This option is specified in PerfConfig and is used by the producer, consumer 
 and simple consumer perf commands. The docstring on the argument does not 
 list it as required but the producer performance test requires it--others 
 don't.
 We should standardize this so that either all the commands require the option 
 and it is marked as required in the docstring or none of them list it as 
 required.





[jira] [Created] (KAFKA-2098) Gradle Wrapper Jar gone missing in 0.8.2.1

2015-04-06 Thread Rekha Joshi (JIRA)
Rekha Joshi created KAFKA-2098:
--

 Summary: Gradle Wrapper Jar gone missing in 0.8.2.1
 Key: KAFKA-2098
 URL: https://issues.apache.org/jira/browse/KAFKA-2098
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.2.1
Reporter: Rekha Joshi


./gradlew idea
Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain

This was working in 0.8.2. Attaching a patch. Thanks.





[jira] [Commented] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2015-03-05 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349493#comment-14349493
 ] 

Rekha Joshi commented on KAFKA-1995:


Great.Thanks for your reply, [~ewencp] [~joestein] !

 JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
 hit Kafka)
 

 Key: KAFKA-1995
 URL: https://issues.apache.org/jira/browse/KAFKA-1995
 Project: Kafka
  Issue Type: Wish
  Components: core
Affects Versions: 0.8.3
Reporter: Rekha Joshi

 Kafka is a great alternative to JMS, providing a high-performance, 
 high-throughput, scalable, distributed pub-sub/commit-log service.
 However, there will always be traditional systems running on JMS.
 Rather than rewriting them, it would be great if we had an inbuilt 
 JMSAdaptor/JMSProxy/JMSBridge by which a client can speak JMS but hit Kafka 
 behind the scenes.
 Something like Chukwa's o.a.h.chukwa.datacollection.adaptor.jms.JMSAdaptor, 
 which receives messages off a JMS queue and transforms them into a Chukwa chunk?
 I have come across folks talking of this need in the past as well. Is it 
 considered and/or part of the roadmap?
 http://grokbase.com/t/kafka/users/131cst8xpv/stomp-binding-for-kafka
 http://grokbase.com/t/kafka/users/148dm4247q/consuming-messages-from-kafka-and-pushing-on-to-a-jms-queue
 http://grokbase.com/t/kafka/users/143hjepbn2/request-kafka-zookeeper-jms-details
 Looking for inputs on the correct way to approach this so as to retain all the 
 good features of Kafka while not rewriting the entire application. Possible?





[jira] [Commented] (KAFKA-1995) JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but hit Kafka)

2015-03-01 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14342819#comment-14342819
 ] 

Rekha Joshi commented on KAFKA-1995:


[~jkreps] [~nehanarkhede] [~junrao] [~joestein] it would be great to know your 
thoughts on it.thanks.

 JMS to Kafka: Inbuilt JMSAdaptor/JMSProxy/JMSBridge (Client can speak JMS but 
 hit Kafka)
 

 Key: KAFKA-1995
 URL: https://issues.apache.org/jira/browse/KAFKA-1995
 Project: Kafka
  Issue Type: Wish
  Components: core
Affects Versions: 0.8.3
Reporter: Rekha Joshi

 Kafka is a great alternative to JMS, providing a high-performance, 
 high-throughput, scalable, distributed pub-sub/commit-log service.
 However, there will always be traditional systems running on JMS.
 Rather than rewriting them, it would be great if we had an inbuilt 
 JMSAdaptor/JMSProxy/JMSBridge by which a client can speak JMS but hit Kafka 
 behind the scenes.
 Something like Chukwa's o.a.h.chukwa.datacollection.adaptor.jms.JMSAdaptor, 
 which receives messages off a JMS queue and transforms them into a Chukwa chunk?
 I have come across folks talking of this need in the past as well. Is it 
 considered and/or part of the roadmap?
 http://grokbase.com/t/kafka/users/131cst8xpv/stomp-binding-for-kafka
 http://grokbase.com/t/kafka/users/148dm4247q/consuming-messages-from-kafka-and-pushing-on-to-a-jms-queue
 http://grokbase.com/t/kafka/users/143hjepbn2/request-kafka-zookeeper-jms-details
 Looking for inputs on the correct way to approach this so as to retain all the 
 good features of Kafka while not rewriting the entire application. Possible?





[jira] [Updated] (KAFKA-1621) Standardize --messages option in perf scripts

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-1621:
---
Reviewer: Jay Kreps
  Status: Patch Available  (was: Open)

https://github.com/apache/kafka/pull/46

 Standardize --messages option in perf scripts
 -

 Key: KAFKA-1621
 URL: https://issues.apache.org/jira/browse/KAFKA-1621
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Jay Kreps
  Labels: newbie

 This option is specified in PerfConfig and is used by the producer, consumer 
 and simple consumer perf commands. The docstring on the argument does not 
 list it as required but the producer performance test requires it--others 
 don't.
 We should standardize this so that either all the commands require the option 
 and it is marked as required in the docstring or none of them list it as 
 required.





[jira] [Updated] (KAFKA-269) ./system_test/producer_perf/bin/run-test.sh without --async flag does not run

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-269:
--
 Reviewer: Jay Kreps
Affects Version/s: 0.8.1.1
   Status: Patch Available  (was: Open)

[~praveen27] 
By default, if no --sync option is provided, the producer runs in async mode.

I ran run-test.sh without the --async option and it works on 0.8.2. Or maybe you 
can check your zk/topic/reporting-interval?
start producing 200 messages ...
start.time, end.time, compression, message.size, batch.size, 
total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/rjoshi2/Documents/code/kafka-fork/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/Users/rjoshi2/Documents/code/kafka-fork/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-22 17:35:18:471, 2015-02-22 17:35:24:331, 0, 200, 200, 381.47, 65.0972, 
200, 341296.9283
wait for data to be persisted

Hence the patch: AFAIU, '--async' is not a recognized option. Only --sync is.
Thanks.
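As an aside, the async-by-default behaviour is easy to mirror in a standalone sketch (the flag handling and variable names below are assumed for illustration; this is not the actual run-test.sh code):

```shell
#!/bin/sh
# Sketch: the producer runs async unless --sync is passed;
# any other flag (e.g. --async) is rejected as unrecognized.
SYNC_FLAG=""
for arg in "$@"; do
  case "$arg" in
    --sync) SYNC_FLAG="--sync" ;;                      # switch to sync mode
    *) echo "unknown option: $arg" >&2; exit 1 ;;      # --async would land here
  esac
done
echo "producer mode: ${SYNC_FLAG:-async (default)}"
```

Run with no arguments it reports the async default; run with `--sync` it reports sync mode; `--async` fails, matching the behaviour described in the comment above.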

 ./system_test/producer_perf/bin/run-test.sh  without --async flag does not run
 --

 Key: KAFKA-269
 URL: https://issues.apache.org/jira/browse/KAFKA-269
 Project: Kafka
  Issue Type: Bug
  Components: clients, core
Affects Versions: 0.8.1.1, 0.7
 Environment: Linux 2.6.18-238.1.1.el5 , x86_64 x86_64 x86_64 GNU/Linux
 ext3 file system with raid10 
Reporter: Praveen Ramachandra
  Labels: newbie, performance

 When I run the tests without the --async option, the tests don't produce even a 
 single message. 
 The following defaults were changed in server.properties:
 num.threads=tried with 8, 10, 100
 num.partitions=10





[jira] [Updated] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-1545:
---
 Reviewer: Guozhang Wang
Affects Version/s: 0.8.1.1
   Status: Patch Available  (was: Open)

 java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
 some irregular hostnames
 ---

 Key: KAFKA-1545
 URL: https://issues.apache.org/jira/browse/KAFKA-1545
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1
Reporter: Guozhang Wang
Assignee: Guozhang Wang
  Labels: newbie
 Fix For: 0.9.0


 For example:
 kafka.server.LogOffsetTest  testGetOffsetsForUnknownTopic FAILED
 java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
 servname provided, or not known
 at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
 at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
 at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
 at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
 at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
 Caused by:
 java.net.UnknownHostException: guwang-mn2: nodename nor servname 
 provided, or not known
 at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
 at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
 at 
 java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
 at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
 ... 5 more





[jira] [Updated] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-724:
--
 Reviewer: Jay Kreps
Affects Version/s: 0.8.2.0
   Status: Patch Available  (was: Open)

 Allow automatic socket.send.buffer from operating system
 

 Key: KAFKA-724
 URL: https://issues.apache.org/jira/browse/KAFKA-724
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2.0
Reporter: Pablo Barrera
  Labels: newbie

 To do this, don't call socket().setXXXBufferSize(). This can be 
 controlled by a configuration parameter: if socket.send.buffer (or one of the 
 other buffer settings) is set to -1, skip the setXXXBufferSize() call.


