Re: [jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li

Jun Rao,
Thanks for taking the time to review the patch. I'd like to confirm that
this is what you suggested:

1. Utils.abs will be renamed to Utils.toPositive.
2. Change the partition methods of DefaultPartitioner and
ByteArrayPartitioner so that toPositive is called instead:

def partition(key: Any, numPartitions: Int): Int = {
  Utils.abs(key.hashCode) % numPartitions
}


I can certainly make the above changes. However, the abs method is used in
many other places, and, making matters worse, there are two abs methods, one
in core and one in clients. I found this problem while working on
KAFKA-1926. For now, what if I keep the method in clients, remove the one
in the core Utils module, and change all other modules that call abs so
that every use points to the clients Utils.abs instead of two locations?
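For readers following along: the abs in question clears the sign bit rather than negating, which is the behavior the JIRA describes. A minimal sketch (illustrative, not the exact Kafka source):

```java
public class AbsDemo {
    // sketch of the mask-based "abs" under discussion (later renamed toPositive)
    public static int maskAbs(int n) {
        return n & 0x7fffffff; // clears the sign bit; does not negate
    }

    public static void main(String[] args) {
        System.out.println(maskAbs(-5));                 // 2147483643, not 5
        System.out.println(Math.abs(Integer.MIN_VALUE)); // -2147483648, still negative
        System.out.println(maskAbs(Integer.MIN_VALUE));  // 0, safely non-negative
    }
}
```

Although misleading as an "abs", the mask never returns a negative value, which is exactly what a partitioner needs; hence the rename rather than a behavior change.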

Thanks.

Tong Li
OpenStack & Kafka Community Development
Building 501/B205
liton...@us.ibm.com

"Jun Rao (JIRA)"  wrote on 02/27/2015 12:43:04 AM:

> From: "Jun Rao (JIRA)" 
> To: dev@kafka.apache.org
> Date: 02/27/2015 12:43 AM
> Subject: [jira] [Commented] (KAFKA-1988)
> org.apache.kafka.common.utils.Utils.abs method returns wrong value
> for negative numbers.
>
>
> [ https://issues.apache.org/jira/browse/KAFKA-1988?
> page=com.atlassian.jira.plugin.system.issuetabpanels:comment-
> tabpanel&focusedCommentId=14339751#comment-14339751 ]
>
> Jun Rao commented on KAFKA-1988:
> 
>
> Actually, we can probably keep the same logic in determining the
> partition from key in Partitioner. We can move the current code in
> abs() into Partitioner and give it a new name like toPositive().
> Then we can replace Utils.abs() with toPositive() in the
> following code in Partitioner.
>
> // hash the key to choose a partition
> return Utils.abs(Utils.murmur2(key)) % numPartitions;
>
> This way, we will preserve the key distribution of existing users of
> the new producer.
>
> Tong,
>
> Do you want to submit a new patch based on that?
>
>
>
> > org.apache.kafka.common.utils.Utils.abs method returns wrong value
> for negative numbers.
> >
>


> >
> > Key: KAFKA-1988
> > URL: https://issues.apache.org/jira/browse/KAFKA-1988
> > Project: Kafka
> >  Issue Type: Bug
> >Affects Versions: 0.8.2.0
> >Reporter: Tong Li
> >Assignee: Tong Li
> >Priority: Blocker
> > Fix For: 0.8.2.1
> >
> > Attachments: KAFKA-1988.patch
> >
> >
> > org.apache.kafka.common.utils.Utils.abs method returns wrong value
> for negative numbers. The method only returns the intended value for
> positive numbers. All negative numbers except Integer.MIN_VALUE will be
> returned as an unsigned integer.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>

Review Request 31533: Patch for KAFKA-1988

2015-02-27 Thread Tong Li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31533/
---

Review request for kafka.


Bugs: KAFKA-1988
https://issues.apache.org/jira/browse/KAFKA-1988


Repository: kafka


Description
---

KAFKA-1988 abs method is not returning right values for negative numbers


Diffs
-

  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
69530c187cd1c41b8173b61de6f982aafe65c9fe 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
4c2ea34815b63174732d58b699e1a0a9e6ec3b6f 
  core/src/main/scala/kafka/log/OffsetMap.scala 
42cdfbb6100b5c89d86144f92f661ebd844b2132 
  core/src/main/scala/kafka/producer/ByteArrayPartitioner.scala 
6a3b02e414eb7d62cfaece3344245feac54cecda 
  core/src/main/scala/kafka/producer/DefaultPartitioner.scala 
3afb22eeb4e3b8ecf49e92bd167f2f67b8f6a961 
  core/src/main/scala/kafka/producer/async/DefaultEventHandler.scala 
821901e4f434dfd9eec6eceabfc2e1e65507a57c 
  core/src/main/scala/kafka/server/AbstractFetcherManager.scala 
20c00cb8cc2351950edbc8cb1752905a0c26e79f 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
5374280dc97dc8e01e9b3ba61fd036dc13ae48cb 
  core/src/main/scala/kafka/utils/Utils.scala 
738c1af9ef5de16fdf5130daab69757a14c48b5c 
  core/src/test/scala/unit/kafka/utils/UtilsTest.scala 
8c3797a964a2760a00ed60530d1a48e3da473769 

Diff: https://reviews.apache.org/r/31533/diff/


Testing
---


Thanks,

Tong Li



[jira] [Updated] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tong Li updated KAFKA-1988:
---
Attachment: KAFKA-1988.patch

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE will be returned 
> as an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339921#comment-14339921
 ] 

Tong Li commented on KAFKA-1988:


Created reviewboard https://reviews.apache.org/r/31533/diff/
 against branch origin/trunk

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE will be returned 
> as an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339924#comment-14339924
 ] 

Tong Li commented on KAFKA-1988:


Jun Rao,
Thanks for taking the time to review the patch. I'd like to confirm that
this is what you suggested:

1. Utils.abs will be renamed to Utils.toPositive.
2. Change the partition methods of DefaultPartitioner and
ByteArrayPartitioner so that toPositive is called instead:

def partition(key: Any, numPartitions: Int): Int = {
  Utils.abs(key.hashCode) % numPartitions
}


I can certainly make the above changes. However, the abs method is used in
many other places, and, making matters worse, there are two abs methods, one
in core and one in clients. I found this problem while working on
KAFKA-1926. For now, what if I keep the method in clients, remove the one
in the core Utils module, and change all other modules that call abs so
that every use points to the clients Utils.abs instead of two locations?

The patch set that reflects the above thoughts is here: 
https://reviews.apache.org/r/31533/diff/

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE will be returned 
> as an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1460) NoReplicaOnlineException: No replica for partition

2015-02-27 Thread Po Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338303#comment-14338303
 ] 

Po Zhou edited comment on KAFKA-1460 at 2/27/15 11:22 AM:
--

I occasionally encountered the issue with the following error log:

ERROR Controller 4 epoch 61 initiated state change for partition 
[.x.xx.x,1] from OfflinePartition to OnlinePartition failed 
(state.change.logger)
kafka.common.NoReplicaOnlineException: No replica for partition 
[.x.xx.x,1] is alive. Live brokers are: [Set(1)], Assigned 
replicas are: [List(2, 3)]
at 
kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
at 
kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:357)
at 
kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:206)
at 
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at 
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
at scala.collection.Iterator$class.foreach(Iterator.scala:772)
at 
scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
at 
kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at 
kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:446)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:373)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:35
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at kafka.utils.Utils$.inLock(Utils.scala:535)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)

My Kafka is deployed on 4 servers and boots with the parameters "--partitions 4 
--replication-factor 2". The issue happens in "OfflinePartitionLeaderSelector". 
According to the following source code, "If no broker in the assigned replica 
list is alive, it throws NoReplicaOnlineException", and the more detailed cause 
is "liveAssignedReplicasToThisPartition.isEmpty". But how to avoid or resolve 
the exception remains unknown, and the issue remains "Unresolved".

https://apache.googlesource.com/kafka/+/855340a2e65ffbb79520c49d0b9a231b94acd538/core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala
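The condition behind the exception can be sketched as a simple intersection of the assigned replica list with the live broker set (broker ids taken from the log above; this is an illustration, not the controller's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReplicaCheck {
    // mirrors liveAssignedReplicasToThisPartition: assigned replicas that are alive
    public static List<Integer> liveAssigned(List<Integer> assigned, Set<Integer> liveBrokers) {
        List<Integer> result = new ArrayList<>();
        for (int broker : assigned) {
            if (liveBrokers.contains(broker)) {
                result.add(broker);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // "Live brokers are: [Set(1)], Assigned replicas are: [List(2, 3)]"
        List<Integer> live = liveAssigned(Arrays.asList(2, 3), new HashSet<>(Arrays.asList(1)));
        // an empty result is what makes the controller throw NoReplicaOnlineException
        System.out.println(live.isEmpty()); // true
    }
}
```

So the exception clears only when at least one broker from the assigned replica list (2 or 3 here) comes back up, or the partition is reassigned.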



[jira] [Commented] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-02-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340162#comment-14340162
 ] 

Sriharsha Chintalapani commented on KAFKA-1866:
---

[~nehanarkhede] I updated the patch; can you please take a look? Thanks.

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch, KAFKA-1866_2015-02-10_22:50:09.patch, 
> KAFKA-1866_2015-02-11_09:25:33.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.
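The failure mode described above is simply head-of-an-empty-list; a minimal sketch of the unsafe gauge read and a defensive variant (hypothetical method names, not the actual patch):

```java
import java.util.Collections;
import java.util.List;
import java.util.NoSuchElementException;

public class GaugeDemo {
    // logSegments.head.baseOffset-style access: throws after the log is deleted
    public static long unsafeStartOffset(List<Long> baseOffsets) {
        return baseOffsets.iterator().next(); // NoSuchElementException when empty
    }

    // a defensive variant that a custom MetricsReporter would prefer
    public static long safeStartOffset(List<Long> baseOffsets) {
        return baseOffsets.isEmpty() ? -1L : baseOffsets.get(0);
    }

    public static void main(String[] args) {
        System.out.println(safeStartOffset(Collections.emptyList())); // -1
        try {
            unsafeStartOffset(Collections.emptyList());
        } catch (NoSuchElementException e) {
            System.out.println("value() would throw: " + e);
        }
    }
}
```

The approach taken locally (removing the gauges in delete()) avoids the problem at the source instead of guarding every read.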



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1852) OffsetCommitRequest can commit offset on unknown topic

2015-02-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340163#comment-14340163
 ] 

Sriharsha Chintalapani commented on KAFKA-1852:
---

[~jjkoshy] Updated the patch as per your suggestions; can you please take a 
look? Thanks.

> OffsetCommitRequest can commit offset on unknown topic
> --
>
> Key: KAFKA-1852
> URL: https://issues.apache.org/jira/browse/KAFKA-1852
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1852.patch, KAFKA-1852_2015-01-19_10:44:01.patch, 
> KAFKA-1852_2015-02-12_16:46:10.patch, KAFKA-1852_2015-02-16_13:21:46.patch, 
> KAFKA-1852_2015-02-18_13:13:17.patch
>
>
> Currently, we allow an offset to be committed to Kafka, even when the 
> topic/partition for the offset doesn't exist. We probably should disallow 
> that and send an error back in that case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1877) Expose version via JMX for 'new' producer

2015-02-27 Thread Rick Kellogg (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340257#comment-14340257
 ] 

Rick Kellogg commented on KAFKA-1877:
-

Please be aware that exposing version information can sometimes cause 
security-certification problems. Individuals use this information to target 
security holes. As a consequence, there should be a way to disable this 
information.

> Expose version via JMX for 'new' producer 
> --
>
> Key: KAFKA-1877
> URL: https://issues.apache.org/jira/browse/KAFKA-1877
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.8.2.0
>Reporter: Vladimir Tretyakov
>Assignee: Manikumar Reddy
> Fix For: 0.8.3
>
>
> Add version of Kafka to jmx (monitoring tool can use this info).
> Something like that
> {code}
> kafka.common:type=AppInfo,name=Version
>   Value java.lang.Object = 0.8.2-beta
> {code}
> we already have this in "core" Kafka module (see kafka.common.AppInfo object).
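Registering such a version bean via the platform MBeanServer can be sketched as follows (generic JMX, not the AppInfo source; a disable switch of the kind suggested above would simply skip the registration):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class VersionMBeanDemo {
    // standard MBean pattern: interface name = implementation name + "MBean"
    public interface VersionInfoMBean {
        String getValue();
    }

    public static class VersionInfo implements VersionInfoMBean {
        public String getValue() {
            return "0.8.2-beta";
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("kafka.common:type=AppInfo,name=Version");
        server.registerMBean(new VersionInfo(), name);
        // a monitoring tool reads the attribute back over JMX
        System.out.println(server.getAttribute(name, "Value"));
    }
}
```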



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-02-27 Thread Jiangjie Qin
Hi Jay,

I just modified the KIP. The only concern I have about this change is that
it will break existing deployments, and we need to change the command line
argument format for other tools as well. It is definitely better that we
conform to the Unix standard; I am just not sure the change is worth it,
given we have been using this argument format for a while.
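For readers of the KIP thread, the kind of producer settings such a no-data-loss default would bundle might look like the following (illustrative values only, not the KIP's exact list):

```properties
# Producer-side settings a no-data-loss MirrorMaker default might bundle (sketch)
acks=all                                  # wait for all in-sync replicas to ack
retries=2147483647                        # retry failed sends indefinitely
max.in.flight.requests.per.connection=1  # avoid reordering on retry (no pipelining)
block.on.buffer.full=true                 # block rather than drop when the buffer fills
```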

Jiangjie (Becket) Qin

On 2/26/15, 8:40 PM, "Jay Kreps"  wrote:

>Can we change the command line arguments for mm to match the command line
>arguments elsewhere. This proposal seems to have two formats:
>*--consumer.rebalance.listener*
>and
>*--abortOnSendFail*
>The '.' separators for command line options predate this JIRA but I think
>the new camelCase option is a new invention. All the other command line
>tools, as well as pretty much all of unix, use dashes like this:
>*--consumer-rebalance-listener*
>I don't really know the history of this but let's move to normal unix
>dashes across the board, as well as examine the options for any other
>inconsistencies.
>
>-Jay
>
>
>On Thu, Feb 26, 2015 at 11:57 AM, Jiangjie Qin 
>wrote:
>
>> Hi Neha,
>>
>> Thanks for the comment. That’s a really good point.
>>
>> Originally I’m thinking about allowing users to tweak some parameters as
>> needed.
>> For example, some users might want to have pipelining enabled and can
>> tolerate reordering, some users might want to use acks=1 or acks=0, some
>> might want to move forward when an error is encountered in the callback.
>> So we don’t want to enforce all the settings of no.data.loss. Meanwhile
>> we want to make life easier for the users who want no data loss, so they
>> don’t need to set the configs one by one; therefore we created this
>> option.
>>
>> But as you suggested, we can probably make the no.data.loss settings the
>> default and remove the --no.data.loss option, so if people want to tweak
>> the settings, they can just change them; otherwise they get the default
>> no-data-loss settings.
>>
>> I’ll modify the KIP.
>>
>> Thanks.
>>
>> Jiangjie (Becket) Qin
>>
>> On 2/26/15, 8:58 AM, "Neha Narkhede"  wrote:
>>
>> >Hey Becket,
>> >
>> >The KIP proposes addition of a --no.data.loss command line option to the
>> >MirrorMaker. Though when would the user not want that option? I'm
>> >wondering what the benefit of providing that option is if every user
>> >would want that for correct mirroring behavior.
>> >
>> >Other than that, the KIP looks great!
>> >
>> >Thanks,
>> >Neha
>> >
>> >On Wed, Feb 25, 2015 at 3:56 PM, Jiangjie Qin
>>
>> >wrote:
>> >
>> >> For 1), the current design allow you to do it. The customizable
>>message
>> >> handler takes in a ConsumerRecord and spit a List,
>>you
>> >>can
>> >> just put a topic for the ProducerRecord different from
>>ConsumerRecord.
>> >>
>> >> WRT performance, we did some test in LinkedIn, the performance looks
>> >>good
>> >> to us.
>> >>
>> >> Jiangjie (Becket) Qin
>> >>
>> >> On 2/25/15, 3:41 PM, "Bhavesh Mistry" 
>> >>wrote:
>> >>
>> >> >Hi Jiangjie,
>> >> >
>> >> >It might be too late.  But, I wanted to bring-up following use case
>>for
>> >> >adopting new MM:
>> >> >
>> >> >1) Ability to publish messages from a src topic to a different
>> >> >destination topic
>> >> >via --overidenTopics=srcTopic:newDestinationTopic
>> >> >
>> >> >In order to adopt the new MM enhancement, customers will compare the
>> >> >performance of the new MM and data quality while running the old MM
>> >> >against the same destination cluster in Prd.
>> >> >
>> >> >Let me know if you agree to that or not. Also, if yes, will you be
>> >> >able to provide this feature in the release version?
>> >> >
>> >> >Thanks,
>> >> >
>> >> >Bhavesh
>> >> >
>> >> >
>> >> >On Tue, Feb 24, 2015 at 5:31 PM, Jiangjie Qin
>> >>
>> >> >wrote:
>> >> >
>> >> >> Sure! Just created the voting thread :)
>> >> >>
>> >> >> On 2/24/15, 4:44 PM, "Jay Kreps"  wrote:
>> >> >>
>> >> >> >Hey Jiangjie,
>> >> >> >
>> >> >> >Let's do an official vote so that we know what we are voting on
>> >> >> >and we are crisp on what the outcome was. This thread is very
>> >> >> >long :-)
>> >> >> >
>> >> >> >-Jay
>> >> >> >
>> >> >> >On Tue, Feb 24, 2015 at 2:53 PM, Jiangjie Qin
>> >> >>
>> >> >> >wrote:
>> >> >> >
>> >> >> >> I updated the KIP page based on the discussion we had.
>> >> >> >>
>> >> >> >> Should I launch another vote or we can think of this mail
>>thread
>> >>has
>> >> >> >> already included a vote?
>> >> >> >>
>> >> >> >> Jiangjie (Becket) Qin
>> >> >> >>
>> >> >> >> On 2/11/15, 5:15 PM, "Neha Narkhede"  wrote:
>> >> >> >>
>> >> >> >> >Thanks for the explanation, Joel! Would love to see the results
>> >> >> >> >of the throughput experiment and I'm a +1 on everything else,
>> >> >> >> >including the rebalance callback and record handler.
>> >> >> >> >
>> >> >> >> >-Neha
>> >> >> >> >
>> >> >> >> >On Wed, Feb 11, 2015 at 1:13 PM Jay Kreps
>>
>> >> >>wrote:
>> >> >> >> >
>> >> >> >> >> Cool, I agree with all that.
>> >> >> >> >>
>> >> >> >> >> I agree about the need

Re: Review Request 31369: Patch for KAFKA-1982

2015-02-27 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31369/
---

(Updated Feb. 27, 2015, 7:08 p.m.)


Review request for kafka.


Bugs: KAFKA-1982
https://issues.apache.org/jira/browse/KAFKA-1982


Repository: kafka


Description
---

KAFKA-1982: change kafka.examples.Producer to use the new java producer


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java
 PRE-CREATION 
  examples/src/main/java/kafka/examples/Consumer.java 
13135b954f3078eeb7394822b0db25470b746f03 
  examples/src/main/java/kafka/examples/Producer.java 
96e98933148d07564c1b30ba8e805e2433c2adc8 

Diff: https://reviews.apache.org/r/31369/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Updated] (KAFKA-1982) change kafka.examples.Producer to use the new java producer

2015-02-27 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-1982:
--
Attachment: KAFKA-1982_2015-02-27_11:08:34.patch

> change kafka.examples.Producer to use the new java producer
> ---
>
> Key: KAFKA-1982
> URL: https://issues.apache.org/jira/browse/KAFKA-1982
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Ashish K Singh
>  Labels: newbie
> Attachments: KAFKA-1982.patch, KAFKA-1982_2015-02-24_10:34:51.patch, 
> KAFKA-1982_2015-02-24_20:45:48.patch, KAFKA-1982_2015-02-24_20:48:12.patch, 
> KAFKA-1982_2015-02-27_11:08:34.patch
>
>
> We need to change the example to use the new java producer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1982) change kafka.examples.Producer to use the new java producer

2015-02-27 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340633#comment-14340633
 ] 

Ashish K Singh commented on KAFKA-1982:
---

Updated reviewboard https://reviews.apache.org/r/31369/
 against branch trunk

> change kafka.examples.Producer to use the new java producer
> ---
>
> Key: KAFKA-1982
> URL: https://issues.apache.org/jira/browse/KAFKA-1982
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Ashish K Singh
>  Labels: newbie
> Attachments: KAFKA-1982.patch, KAFKA-1982_2015-02-24_10:34:51.patch, 
> KAFKA-1982_2015-02-24_20:45:48.patch, KAFKA-1982_2015-02-24_20:48:12.patch, 
> KAFKA-1982_2015-02-27_11:08:34.patch
>
>
> We need to change the example to use the new java producer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31369: Patch for KAFKA-1982

2015-02-27 Thread Ashish Singh


> On Feb. 26, 2015, 10:27 p.m., Jun Rao wrote:
> > examples/src/main/java/kafka/examples/Consumer.java, line 62
> > 
> >
> > It would be useful to print out the key as well.

Added


> On Feb. 26, 2015, 10:27 p.m., Jun Rao wrote:
> > examples/src/main/java/kafka/examples/Producer.java, line 53
> > 
> >
> > Perhaps we can create an IntegerSerializer and send messageNo as the 
> > key?

Done
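The suggested IntegerSerializer boils down to a 4-byte big-endian encoding of the key; a self-contained round-trip sketch (the actual classes live in the review's diff, not here):

```java
import java.nio.ByteBuffer;

public class IntSerdeDemo {
    // big-endian encoding of an int key, as an IntegerSerializer would produce
    public static byte[] serialize(int value) {
        return ByteBuffer.allocate(4).putInt(value).array();
    }

    // the matching IntegerDeserializer side
    public static int deserialize(byte[] data) {
        return ByteBuffer.wrap(data).getInt();
    }

    public static void main(String[] args) {
        byte[] bytes = serialize(42);
        System.out.println(bytes.length);       // 4
        System.out.println(deserialize(bytes)); // 42
    }
}
```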


> On Feb. 26, 2015, 10:27 p.m., Jun Rao wrote:
> > examples/src/main/java/kafka/examples/Producer.java, lines 81-88
> > 
> >
> > It's probably better to print each message in a single line and print 
> > out the key as well.

Done


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31369/#review74379
---


On Feb. 27, 2015, 7:08 p.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31369/
> ---
> 
> (Updated Feb. 27, 2015, 7:08 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1982
> https://issues.apache.org/jira/browse/KAFKA-1982
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1982: change kafka.examples.Producer to use the new java producer
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java
>  PRE-CREATION 
>   examples/src/main/java/kafka/examples/Consumer.java 
> 13135b954f3078eeb7394822b0db25470b746f03 
>   examples/src/main/java/kafka/examples/Producer.java 
> 96e98933148d07564c1b30ba8e805e2433c2adc8 
> 
> Diff: https://reviews.apache.org/r/31369/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



[jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340663#comment-14340663
 ] 

Jun Rao commented on KAFKA-1988:


Hi, Tong,

My suggestion is slightly different. My point is that if we change the behavior 
of o.a.k.c.u.Utils.abs(), it will impact the following code in 
o.a.k.c.p.i.Partitioner, since a given key will now be mapped to a different 
partition.

// hash the key to choose a partition
return Utils.abs(Utils.murmur2(key)) % numPartitions;

To preserve the current behavior, I was suggesting that we define a local 
method in o.a.k.c.p.i.Partitioner that looks like the following.

private static int toPositive(int n) {
    return n & 0x7fffffff;
}

and change the partitioning code to

// hash the key to choose a partition
return toPositive(Utils.murmur2(key)) % numPartitions;

This will preserve the partitioning behavior of the producer in o.a.k.c.p. The 
rest of the changes will be the same as your original patch. We don't have to 
change the partitioning code in DefaultEventHandler. The scala producer already 
uses a different hash function from the java producer in o.a.k.c.p, so the 
partitioning won't be consistent between the scala and the java producer anyway. 
Replacing k.u.Utils.abs() with o.a.k.c.u.Utils.abs() doesn't have to be done in 
this jira since you are already handling it in KAFKA-1926.
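Putting the pieces above together, the partition choice stays non-negative for any hash, including Integer.MIN_VALUE, where Math.abs would fail (the hash values here are illustrative, not murmur2 output):

```java
public class PartitionDemo {
    // the proposed local method in Partitioner
    private static int toPositive(int n) {
        return n & 0x7fffffff;
    }

    // choose a partition from a (possibly negative) hash of the key
    public static int partition(int hash, int numPartitions) {
        return toPositive(hash) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partition(-12345, 4));            // 3
        // Math.abs(Integer.MIN_VALUE) is still negative; toPositive maps it to 0
        System.out.println(partition(Integer.MIN_VALUE, 4)); // 0
    }
}
```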


> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE will be returned 
> as an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31369: Patch for KAFKA-1982

2015-02-27 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31369/#review74553
---


Thanks for the patch, Ashish. It's shaping up to be a very useful example. 
Two comments:

1. I think the ser/de should be part of the example and not in "common", I'm 
not sure integer ser/de is useful enough to be distributed with Kafka (although 
Jun can correct me if I got this wrong).

2. I saw a lot of discussion on the mailing list around using the new producer 
async vs. sync. This example shows the async path. Do we want to add another 
"sync" example where we do something like:
val future = producer.send(new ProducerRecord(topic,
 messageNo,
 messageStr), new DemoCallBack(startTime, messageNo, messageStr));
// this waits for send to complete
future.get

- Gwen Shapira


On Feb. 27, 2015, 7:08 p.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31369/
> ---
> 
> (Updated Feb. 27, 2015, 7:08 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1982
> https://issues.apache.org/jira/browse/KAFKA-1982
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1982: change kafka.examples.Producer to use the new java producer
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java
>  PRE-CREATION 
>   examples/src/main/java/kafka/examples/Consumer.java 
> 13135b954f3078eeb7394822b0db25470b746f03 
>   examples/src/main/java/kafka/examples/Producer.java 
> 96e98933148d07564c1b30ba8e805e2433c2adc8 
> 
> Diff: https://reviews.apache.org/r/31369/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



[jira] [Commented] (KAFKA-1400) transient unit test failure in SocketServerTest

2015-02-27 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340701#comment-14340701
 ] 

Jun Rao commented on KAFKA-1400:


Yes, that's probably the goal of this test. The checking on the socket server 
may be a bit involved though since in addition to checking the socket for the 
acceptor, we probably need to check the sockets in each of the processors. 
Testing from the client seems simpler to me.

How about this: let me check in my patch. I will resolve this jira, but not 
close it. If you have a better patch, feel free to reopen the jira.

> transient unit test failure in SocketServerTest
> ---
>
> Key: KAFKA-1400
> URL: https://issues.apache.org/jira/browse/KAFKA-1400
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1400.patch, kafka-1400.patch
>
>
> Saw the following transient failure.
> kafka.network.SocketServerTest > testSocketsCloseOnShutdown FAILED 
> java.lang.AssertionError: Expected exception: java.net.SocketException 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-330) Add delete topic support

2015-02-27 Thread Andrew Pennebaker (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340709#comment-14340709
 ] 

Andrew Pennebaker commented on KAFKA-330:
-

Could the default config please set {code}delete.topic.enable=true{code}, so 
that Kafka behaves more intuitively out of the box?
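For reference, the setting under discussion is a broker option in server.properties; in this era it defaults to false, so topic deletion requested via the admin tool is a no-op unless the operator opts in:

```properties
# server.properties -- opt in to topic deletion (default is false)
delete.topic.enable=true
```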

> Add delete topic support 
> -
>
> Key: KAFKA-330
> URL: https://issues.apache.org/jira/browse/KAFKA-330
> Project: Kafka
>  Issue Type: New Feature
>  Components: controller, log, replication
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: features, project
> Fix For: 0.8.1
>
> Attachments: KAFKA-330.patch, KAFKA-330_2014-01-28_15:19:20.patch, 
> KAFKA-330_2014-01-28_22:01:16.patch, KAFKA-330_2014-01-31_14:19:14.patch, 
> KAFKA-330_2014-01-31_17:45:25.patch, KAFKA-330_2014-02-01_11:30:32.patch, 
> KAFKA-330_2014-02-01_14:58:31.patch, KAFKA-330_2014-02-05_09:31:30.patch, 
> KAFKA-330_2014-02-06_07:48:40.patch, KAFKA-330_2014-02-06_09:42:38.patch, 
> KAFKA-330_2014-02-06_10:29:31.patch, KAFKA-330_2014-02-06_11:37:48.patch, 
> KAFKA-330_2014-02-08_11:07:37.patch, kafka-330-v1.patch, kafka-330-v2.patch
>
>
> One proposal of this API is here - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+detailed+design+V2#KafkareplicationdetaileddesignV2-Deletetopic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1400) transient unit test failure in SocketServerTest

2015-02-27 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1400:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review. Committed to trunk.

> transient unit test failure in SocketServerTest
> ---
>
> Key: KAFKA-1400
> URL: https://issues.apache.org/jira/browse/KAFKA-1400
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1400.patch, kafka-1400.patch
>
>
> Saw the following transient failure.
> kafka.network.SocketServerTest > testSocketsCloseOnShutdown FAILED 
> java.lang.AssertionError: Expected exception: java.net.SocketException 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-330) Add delete topic support

2015-02-27 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340767#comment-14340767
 ] 

Jay Kreps commented on KAFKA-330:
-

+1 on enabling by default.

> Add delete topic support 
> -
>
> Key: KAFKA-330
> URL: https://issues.apache.org/jira/browse/KAFKA-330
> Project: Kafka
>  Issue Type: New Feature
>  Components: controller, log, replication
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: features, project
> Fix For: 0.8.1
>
> Attachments: KAFKA-330.patch, KAFKA-330_2014-01-28_15:19:20.patch, 
> KAFKA-330_2014-01-28_22:01:16.patch, KAFKA-330_2014-01-31_14:19:14.patch, 
> KAFKA-330_2014-01-31_17:45:25.patch, KAFKA-330_2014-02-01_11:30:32.patch, 
> KAFKA-330_2014-02-01_14:58:31.patch, KAFKA-330_2014-02-05_09:31:30.patch, 
> KAFKA-330_2014-02-06_07:48:40.patch, KAFKA-330_2014-02-06_09:42:38.patch, 
> KAFKA-330_2014-02-06_10:29:31.patch, KAFKA-330_2014-02-06_11:37:48.patch, 
> KAFKA-330_2014-02-08_11:07:37.patch, kafka-330-v1.patch, kafka-330-v2.patch
>
>
> One proposal of this API is here - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+detailed+design+V2#KafkareplicationdetaileddesignV2-Deletetopic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-02-27 Thread Jay Kreps
Yeah it will break the existing usage but personally I think it is worth it
to be standard across all our tools.

-Jay

On Fri, Feb 27, 2015 at 9:53 AM, Jiangjie Qin 
wrote:

> Hi Jay,
>
> I just modified the KIP. The only concern I have about this change is that
> it will break existing deployments. And we need to change the command line
> arguments format for other tools as well. It is defiitely better that we
> conform to the unix standard. It is just I am not sure if the change worth
> it given we have been using this argument format for a while.
>
> Jiangjie (Becket) Qin
>
> On 2/26/15, 8:40 PM, "Jay Kreps"  wrote:
>
> >Can we change the command line arguments for mm to match the command line
> >arguments elsewhere. This proposal seems to have two formats:
> >*--consumer.rebalance.listener*
> >and
> >*--abortOnSendFail*
> >The '.' separators for command line options predate this JIRA but I think
> >the new camelCase option is a new invention. All the other command line
> >tools, as well as pretty much all of unix uses dashes like this:
> >*--consumer-rebalance-listener*
> >I don't really know the history of this but let's move it to normal unix
> >dashes across the board as well as examine the options for any other
> >inconsistencies.
> >
> >-Jay
> >
> >
> >On Thu, Feb 26, 2015 at 11:57 AM, Jiangjie Qin  >
> >wrote:
> >
> >> Hi Neha,
> >>
> >> Thanks for the comment. That’s a really good point.
> >>
> >> Originally I’m thinking about allowing user to tweak some parameter as
> >> needed.
> >> For example, some user might want to have pipeline enabled and can
> >> tolerate reordering, some user might want to use acks=1 or acks=0, some
> >> might want to move forward when error is encountered in callback.
> >> So we don’t want to enforce all the settings of no.data.loss. Meanwhile
> >> we want to make the life easier for the users who want no data loss so they
> >> don’t need to set the configs one by one, therefore we created this
> >> option.
> >>
> >> But as you suggested, we can probably make no.data.loss settings to be
> >> default and removed the --no.data.loss option, so if people want to tweak
> >> the settings, they can just change them, otherwise they get the default
> >> no-data-loss settings.
> >>
> >> I’ll modify the KIP.
> >>
> >> Thanks.
> >>
> >> Jiangjie (Becket) Qin
> >>
> >> On 2/26/15, 8:58 AM, "Neha Narkhede"  wrote:
> >>
> >> >Hey Becket,
> >> >
> >> >The KIP proposes addition of a --no.data.loss command line option to the
> >> >MirrorMaker. Though when would the user not want that option? I'm wondering
> >> >what the benefit of providing that option is if every user would want that
> >> >for correct mirroring behavior.
> >> >
> >> >Other than that, the KIP looks great!
> >> >
> >> >Thanks,
> >> >Neha
> >> >
> >> >On Wed, Feb 25, 2015 at 3:56 PM, Jiangjie Qin
> >>
> >> >wrote:
> >> >
> >> >> For 1), the current design allow you to do it. The customizable
> >>message
> >> >> handler takes in a ConsumerRecord and spit a List, you can
> >> >> just put a topic for the ProducerRecord different from ConsumerRecord.
> >> >>
> >> >> WRT performance, we did some test in LinkedIn, the performance looks
> >> >>good
> >> >> to us.
> >> >>
> >> >> Jiangjie (Becket) Qin
> >> >>
> >> >> On 2/25/15, 3:41 PM, "Bhavesh Mistry" 
> >> >>wrote:
> >> >>
> >> >> >Hi Jiangjie,
> >> >> >
> >> >> >It might be too late.  But, I wanted to bring-up following use case
> >> >> >for adopting new MM:
> >> >> >
> >> >> >1) Ability to publish message from src topic to different destination
> >> >> >topic via --overidenTopics=srcTopic:newDestinationTopic
> >> >> >
> >> >> >In order to adopt, new MM enhancement customer will compare
> >> >> >performance of new MM and data quality while running old MM against
> >> >> >same destination cluster in Prd.
> >> >> >
> >> >> >Let me know if you agree to that or not.  Also, if yes, will you be
> >> >> >able to provide this feature in the release version?
> >> >> >
> >> >> >Thanks,
> >> >> >
> >> >> >Bhavesh
> >> >> >
> >> >> >
> >> >> >On Tue, Feb 24, 2015 at 5:31 PM, Jiangjie Qin
> >> >>
> >> >> >wrote:
> >> >> >
> >> >> >> Sure! Just created the voting thread :)
> >> >> >>
> >> >> >> On 2/24/15, 4:44 PM, "Jay Kreps"  wrote:
> >> >> >>
> >> >> >> >Hey Jiangjie,
> >> >> >> >
> >> >> >> >Let's do an official vote so that we know what we are voting on
> >> >> >> >and we are crisp on what the outcome was. This thread is very long :-)
> >> >> >> >
> >> >> >> >-Jay
> >> >> >> >
> >> >> >> >On Tue, Feb 24, 2015 at 2:53 PM, Jiangjie Qin
> >> >> >>
> >> >> >> >wrote:
> >> >> >> >
> >> >> >> >> I updated the KIP page based on the discussion we had.
> >> >> >> >>
> >> >> >> >> Should I launch another vote or we can think of this mail
> >>thread
> >> >>has
> >> >> >> >> already included a vote?
> >> >> >> >>
> >> >> >> >> Jiangjie (Becket) Qin
> >> >> >> >>
> >> >> >> >> On 2/11/15, 5:15 PM, "Neha Narkhede"  wrote:
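The no-data-loss producer settings debated in this thread can be sketched as a config bundle. The values below are my reading of the discussion (acks from all in-sync replicas, no pipelining, block rather than drop), not the KIP's final list, and treating them as defaults is the proposal, not shipped behavior:

```java
import java.util.Properties;

public class NoDataLossConfig {

    // Bundle of producer settings aimed at avoiding data loss. The option
    // names are standard new-producer configs of this era; grouping them as
    // a "no data loss" default is the KIP proposal under discussion.
    public static Properties noDataLossDefaults() {
        Properties props = new Properties();
        props.setProperty("acks", "all");                                 // wait for all in-sync replicas
        props.setProperty("retries", String.valueOf(Integer.MAX_VALUE));  // keep retrying failed sends
        props.setProperty("max.in.flight.requests.per.connection", "1");  // no pipelining, so no reordering
        props.setProperty("block.on.buffer.full", "true");                // back-pressure instead of dropping
        return props;
    }

    public static void main(String[] args) {
        noDataLossDefaults().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Users who can tolerate reordering or lower acks would then override individual entries rather than flip a single --no.data.loss flag.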

Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-02-27 Thread Gwen Shapira
The biggest gap in tool standardization and MirrorMaker is the fact
that MirrorMaker takes 2 configuration files as inputs with required
parameters that can't be set on command line.

If we are breaking back-compatibility, perhaps we can standardize this part too?

On Fri, Feb 27, 2015 at 12:36 PM, Jay Kreps  wrote:
> Yeah it will break the existing usage but personally I think it is worth it
> to be standard across all our tools.
>
> -Jay
>
> On Fri, Feb 27, 2015 at 9:53 AM, Jiangjie Qin 
> wrote:
>
>> Hi Jay,
>>
>> I just modified the KIP. The only concern I have about this change is that
>> it will break existing deployments. And we need to change the command line
>> arguments format for other tools as well. It is definitely better that we
>> conform to the unix standard. It is just I am not sure if the change is worth
>> it given we have been using this argument format for a while.
>>
>> Jiangjie (Becket) Qin
>>
>> On 2/26/15, 8:40 PM, "Jay Kreps"  wrote:
>>
>> >Can we change the command line arguments for mm to match the command line
>> >arguments elsewhere. This proposal seems to have two formats:
>> >*--consumer.rebalance.listener*
>> >and
>> >*--abortOnSendFail*
>> >The '.' separators for command line options predate this JIRA but I think
>> >the new camelCase option is a new invention. All the other command line
>> >tools, as well as pretty much all of unix uses dashes like this:
>> >*--consumer-rebalance-listener*
>> >I don't really know the history of this but let's move it to normal unix
>> >dashes across the board as well as examine the options for any other
>> >inconsistencies.
>> >
>> >-Jay
>> >
>> >
>> >On Thu, Feb 26, 2015 at 11:57 AM, Jiangjie Qin > >
>> >wrote:
>> >
>> >> Hi Neha,
>> >>
>> >> Thanks for the comment. That’s a really good point.
>> >>
>> >> Originally I’m thinking about allowing user to tweak some parameter as
>> >> needed.
>> >> For example, some user might want to have pipeline enabled and can
>> >> tolerate reordering, some user might want to use acks=1 or acks=0, some
>> >> might want to move forward when error is encountered in callback.
>> >> So we don’t want to enforce all the settings of no.data.loss. Meanwhile
>> >> we want to make the life easier for the users who want no data loss so they
>> >> don’t need to set the configs one by one, therefore we created this
>> >> option.
>> >>
>> >> But as you suggested, we can probably make no.data.loss settings to be
>> >> default and removed the --no.data.loss option, so if people want to tweak
>> >> the settings, they can just change them, otherwise they get the default
>> >> no-data-loss settings.
>> >>
>> >> I’ll modify the KIP.
>> >>
>> >> Thanks.
>> >>
>> >> Jiangjie (Becket) Qin
>> >>
>> >> On 2/26/15, 8:58 AM, "Neha Narkhede"  wrote:
>> >>
>> >> >Hey Becket,
>> >> >
>> >> >The KIP proposes addition of a --no.data.loss command line option to the
>> >> >MirrorMaker. Though when would the user not want that option? I'm wondering
>> >> >what the benefit of providing that option is if every user would want that
>> >> >for correct mirroring behavior.
>> >> >
>> >> >Other than that, the KIP looks great!
>> >> >
>> >> >Thanks,
>> >> >Neha
>> >> >
>> >> >On Wed, Feb 25, 2015 at 3:56 PM, Jiangjie Qin
>> >>
>> >> >wrote:
>> >> >
>> >> >> For 1), the current design allow you to do it. The customizable
>> >>message
>> >> >> handler takes in a ConsumerRecord and spit a List, you can
>> >> >> just put a topic for the ProducerRecord different from ConsumerRecord.
>> >> >>
>> >> >> WRT performance, we did some test in LinkedIn, the performance looks
>> >> >>good
>> >> >> to us.
>> >> >>
>> >> >> Jiangjie (Becket) Qin
>> >> >>
>> >> >> On 2/25/15, 3:41 PM, "Bhavesh Mistry" 
>> >> >>wrote:
>> >> >>
>> >> >> >Hi Jiangjie,
>> >> >> >
>> >> >> >It might be too late.  But, I wanted to bring-up following use case
>> >> >> >for adopting new MM:
>> >> >> >
>> >> >> >1) Ability to publish message from src topic to different destination
>> >> >> >topic via --overidenTopics=srcTopic:newDestinationTopic
>> >> >> >
>> >> >> >In order to adopt, new MM enhancement customer will compare
>> >> >> >performance of new MM and data quality while running old MM against
>> >> >> >same destination cluster in Prd.
>> >> >> >
>> >> >> >Let me know if you agree to that or not.  Also, if yes, will you be
>> >> >> >able to provide this feature in the release version?
>> >> >> >
>> >> >> >Thanks,
>> >> >> >
>> >> >> >Bhavesh
>> >> >> >
>> >> >> >
>> >> >> >On Tue, Feb 24, 2015 at 5:31 PM, Jiangjie Qin
>> >> >>
>> >> >> >wrote:
>> >> >> >
>> >> >> >> Sure! Just created the voting thread :)
>> >> >> >>
>> >> >> >> On 2/24/15, 4:44 PM, "Jay Kreps"  wrote:
>> >> >> >>
>> >> >> >> >Hey Jiangjie,
>> >> >> >> >
>> >> >> >> >Let's do an official vote so that we know what we are voting on
>> >> >> >> >and we are crisp on what the outcome was. This thread is very long :-)
>> >> >> >> >

Re: Review Request 29912: Patch for KAFKA-1852

2015-02-27 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/#review74569
---

Ship it!


Minor locking issue noted below. I can take care of that.

This obviously does not cover the case of committing offsets to a topic that is 
currently being deleted. I think that can be done in a separate jira. Can you 
file one?


core/src/main/scala/kafka/server/KafkaServer.scala


This can just be a val



core/src/main/scala/kafka/server/MetadataCache.scala


Should probably do this inReadLock


- Joel Koshy
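The inReadLock Joel mentions is a small Scala helper in Kafka core; its shape translates to Java roughly as follows (an illustrative sketch only, not the actual CoreUtils code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

public class ReadLockSketch {
    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Run the supplied read-only action while holding the read lock, so that
    // concurrent readers proceed but writers holding the write lock exclude them.
    static <T> T inReadLock(Supplier<T> action) {
        lock.readLock().lock();
        try {
            return action.get();
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        // e.g. reading cache state that a concurrent metadata update may replace
        int size = inReadLock(() -> 3);
        System.out.println(size);
    }
}
```

Wrapping the read in a helper like this keeps the unlock in one finally block instead of scattered across call sites, which is the locking nit being flagged in the review.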


On Feb. 18, 2015, 9:13 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29912/
> ---
> 
> (Updated Feb. 18, 2015, 9:13 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1852
> https://issues.apache.org/jira/browse/KAFKA-1852
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic. Added 
> contains method to MetadataCache.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 703886a1d48e6d2271da67f8b89514a6950278dd 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 7e5ddcb9be8fcef3df6ebc82a13ef44ef95f73ae 
>   core/src/main/scala/kafka/server/MetadataCache.scala 
> 4c70aa7e0157b85de5e24736ebf487239c4571d0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 83d52643028c5628057dc0aa29819becfda61fdb 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
> 
> Diff: https://reviews.apache.org/r/29912/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



[jira] [Commented] (KAFKA-1400) transient unit test failure in SocketServerTest

2015-02-27 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340785#comment-14340785
 ] 

Gwen Shapira commented on KAFKA-1400:
-

Awesome. I'm just happy to see this fixed :)

It drove me mad, especially since I wasn't sure if it's just my multi-port work, 
or a more general problem...

> transient unit test failure in SocketServerTest
> ---
>
> Key: KAFKA-1400
> URL: https://issues.apache.org/jira/browse/KAFKA-1400
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1400.patch, kafka-1400.patch
>
>
> Saw the following transient failure.
> kafka.network.SocketServerTest > testSocketsCloseOnShutdown FAILED 
> java.lang.AssertionError: Expected exception: java.net.SocketException 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-02-27 Thread Gwen Shapira
Make sense to me.

On Fri, Feb 27, 2015 at 12:59 PM, Jiangjie Qin
 wrote:
> I think it probably needs another KIP to discuss the command line tool
> standardization because it is essentially a cross-board user interface
> change.
> For this specific KIP, I believe the scope is just to make sure we fix
> data loss issue and provide useful function support.
> How about this? I’ll change back the command line argument to use dot and
> create another KIP to address the tools argument standardization. And we
> will do it in another patch.
>
> Jiangjie (Becket) Qin
>
> On 2/27/15, 12:43 PM, "Gwen Shapira"  wrote:
>
>>The biggest gap in tool standardization and MirrorMaker is the fact
>>that MirrorMaker takes 2 configuration files as inputs with required
>>parameters that can't be set on command line.
>>
>>If we are breaking back-compatibility, perhaps we can standardize this
>>part too?
>>
>>On Fri, Feb 27, 2015 at 12:36 PM, Jay Kreps  wrote:
>>> Yeah it will break the existing usage but personally I think it is
>>>worth it
>>> to be standard across all our tools.
>>>
>>> -Jay
>>>
>>> On Fri, Feb 27, 2015 at 9:53 AM, Jiangjie Qin
>>>
>>> wrote:
>>>
 Hi Jay,

 I just modified the KIP. The only concern I have about this change is that
 it will break existing deployments. And we need to change the command line
 arguments format for other tools as well. It is definitely better that we
 conform to the unix standard. It is just I am not sure if the change is worth
 it given we have been using this argument format for a while.

 Jiangjie (Becket) Qin

 On 2/26/15, 8:40 PM, "Jay Kreps"  wrote:

 >Can we change the command line arguments for mm to match the command line
 >arguments elsewhere. This proposal seems to have two formats:
 >*--consumer.rebalance.listener*
 >and
 >*--abortOnSendFail*
 >The '.' separators for command line options predate this JIRA but I think
 >the new camelCase option is a new invention. All the other command line
 >tools, as well as pretty much all of unix uses dashes like this:
 >*--consumer-rebalance-listener*
 >I don't really know the history of this but let's move it to normal unix
 >dashes across the board as well as examine the options for any other
 >inconsistencies.
 >
 >-Jay
 >
 >
 >On Thu, Feb 26, 2015 at 11:57 AM, Jiangjie Qin
>>> >
 >wrote:
 >
 >> Hi Neha,
 >>
 >> Thanks for the comment. That’s a really good point.
 >>
 >> Originally I’m thinking about allowing user to tweak some parameter as
 >> needed.
 >> For example, some user might want to have pipeline enabled and can
 >> tolerate reordering, some user might want to use acks=1 or acks=0, some
 >> might want to move forward when error is encountered in callback.
 >> So we don’t want to enforce all the settings of no.data.loss. Meanwhile
 >> we want to make the life easier for the users who want no data loss so
 >> they don’t need to set the configs one by one, therefore we created this
 >> option.
 >>
 >> But as you suggested, we can probably make no.data.loss settings to be
 >> default and removed the --no.data.loss option, so if people want to tweak
 >> the settings, they can just change them, otherwise they get the default
 >> no-data-loss settings.
 >>
 >> I’ll modify the KIP.
 >>
 >> Thanks.
 >>
 >> Jiangjie (Becket) Qin
 >>
 >> On 2/26/15, 8:58 AM, "Neha Narkhede"  wrote:
 >>
 >> >Hey Becket,
 >> >
 >> >The KIP proposes addition of a --no.data.loss command line option to the
 >> >MirrorMaker. Though when would the user not want that option? I'm wondering
 >> >what the benefit of providing that option is if every user would want that
 >> >for correct mirroring behavior.
 >> >
 >> >Other than that, the KIP looks great!
 >> >
 >> >Thanks,
 >> >Neha
 >> >
 >> >On Wed, Feb 25, 2015 at 3:56 PM, Jiangjie Qin
 >>
 >> >wrote:
 >> >
 >> >> For 1), the current design allow you to do it. The customizable message
 >> >> handler takes in a ConsumerRecord and spit a List, you can
 >> >> just put a topic for the ProducerRecord different from ConsumerRecord.
 >> >>
 >> >> WRT performance, we did some test in LinkedIn, the performance looks
 >> >> good to us.
 >> >>
 >> >> Jiangjie (Becket) Qin
 >> >>
 >> >> On 2/25/15, 3:41 PM, "Bhavesh Mistry"

 >> >>wrote:
 >> >>
 >> >> >Hi Jiangjie,
 >> >> >
 >> >> >It might be too late.  But, I wanted to bring-up following use case
 >> >> >for adopting new MM:
 >> >> >
 >> >> >1) Ability to publish message from src topic to different destination
 >> >> >topic via --overidenTopics

Re: Review Request 29647: Patch for KAFKA-1697

2015-02-27 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29647/#review74592
---


Sorry for the late review. Have one comment below.


core/src/main/scala/kafka/cluster/Partition.scala


Should we just remove requiredAcks completely since 
checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?


- Jun Rao


On Feb. 13, 2015, 2:57 a.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29647/
> ---
> 
> (Updated Feb. 13, 2015, 2:57 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1697
> https://issues.apache.org/jira/browse/KAFKA-1697
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> added early handling of invalid number of acks to handler and a test
> 
> 
> merging with current trunk
> 
> 
> moved check for legal requiredAcks to append and fixed the tests accordingly
> 
> 
> changing exception back to retriable
> 
> 
> cleaning unused exceptions
> 
> 
> refactored appendToLog for clarity
> 
> 
> KAFKA-1948; Fix ConsumerTest.testPartitionReassignmentCallback handling 
> issue; reviewed by Gwen Shapira
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> KAFKA-1697
> 
> 
> improved readability of append rules
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/errors/InvalidRequiredAcksException.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
>  a6107b818947d6d6818c85cdffcb2b13f69a55c0 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> a8deac4ce5149129d0a6f44c0526af9d55649a36 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> e6ad8be5e33b6fb31c078ad78f8de709869ddc04 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 6ee7d8819a9ef923f3a65c865a0a3d8ded8845f0 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
> 798f035df52e405176f558806584ce25e8c392ac 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> a1f72f8c2042ff2a43af503b2e06f84706dad9db 
>   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
> faa907131ed0aa94a7eacb78c1ffb576062be87a 
> 
> Diff: https://reviews.apache.org/r/29647/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Gwen Shapira
> 
>



ConsumerTest

2015-02-27 Thread Neha Narkhede
Would anyone object if I commented the kafka.api.ConsumerTest out until
it is fixed? It hangs and is making accepting patches very time-consuming.

-- 
Thanks,
Neha


Re: ConsumerTest

2015-02-27 Thread Harsha
+1

On Fri, Feb 27, 2015, at 01:37 PM, Neha Narkhede wrote:
> Would anyone object if I commented out the kafka.api.ConsumerTest out
> until
> it is fixed? It hangs and is making accepting patches very
> time-consuming.
> 
> -- 
> Thanks,
> Neha


Re: ConsumerTest

2015-02-27 Thread Gwen Shapira
Actually, we will greatly appreciate :)

On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede  wrote:
> Would anyone object if I commented out the kafka.api.ConsumerTest out until
> it is fixed? It hangs and is making accepting patches very time-consuming.
>
> --
> Thanks,
> Neha


Re: ConsumerTest

2015-02-27 Thread Jiangjie Qin
+1

On 2/27/15, 1:41 PM, "Harsha"  wrote:

>+1
>
>On Fri, Feb 27, 2015, at 01:37 PM, Neha Narkhede wrote:
>> Would anyone object if I commented out the kafka.api.ConsumerTest out
>> until
>> it is fixed? It hangs and is making accepting patches very
>> time-consuming.
>> 
>> -- 
>> Thanks,
>> Neha



Re: ConsumerTest

2015-02-27 Thread Ashish Singh
+1

On Fri, Feb 27, 2015 at 1:42 PM, Gwen Shapira  wrote:

> Actually, we will greatly appreciate :)
>
> On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede  wrote:
> > Would anyone object if I commented out the kafka.api.ConsumerTest out
> until
> > it is fixed? It hangs and is making accepting patches very
> time-consuming.
> >
> > --
> > Thanks,
> > Neha
>



-- 

Regards,
Ashish


Re: Review Request 29912: Patch for KAFKA-1852

2015-02-27 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/
---

(Updated Feb. 27, 2015, 9:50 p.m.)


Review request for kafka.


Bugs: KAFKA-1852
https://issues.apache.org/jira/browse/KAFKA-1852


Repository: kafka


Description (updated)
---

KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic. Added 
contains method to MetadataCache.


KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.


KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.


KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.


Diffs (updated)
-

  core/src/main/scala/kafka/server/KafkaApis.scala 
703886a1d48e6d2271da67f8b89514a6950278dd 
  core/src/main/scala/kafka/server/KafkaServer.scala 
426e522fc9819a0fc0f4e8269033552d716eb066 
  core/src/main/scala/kafka/server/MetadataCache.scala 
4c70aa7e0157b85de5e24736ebf487239c4571d0 
  core/src/main/scala/kafka/server/OffsetManager.scala 
83d52643028c5628057dc0aa29819becfda61fdb 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
a2bb8855c3c0586b6b45b53ce534edee31b3bd12 

Diff: https://reviews.apache.org/r/29912/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Updated] (KAFKA-1852) OffsetCommitRequest can commit offset on unknown topic

2015-02-27 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1852:
--
Attachment: KAFKA-1852_2015-02-27_13:50:34.patch

> OffsetCommitRequest can commit offset on unknown topic
> --
>
> Key: KAFKA-1852
> URL: https://issues.apache.org/jira/browse/KAFKA-1852
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1852.patch, KAFKA-1852_2015-01-19_10:44:01.patch, 
> KAFKA-1852_2015-02-12_16:46:10.patch, KAFKA-1852_2015-02-16_13:21:46.patch, 
> KAFKA-1852_2015-02-18_13:13:17.patch, KAFKA-1852_2015-02-27_13:50:34.patch
>
>
> Currently, we allow an offset to be committed to Kafka, even when the 
> topic/partition for the offset doesn't exist. We probably should disallow 
> that and send an error back in that case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
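The shape of the fix, rejecting a commit when the metadata cache has no entry for the topic, can be illustrated with a toy stand-in (the Set below is hypothetical; the real MetadataCache holds full per-partition state, and the patch adds a contains-style lookup to it):

```java
import java.util.HashSet;
import java.util.Set;

public class OffsetCommitGuard {

    // Hypothetical stand-in for MetadataCache: a plain set of known topic
    // names is enough to show the new existence check on the commit path.
    static final Set<String> knownTopics = new HashSet<>();

    // Before the patch, a commit for an unknown topic was accepted silently;
    // after it, the broker answers with an error code instead.
    static String handleCommit(String topic) {
        if (!knownTopics.contains(topic)) {
            return "UnknownTopicOrPartition";
        }
        return "NoError";
    }

    public static void main(String[] args) {
        knownTopics.add("events");
        System.out.println(handleCommit("events"));
        System.out.println(handleCommit("missing"));
    }
}
```

As the review thread notes, a topic that is mid-deletion still passes this check, which is why that case is split into a separate jira.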


[jira] [Commented] (KAFKA-1852) OffsetCommitRequest can commit offset on unknown topic

2015-02-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340874#comment-14340874
 ] 

Sriharsha Chintalapani commented on KAFKA-1852:
---

Updated reviewboard https://reviews.apache.org/r/29912/diff/
 against branch origin/trunk

> OffsetCommitRequest can commit offset on unknown topic
> --
>
> Key: KAFKA-1852
> URL: https://issues.apache.org/jira/browse/KAFKA-1852
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1852.patch, KAFKA-1852_2015-01-19_10:44:01.patch, 
> KAFKA-1852_2015-02-12_16:46:10.patch, KAFKA-1852_2015-02-16_13:21:46.patch, 
> KAFKA-1852_2015-02-18_13:13:17.patch, KAFKA-1852_2015-02-27_13:50:34.patch
>
>
> Currently, we allow an offset to be committed to Kafka, even when the 
> topic/partition for the offset doesn't exist. We probably should disallow 
> that and send an error back in that case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29912: Patch for KAFKA-1852

2015-02-27 Thread Sriharsha Chintalapani


> On Feb. 27, 2015, 8:47 p.m., Joel Koshy wrote:
> > Minor locking issue noted below. I can take care of that.
> > 
> > This obviously does not cover the case of committing offsets to a topic 
> > that is currently being deleted. I think that can be done in a separate 
> > jira. Can you file one?

Joel,
  Will open a new jira and send a PR. Thanks.


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/#review74569
---


On Feb. 27, 2015, 9:50 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29912/
> ---
> 
> (Updated Feb. 27, 2015, 9:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1852
> https://issues.apache.org/jira/browse/KAFKA-1852
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic. Added 
> contains method to MetadataCache.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 703886a1d48e6d2271da67f8b89514a6950278dd 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 426e522fc9819a0fc0f4e8269033552d716eb066 
>   core/src/main/scala/kafka/server/MetadataCache.scala 
> 4c70aa7e0157b85de5e24736ebf487239c4571d0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 83d52643028c5628057dc0aa29819becfda61fdb 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> a2bb8855c3c0586b6b45b53ce534edee31b3bd12 
> 
> Diff: https://reviews.apache.org/r/29912/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: ConsumerTest

2015-02-27 Thread Neha Narkhede
Wow. That was quick :-)
Any committers who would also like to give a +1?

On Fri, Feb 27, 2015 at 1:44 PM, Ashish Singh  wrote:

> +1
>
> On Fri, Feb 27, 2015 at 1:42 PM, Gwen Shapira 
> wrote:
>
> > Actually, we will greatly appreciate :)
> >
> > On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede 
> wrote:
> > > Would anyone object if I commented out the kafka.api.ConsumerTest out
> > until
> > > it is fixed? It hangs and is making accepting patches very
> > time-consuming.
> > >
> > > --
> > > Thanks,
> > > Neha
> >
>
>
>
> --
>
> Regards,
> Ashish
>



-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2015-02-27 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340883#comment-14340883
 ] 

Gwen Shapira commented on KAFKA-1501:
-

Is there anything pending here? Any reason not to commit this patch and reduce 
the test failures?

[~guozhang] [~jkreps] [~junrao] 



> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, test-100.out, test-27.out, 
> test-29.out, test-32.out, test-35.out, test-38.out, test-4.out, test-42.out, 
> test-45.out, test-46.out, test-51.out, test-55.out, test-58.out, test-59.out, 
> test-60.out, test-69.out, test-72.out, test-74.out, test-76.out, test-84.out, 
> test-87.out, test-91.out, test-92.out
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
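
The usual mitigation for the "Address already in use" failures discussed in this issue is to let the OS assign a free ephemeral port by binding to port 0, rather than reusing a previously chosen fixed port. A sketch of that approach (names are illustrative, not Kafka's `TestUtils`):

```java
import java.net.ServerSocket;

// Bind to port 0 so the OS picks a free ephemeral port. Note there is
// still a small race window between closing this probe socket and the
// server under test re-binding the port, which is one reason such test
// failures can remain transient even with this technique.
public class ChoosePorts {
    public static int chooseFreePort() throws Exception {
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        }
    }
}
```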


Re: ConsumerTest

2015-02-27 Thread Joel Koshy
+1

On Fri, Feb 27, 2015 at 01:50:07PM -0800, Neha Narkhede wrote:
> Wow. That was quick :-)
> Any committers who would also like to give a +1?
> 
> On Fri, Feb 27, 2015 at 1:44 PM, Ashish Singh  wrote:
> 
> > +1
> >
> > On Fri, Feb 27, 2015 at 1:42 PM, Gwen Shapira 
> > wrote:
> >
> > > Actually, we will greatly appreciate :)
> > >
> > > On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede 
> > wrote:
> > > > Would anyone object if I commented out the kafka.api.ConsumerTest out
> > > until
> > > > it is fixed? It hangs and is making accepting patches very
> > > time-consuming.
> > > >
> > > > --
> > > > Thanks,
> > > > Neha
> > >
> >
> >
> >
> > --
> >
> > Regards,
> > Ashish
> >
> 
> 
> 
> -- 
> Thanks,
> Neha



Re: Review Request 31256: Patch for KAFKA-1887

2015-02-27 Thread Sriharsha Chintalapani


> On Feb. 21, 2015, 7:03 p.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/server/KafkaServer.scala, line 317
> > 
> >
> > Overall, this change makes sense to me. However, now the startup and 
> > shutdown sequence is complicated and tricky to understand. It will be very 
> > helpful to document your findings in a comment on both startup and shutdown 
> > on why the sequence has to be this way. Also let's leave a note on being 
> > very careful before changing the sequence.

Neha, sorry, I didn't see the comments. I will update the patch with comments and 
send a new PR. Thanks for the reminder, Gwen :).


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31256/#review73413
---


On Feb. 21, 2015, 9:12 a.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31256/
> ---
> 
> (Updated Feb. 21, 2015, 9:12 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1887
> https://issues.apache.org/jira/browse/KAFKA-1887
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1887. controller error message on shutting the last broker.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 7e5ddcb9be8fcef3df6ebc82a13ef44ef95f73ae 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> ba48a636dd0b0ed06646d56bb36aa3d43228604f 
> 
> Diff: https://reviews.apache.org/r/31256/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: ConsumerTest

2015-02-27 Thread Neha Narkhede
Thanks for the quick responses. I deleted it for now, since commenting out
code is silly given that you can revive the file from version control
later.

On Fri, Feb 27, 2015 at 2:03 PM, Joel Koshy  wrote:

> +1
>
> On Fri, Feb 27, 2015 at 01:50:07PM -0800, Neha Narkhede wrote:
> > Wow. That was quick :-)
> > Any committers who would also like to give a +1?
> >
> > On Fri, Feb 27, 2015 at 1:44 PM, Ashish Singh 
> wrote:
> >
> > > +1
> > >
> > > On Fri, Feb 27, 2015 at 1:42 PM, Gwen Shapira 
> > > wrote:
> > >
> > > > Actually, we will greatly appreciate :)
> > > >
> > > > On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede 
> > > wrote:
> > > > > Would anyone object if I commented out the kafka.api.ConsumerTest
> out
> > > > until
> > > > > it is fixed? It hangs and is making accepting patches very
> > > > time-consuming.
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Neha
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Regards,
> > > Ashish
> > >
> >
> >
> >
> > --
> > Thanks,
> > Neha
>
>


-- 
Thanks,
Neha


Re: Review Request 24704: Patch for KAFKA-1499

2015-02-27 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/#review74612
---


Sorry for the late review. Just a minor comment below.


core/src/main/scala/kafka/message/CompressionCodec.scala


Do we need the inner bracket on compressionType.toLowerCase()?


- Jun Rao


On Dec. 26, 2014, 4:09 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24704/
> ---
> 
> (Updated Dec. 26, 2014, 4:09 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1499
> https://issues.apache.org/jira/browse/KAFKA-1499
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Support for broker-side compression, addressing Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/Log.scala 
> 024506cd00556a0037c0b3b6b603da32968b69ab 
>   core/src/main/scala/kafka/log/LogConfig.scala 
> ca7a99e99f641b2694848b88bf4ae94657071d03 
>   core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
> 788c7864bc881b935975ab4a4e877b690e65f1f1 
>   core/src/main/scala/kafka/message/CompressionCodec.scala 
> 9439d2bc29a0c2327085f08577c6ce1b01db1489 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6e26c5436feb4629d17f199011f3ebb674aa767f 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
>   core/src/test/scala/kafka/log/LogConfigTest.scala 
> 99b0df7b69c5e0a1b6251c54592c6ef63b1800fe 
>   core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
> 4e45d965bc423192ac704883ee75e9727006f89b 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 2377abe4933e065d037828a214c3a87e1773a8ef 
> 
> Diff: https://reviews.apache.org/r/24704/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>
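
For context on the `toLowerCase()` discussion above: KAFKA-1499 adds a broker-side `compression.type` config where the value "producer" means the broker retains whatever codec the producer used, and any other value makes the broker recompress with that codec. A simplified, hypothetical sketch of the selection (the helper name is not Kafka's; the real logic lives in `kafka.message.CompressionCodec` and `ByteBufferMessageSet`):

```java
// Simplified sketch of broker-side compression selection, assuming
// KAFKA-1499 semantics: "producer" keeps the producer's codec, anything
// else forces recompression with the broker's configured codec.
public class BrokerCompression {
    public static String targetCodec(String brokerCompressionType,
                                     String producerCodec) {
        // Jun's comment above concerns an extra pair of parentheses around
        // a toLowerCase() call; the normalization itself is just this:
        String normalized = brokerCompressionType.toLowerCase();
        return normalized.equals("producer") ? producerCodec : normalized;
    }
}
```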



[jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340959#comment-14340959
 ] 

Tong Li commented on KAFKA-1988:


Jun Rao, I got the point now. Thanks so much for your very detailed 
explanation. A patch is coming in a few minutes with only that change.

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE are returned as 
> an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
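
For context, the method in question masks off the sign bit rather than computing a true absolute value, which is why the rename to `toPositive()` discussed in the comments above is the compatible fix: partition assignment only needs a stable non-negative number, not a mathematical `abs`. A sketch of the behavior (the standalone class is illustrative; the method body matches the masking described in the discussion):

```java
// What the old Utils.abs actually computes: it clears the sign bit,
// which is NOT the mathematical absolute value for negative inputs.
// Renaming it toPositive keeps existing partition assignments stable
// while removing the misleading name.
public class ToPositive {
    public static int toPositive(int n) {
        return n & 0x7fffffff;
    }
}
```

For example, `toPositive(-1)` is `Integer.MAX_VALUE` (not 1), and `toPositive(Integer.MIN_VALUE)` is 0 — whereas `Math.abs(Integer.MIN_VALUE)` overflows and stays negative, which is the hazard the original code was avoiding.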


Re: Review Request 29912: Patch for KAFKA-1852

2015-02-27 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29912/#review74614
---

Ship it!


Ship It!

- Joel Koshy


On Feb. 27, 2015, 9:50 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29912/
> ---
> 
> (Updated Feb. 27, 2015, 9:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1852
> https://issues.apache.org/jira/browse/KAFKA-1852
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic. Added 
> contains method to MetadataCache.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> KAFKA-1852. OffsetCommitRequest can commit offset on unknown topic.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 703886a1d48e6d2271da67f8b89514a6950278dd 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 426e522fc9819a0fc0f4e8269033552d716eb066 
>   core/src/main/scala/kafka/server/MetadataCache.scala 
> 4c70aa7e0157b85de5e24736ebf487239c4571d0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 83d52643028c5628057dc0aa29819becfda61fdb 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> a2bb8855c3c0586b6b45b53ce534edee31b3bd12 
> 
> Diff: https://reviews.apache.org/r/29912/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: Review Request 24704: Patch for KAFKA-1499

2015-02-27 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/#review74615
---



core/src/main/scala/kafka/message/CompressionCodec.scala


Not sure what you mean - can you elaborate?


- Joel Koshy


On Dec. 26, 2014, 4:09 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24704/
> ---
> 
> (Updated Dec. 26, 2014, 4:09 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1499
> https://issues.apache.org/jira/browse/KAFKA-1499
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Support for broker-side compression, addressing Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/Log.scala 
> 024506cd00556a0037c0b3b6b603da32968b69ab 
>   core/src/main/scala/kafka/log/LogConfig.scala 
> ca7a99e99f641b2694848b88bf4ae94657071d03 
>   core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
> 788c7864bc881b935975ab4a4e877b690e65f1f1 
>   core/src/main/scala/kafka/message/CompressionCodec.scala 
> 9439d2bc29a0c2327085f08577c6ce1b01db1489 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6e26c5436feb4629d17f199011f3ebb674aa767f 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
>   core/src/test/scala/kafka/log/LogConfigTest.scala 
> 99b0df7b69c5e0a1b6251c54592c6ef63b1800fe 
>   core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
> 4e45d965bc423192ac704883ee75e9727006f89b 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 2377abe4933e065d037828a214c3a87e1773a8ef 
> 
> Diff: https://reviews.apache.org/r/24704/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



Re: ConsumerTest

2015-02-27 Thread Guozhang Wang
Thanks Neha, I will make sure the ConsumerTest gets fixed when I check in
1910.

On Fri, Feb 27, 2015 at 2:12 PM, Neha Narkhede  wrote:

> Thanks for the quick responses. I deleted it for now, since commenting out
> code is silly given that you can revive the file from version control
> later.
>
> On Fri, Feb 27, 2015 at 2:03 PM, Joel Koshy  wrote:
>
> > +1
> >
> > On Fri, Feb 27, 2015 at 01:50:07PM -0800, Neha Narkhede wrote:
> > > Wow. That was quick :-)
> > > Any committers who would also like to give a +1?
> > >
> > > On Fri, Feb 27, 2015 at 1:44 PM, Ashish Singh 
> > wrote:
> > >
> > > > +1
> > > >
> > > > On Fri, Feb 27, 2015 at 1:42 PM, Gwen Shapira  >
> > > > wrote:
> > > >
> > > > > Actually, we will greatly appreciate :)
> > > > >
> > > > > On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede 
> > > > wrote:
> > > > > > Would anyone object if I commented out the kafka.api.ConsumerTest
> > out
> > > > > until
> > > > > > it is fixed? It hangs and is making accepting patches very
> > > > > time-consuming.
> > > > > >
> > > > > > --
> > > > > > Thanks,
> > > > > > Neha
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Regards,
> > > > Ashish
> > > >
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Neha
> >
> >
>
>
> --
> Thanks,
> Neha
>



-- 
-- Guozhang


Jenkins build is back to normal : KafkaPreCommit #23

2015-02-27 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340993#comment-14340993
 ] 

Guozhang Wang commented on KAFKA-1988:
--

+1. Thanks Tong / Jun for fixing this and preserving compatibility at the same 
time.

Is the current o.a.k.c.c.u.Utils.abs() used anywhere else?

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE are returned as 
> an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1664) Kafka does not properly parse multiple ZK nodes with non-root chroot

2015-02-27 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1664:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch. Pushed to trunk.

> Kafka does not properly parse multiple ZK nodes with non-root chroot
> 
>
> Key: KAFKA-1664
> URL: https://issues.apache.org/jira/browse/KAFKA-1664
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Ricky Saltzer
>Assignee: Ashish K Singh
>Priority: Minor
>  Labels: newbie
> Attachments: KAFKA-1664.1.patch, KAFKA-1664.2.patch, 
> KAFKA-1664.patch, KAFKA-1664_2015-01-29_10:26:20.patch, 
> KAFKA-1664_2015-02-24_11:02:23.patch
>
>
> When using a non-root ZK directory for Kafka, if you specify multiple ZK 
> servers, Kafka does not seem to properly parse the connection string. 
> *Error*
> {code}
> [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
> baelish-001.edh.cloudera.com:2181/kafka,baelish-002.edh.cloudera.com:2181/kafka,baelish-003.edh.cloudera.com:2181/kafka
>  --topic test-topic
> [2014-10-01 15:31:04,629] ERROR Error processing message, stopping consumer:  
> (kafka.consumer.ConsoleConsumer$)
> java.lang.IllegalArgumentException: Path length must be > 0
>   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:48)
>   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:35)
>   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:766)
>   at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
>   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
>   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:213)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at kafka.utils.ZkUtils$.createParentPath(ZkUtils.scala:245)
>   at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:256)
>   at 
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:268)
>   at 
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:306)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.scala:226)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.<init>(ZookeeperConsumerConnector.scala:755)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:145)
>   at kafka.consumer.ConsoleConsumer$.main(ConsoleConsumer.scala:196)
>   at kafka.consumer.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
> *Working*
> {code}
> [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
> baelish-001.edh.cloudera.com:2181/kafka --topic test-topic
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
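
The stack trace above results from passing a chroot on every host entry: ZooKeeper expects a single chroot suffix after the last host, e.g. `host1:2181,host2:2181/kafka`, not `host1:2181/kafka,host2:2181/kafka`. A hypothetical normalizer (not Kafka's actual code) showing the accepted form:

```java
import java.util.ArrayList;
import java.util.List;

// Rewrites "h1:2181/kafka,h2:2181/kafka" into the form ZooKeeper actually
// accepts: "h1:2181,h2:2181/kafka" -- one trailing chroot after all hosts.
// Class and method names are illustrative only.
public class ZkConnect {
    public static String normalize(String raw) {
        List<String> hosts = new ArrayList<>();
        String chroot = "";
        for (String part : raw.split(",")) {
            int slash = part.indexOf('/');
            if (slash >= 0) {
                chroot = part.substring(slash);        // keep the last chroot seen
                hosts.add(part.substring(0, slash));   // strip per-host chroot
            } else {
                hosts.add(part);
            }
        }
        return String.join(",", hosts) + chroot;
    }
}
```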


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2015-02-27 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340998#comment-14340998
 ] 

Guozhang Wang commented on KAFKA-1501:
--

Gwen, the problem is that it seems we still have not been able to fix this with 
my and Ewen's patches together (sigh..). If you have other ideas / proposals 
I would very much love to discuss them, as this issue is biting me nearly every 
day. 

> transient unit tests failures due to port already in use
> 
>
> Key: KAFKA-1501
> URL: https://issues.apache.org/jira/browse/KAFKA-1501
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie
> Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
> KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, test-100.out, test-27.out, 
> test-29.out, test-32.out, test-35.out, test-38.out, test-4.out, test-42.out, 
> test-45.out, test-46.out, test-51.out, test-55.out, test-58.out, test-59.out, 
> test-60.out, test-69.out, test-72.out, test-74.out, test-76.out, test-84.out, 
> test-87.out, test-91.out, test-92.out
>
>
> Saw the following transient failures.
> kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
> kafka.common.KafkaException: Socket server failed to bind to 
> localhost:59909: Address already in use.
> at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
> at kafka.network.Acceptor.<init>(SocketServer.scala:141)
> at kafka.network.SocketServer.startup(SocketServer.scala:68)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
> at 
> kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ConsumerTest

2015-02-27 Thread Neha Narkhede
Sounds good. Thanks!

On Fri, Feb 27, 2015 at 3:01 PM, Guozhang Wang  wrote:

> Thanks Neha, I will make sure the ConsumerTest gets fixed when I check in
> 1910.
>
> On Fri, Feb 27, 2015 at 2:12 PM, Neha Narkhede  wrote:
>
> > Thanks for the quick responses. I deleted it for now, since commenting
> out
> > code is silly given that you can revive the file from version control
> > later.
> >
> > On Fri, Feb 27, 2015 at 2:03 PM, Joel Koshy  wrote:
> >
> > > +1
> > >
> > > On Fri, Feb 27, 2015 at 01:50:07PM -0800, Neha Narkhede wrote:
> > > > Wow. That was quick :-)
> > > > Any committers who would also like to give a +1?
> > > >
> > > > On Fri, Feb 27, 2015 at 1:44 PM, Ashish Singh 
> > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Fri, Feb 27, 2015 at 1:42 PM, Gwen Shapira <
> gshap...@cloudera.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Actually, we will greatly appreciate :)
> > > > > >
> > > > > > On Fri, Feb 27, 2015 at 1:37 PM, Neha Narkhede <
> n...@confluent.io>
> > > > > wrote:
> > > > > > > Would anyone object if I commented out the
> kafka.api.ConsumerTest
> > > out
> > > > > > until
> > > > > > > it is fixed? It hangs and is making accepting patches very
> > > > > > time-consuming.
> > > > > > >
> > > > > > > --
> > > > > > > Thanks,
> > > > > > > Neha
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Regards,
> > > > > Ashish
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Neha
> > >
> > >
> >
> >
> > --
> > Thanks,
> > Neha
> >
>
>
>
> --
> -- Guozhang
>



-- 
Thanks,
Neha


Review Request 31566: Patch for KAFKA-1988

2015-02-27 Thread Tong Li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31566/
---

Review request for kafka.


Bugs: KAFKA-1988
https://issues.apache.org/jira/browse/KAFKA-1988


Repository: kafka


Description
---

KAFKA-1988 org.apache.kafka.common.utils.Utils.abs method returns wrong value 
for negative numbers


Diffs
-

  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 dfb936d8f0d5842ee5c7a7f1584c5ed7463c4cf8 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
69530c187cd1c41b8173b61de6f982aafe65c9fe 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
4c2ea34815b63174732d58b699e1a0a9e6ec3b6f 

Diff: https://reviews.apache.org/r/31566/diff/


Testing
---


Thanks,

Tong Li



[jira] [Commented] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341004#comment-14341004
 ] 

Tong Li commented on KAFKA-1988:


Created reviewboard https://reviews.apache.org/r/31566/diff/
 against branch origin/trunk

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE are returned as 
> an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1988) org.apache.kafka.common.utils.Utils.abs method returns wrong value for negative numbers.

2015-02-27 Thread Tong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tong Li updated KAFKA-1988:
---
Attachment: KAFKA-1988.patch

> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers.
> 
>
> Key: KAFKA-1988
> URL: https://issues.apache.org/jira/browse/KAFKA-1988
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Tong Li
>Assignee: Tong Li
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1988.patch, KAFKA-1988.patch, KAFKA-1988.patch
>
>
> org.apache.kafka.common.utils.Utils.abs method returns wrong value for 
> negative numbers. The method only returns the intended value for positive 
> numbers. All negative numbers except Integer.MIN_VALUE are returned as 
> an unsigned integer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29647: Patch for KAFKA-1697

2015-02-27 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29647/#review74623
---



core/src/main/scala/kafka/cluster/Partition.scala


good point


- Joel Koshy


On Feb. 13, 2015, 2:57 a.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29647/
> ---
> 
> (Updated Feb. 13, 2015, 2:57 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1697
> https://issues.apache.org/jira/browse/KAFKA-1697
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> added early handling of invalid number of acks to handler and a test
> 
> 
> merging with current trunk
> 
> 
> moved check for legal requiredAcks to append and fixed the tests accordingly
> 
> 
> changing exception back to retriable
> 
> 
> cleaning unused exceptions
> 
> 
> refactored appendToLog for clarity
> 
> 
> KAFKA-1948; Fix ConsumerTest.testPartitionReassignmentCallback handling 
> issue; reviewed by Gwen Shapira
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> KAFKA-1697
> 
> 
> improved readability of append rules
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/common/errors/InvalidRequiredAcksException.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
>  a6107b818947d6d6818c85cdffcb2b13f69a55c0 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> a8deac4ce5149129d0a6f44c0526af9d55649a36 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> e6ad8be5e33b6fb31c078ad78f8de709869ddc04 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 6ee7d8819a9ef923f3a65c865a0a3d8ded8845f0 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> fb948b9ab28c516e81dab14dcbe211dcd99842b6 
>   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
> 798f035df52e405176f558806584ce25e8c392ac 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> a1f72f8c2042ff2a43af503b2e06f84706dad9db 
>   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
> faa907131ed0aa94a7eacb78c1ffb576062be87a 
> 
> Diff: https://reviews.apache.org/r/29647/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Gwen Shapira
> 
>
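
Per the description above, the patch moves the check for legal `requiredAcks` early into the append path and introduces `InvalidRequiredAcksException`; the legal values are -1 (wait for all in-sync replicas), 0 (no acknowledgment), and 1 (leader only). A minimal sketch, with `IllegalArgumentException` standing in for Kafka's exception and all names illustrative:

```java
// Sketch of the early acks validation described in the review above.
// Valid values: -1 (all in-sync replicas), 0 (no ack), 1 (leader only).
// IllegalArgumentException stands in for Kafka's InvalidRequiredAcksException.
public class AcksValidation {
    public static boolean isValidRequiredAcks(short acks) {
        return acks >= -1 && acks <= 1;
    }

    public static void validate(short acks) {
        if (!isValidRequiredAcks(acks)) {
            throw new IllegalArgumentException("Invalid required acks: " + acks);
        }
    }
}
```

Rejecting the request before the append, rather than after, is what lets the broker return a clear error instead of partially applying the write.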



Build failed in Jenkins: Kafka-trunk #410

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[neha.narkhede] Deleting the ConsumerTest until the issue with the hanging test 
is resolved; discussed on the mailing list and got several +1s

--
[...truncated 1299 lines...]
kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.MetricsTest > testMetricsLeak PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.SimpleFetchTest > testReadFromLog PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled PASSED

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup PASSED

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndHoursProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testUncleanElectionDisabled PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testDefaultCompressionType PASSED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigC

Build failed in Jenkins: KafkaPreCommit #24

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[neha.narkhede] KAFKA-1664 Kafka does not properly parse multiple ZK nodes with 
non-root chroot; reviewed by Neha Narkhede and Jun Rao

--
[...truncated 1357 lines...]
kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.test.ProducerSendTest > testSendOffset PASSED

kafka.api.test.ProducerSendTest > testSerializer PASSED

kafka.api.test.ProducerSendTest > testClose PASSED

kafka.api.test.ProducerSendTest > testSendToPartition PASSED

kafka.api.test.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.test.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.test.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.test.ProducerFailureHandlingTest > testBrokerFailure PASSED

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic 
PASSED

kafka.api.test.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown FAILED
org.scalatest.junit.JUnitTestFailedError: Expected 
NotEnoughReplicasException when producing to topic with fewer brokers than 
min.insync.replicas
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
at 
org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:147)
at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:147)
at 
kafka.api.test.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:352)

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafk

Re: Review Request 30084: Patch for KAFKA-1866

2015-02-27 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30084/#review74632
---



core/src/test/scala/unit/kafka/metrics/MetricsTest.scala


space after ,



core/src/test/scala/unit/kafka/metrics/MetricsTest.scala


remove space after metricGroups


- Neha Narkhede


On Feb. 11, 2015, 5:25 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30084/
> ---
> 
> (Updated Feb. 11, 2015, 5:25 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1866
> https://issues.apache.org/jira/browse/KAFKA-1866
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1866. LogStartOffset gauge throws exceptions after log.delete().
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> e6ad8be5e33b6fb31c078ad78f8de709869ddc04 
>   core/src/main/scala/kafka/log/Log.scala 
> 846023bb98d0fa0603016466360c97071ac935ea 
>   core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
> 3cf23b3d6d4460535b90cfb36281714788fc681c 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 21d0ed2cb7c9459261d3cdc7c21dece5e2079698 
> 
> Diff: https://reviews.apache.org/r/30084/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: Review Request 24704: Patch for KAFKA-1499

2015-02-27 Thread Jun Rao


> On Feb. 27, 2015, 10:53 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/message/CompressionCodec.scala, line 46
> > 
> >
> > Not sure what you mean - can you elaborate?

Instead of contains((compressionType.toLowerCase())), can't it just be 
contains(compressionType.toLowerCase())?


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/#review74615
---


On Dec. 26, 2014, 4:09 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24704/
> ---
> 
> (Updated Dec. 26, 2014, 4:09 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1499
> https://issues.apache.org/jira/browse/KAFKA-1499
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Support given for broker-side compression, addressing Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/Log.scala 
> 024506cd00556a0037c0b3b6b603da32968b69ab 
>   core/src/main/scala/kafka/log/LogConfig.scala 
> ca7a99e99f641b2694848b88bf4ae94657071d03 
>   core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
> 788c7864bc881b935975ab4a4e877b690e65f1f1 
>   core/src/main/scala/kafka/message/CompressionCodec.scala 
> 9439d2bc29a0c2327085f08577c6ce1b01db1489 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6e26c5436feb4629d17f199011f3ebb674aa767f 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
>   core/src/test/scala/kafka/log/LogConfigTest.scala 
> 99b0df7b69c5e0a1b6251c54592c6ef63b1800fe 
>   core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
> 4e45d965bc423192ac704883ee75e9727006f89b 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 2377abe4933e065d037828a214c3a87e1773a8ef 
> 
> Diff: https://reviews.apache.org/r/24704/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



[jira] [Created] (KAFKA-1989) New purgatory design

2015-02-27 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-1989:
---

 Summary: New purgatory design
 Key: KAFKA-1989
 URL: https://issues.apache.org/jira/browse/KAFKA-1989
 Project: Kafka
  Issue Type: Improvement
  Components: purgatory
Reporter: Yasuhiro Matsuda
Assignee: Joel Koshy


This is a new design of purgatory based on Hierarchical Timing Wheels.

https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal
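The proposal replaces the purgatory's per-request delayed-operation tracking with timing wheels. A single-level wheel, the building block of the hierarchical design, can be sketched as below. All names here are hypothetical; this is not the KAFKA-1989 code, which lives under core/src/main/scala/kafka/utils/timer/:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SimpleTimingWheel {
    private final long tickMs;                 // time span of one bucket
    private final int wheelSize;               // number of buckets
    private final Deque<Runnable>[] buckets;
    private long currentTime;                  // rounded down to a multiple of tickMs

    @SuppressWarnings("unchecked")
    public SimpleTimingWheel(long tickMs, int wheelSize, long startMs) {
        this.tickMs = tickMs;
        this.wheelSize = wheelSize;
        this.buckets = new Deque[wheelSize];
        for (int i = 0; i < wheelSize; i++) buckets[i] = new ArrayDeque<>();
        this.currentTime = startMs - (startMs % tickMs);
    }

    // O(1) insertion: a task within the wheel's span goes into the bucket
    // covering its expiration. Returns false if it has already expired
    // (the caller should run it immediately).
    public boolean add(Runnable task, long expirationMs) {
        if (expirationMs < currentTime + tickMs) return false;
        if (expirationMs >= currentTime + tickMs * wheelSize)
            throw new IllegalArgumentException(
                "beyond wheel span; a hierarchical wheel would overflow to a coarser wheel");
        int bucket = (int) ((expirationMs / tickMs) % wheelSize);
        buckets[bucket].add(task);
        return true;
    }

    // Advance the clock one tick at a time, draining each expired bucket.
    public void advanceTo(long timeMs) {
        while (currentTime + tickMs <= timeMs) {
            currentTime += tickMs;
            Deque<Runnable> bucket = buckets[(int) ((currentTime / tickMs) % wheelSize)];
            while (!bucket.isEmpty()) bucket.poll().run();
        }
    }

    public static void main(String[] args) {
        SimpleTimingWheel wheel = new SimpleTimingWheel(10, 8, 0);
        wheel.add(() -> System.out.println("fired"), 25);
        wheel.advanceTo(30); // prints "fired" when the covering tick passes
    }
}
```

Insertion and expiry are O(1) per task at the cost of one-tick granularity; the hierarchical variant described in the wiki adds overflow wheels with coarser ticks so that far-future expirations also stay O(1).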



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30084: Patch for KAFKA-1866

2015-02-27 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30084/#review74635
---

Ship it!


I will include those changes during check-in to reduce back-and-forth.

- Neha Narkhede


On Feb. 11, 2015, 5:25 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30084/
> ---
> 
> (Updated Feb. 11, 2015, 5:25 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1866
> https://issues.apache.org/jira/browse/KAFKA-1866
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1866. LogStartOffset gauge throws exceptions after log.delete().
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> e6ad8be5e33b6fb31c078ad78f8de709869ddc04 
>   core/src/main/scala/kafka/log/Log.scala 
> 846023bb98d0fa0603016466360c97071ac935ea 
>   core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
> 3cf23b3d6d4460535b90cfb36281714788fc681c 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 21d0ed2cb7c9459261d3cdc7c21dece5e2079698 
> 
> Diff: https://reviews.apache.org/r/30084/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



[jira] [Updated] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-02-27 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1866:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch. Pushed to trunk after fixing those minor space issues.

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Attachments: KAFKA-1866.patch, KAFKA-1866_2015-02-10_22:50:09.patch, 
> KAFKA-1866_2015-02-11_09:25:33.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.
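The failure mode is easy to reproduce in miniature: `head` on an empty collection throws. A minimal sketch (hypothetical names, not the actual Kafka code in core/src/main/scala/kafka/log/Log.scala) of both the fragile gauge and a defensive variant; note the committed fix instead removes the gauge in Log.delete(), so reporters never observe an empty log:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;

public class LogStartOffsetGauge {
    private final Deque<Long> segmentBaseOffsets = new ArrayDeque<>();

    public void addSegment(long baseOffset) { segmentBaseOffsets.addLast(baseOffset); }
    public void deleteAll() { segmentBaseOffsets.clear(); }

    // Mirrors logSegments.head.baseOffset: throws NoSuchElementException
    // once delete() has emptied the segment list.
    public long valueUnsafe() { return segmentBaseOffsets.getFirst(); }

    // Defensive alternative: report a sentinel instead of throwing.
    public long value() {
        Long head = segmentBaseOffsets.peekFirst();
        return head == null ? -1L : head;
    }

    public static void main(String[] args) {
        LogStartOffsetGauge gauge = new LogStartOffsetGauge();
        gauge.addSegment(42L);
        System.out.println(gauge.value());   // 42
        gauge.deleteAll();
        System.out.println(gauge.value());   // -1
        try {
            gauge.valueUnsafe();
        } catch (NoSuchElementException e) {
            // this is the exception a custom MetricsReporter would hit
            System.out.println("gauge threw: " + e);
        }
    }
}
```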



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-02-27 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1866:
-
Fix Version/s: 0.8.3

> LogStartOffset gauge throws exceptions after log.delete()
> -
>
> Key: KAFKA-1866
> URL: https://issues.apache.org/jira/browse/KAFKA-1866
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gian Merlino
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1866.patch, KAFKA-1866_2015-02-10_22:50:09.patch, 
> KAFKA-1866_2015-02-11_09:25:33.patch
>
>
> The LogStartOffset gauge does "logSegments.head.baseOffset", which throws 
> NoSuchElementException on an empty list, which can occur after a delete() of 
> the log. This makes life harder for custom MetricsReporters, since they have 
> to deal with .value() possibly throwing an exception.
> Locally we're dealing with this by having Log.delete() also call removeMetric 
> on all the gauges. That also has the benefit of not having a bunch of metrics 
> floating around for logs that the broker is not actually handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1989) New purgatory design

2015-02-27 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1989:
-
Priority: Critical  (was: Major)

> New purgatory design
> 
>
> Key: KAFKA-1989
> URL: https://issues.apache.org/jira/browse/KAFKA-1989
> Project: Kafka
>  Issue Type: Improvement
>  Components: purgatory
>Affects Versions: 0.8.2.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Critical
>
> This is a new design of purgatory based on Hierarchical Timing Wheels.
> https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1989) New purgatory design

2015-02-27 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1989:
-
Assignee: Yasuhiro Matsuda  (was: Joel Koshy)

> New purgatory design
> 
>
> Key: KAFKA-1989
> URL: https://issues.apache.org/jira/browse/KAFKA-1989
> Project: Kafka
>  Issue Type: Improvement
>  Components: purgatory
>Affects Versions: 0.8.2.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> This is a new design of purgatory based on Hierarchical Timing Wheels.
> https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1989) New purgatory design

2015-02-27 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1989:
-
Affects Version/s: 0.8.2.0

> New purgatory design
> 
>
> Key: KAFKA-1989
> URL: https://issues.apache.org/jira/browse/KAFKA-1989
> Project: Kafka
>  Issue Type: Improvement
>  Components: purgatory
>Affects Versions: 0.8.2.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> This is a new design of purgatory based on Hierarchical Timing Wheels.
> https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 31568: Patch for KAFKA-1989

2015-02-27 Thread Yasuhiro Matsuda

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31568/
---

Review request for kafka.


Bugs: KAFKA-1989
https://issues.apache.org/jira/browse/KAFKA-1989


Repository: kafka


Description
---

new purgatory implementation


Diffs
-

  core/src/main/scala/kafka/server/DelayedOperation.scala 
e317676b4dd5bb5ad9770930e694cd7282d5b6d5 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
586cf4caa95f58d5b2e6c7429732d25d2b3635c8 
  core/src/main/scala/kafka/utils/SinglyLinkedWeakList.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/timer/Timer.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/timer/TimerTask.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/timer/TimerTaskList.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/timer/TimingWheel.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
7a37617395b9e4226853913b8989f42e7301de7c 
  core/src/test/scala/unit/kafka/utils/timer/TimerTaskListTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/utils/timer/TimerTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/31568/diff/


Testing
---


Thanks,

Yasuhiro Matsuda



[jira] [Commented] (KAFKA-1989) New purgatory design

2015-02-27 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341070#comment-14341070
 ] 

Yasuhiro Matsuda commented on KAFKA-1989:
-

Created reviewboard https://reviews.apache.org/r/31568/diff/
 against branch origin/trunk

> New purgatory design
> 
>
> Key: KAFKA-1989
> URL: https://issues.apache.org/jira/browse/KAFKA-1989
> Project: Kafka
>  Issue Type: Improvement
>  Components: purgatory
>Affects Versions: 0.8.2.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Critical
> Attachments: KAFKA-1989.patch
>
>
> This is a new design of purgatory based on Hierarchical Timing Wheels.
> https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1989) New purgatory design

2015-02-27 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-1989:

Status: Patch Available  (was: Open)

> New purgatory design
> 
>
> Key: KAFKA-1989
> URL: https://issues.apache.org/jira/browse/KAFKA-1989
> Project: Kafka
>  Issue Type: Improvement
>  Components: purgatory
>Affects Versions: 0.8.2.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Critical
> Attachments: KAFKA-1989.patch
>
>
> This is a new design of purgatory based on Hierarchical Timing Wheels.
> https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1989) New purgatory design

2015-02-27 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-1989:

Attachment: KAFKA-1989.patch

> New purgatory design
> 
>
> Key: KAFKA-1989
> URL: https://issues.apache.org/jira/browse/KAFKA-1989
> Project: Kafka
>  Issue Type: Improvement
>  Components: purgatory
>Affects Versions: 0.8.2.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Critical
> Attachments: KAFKA-1989.patch
>
>
> This is a new design of purgatory based on Hierarchical Timing Wheels.
> https://cwiki.apache.org/confluence/display/KAFKA/Purgatory+Redesign+Proposal



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #411

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[neha.narkhede] KAFKA-1664 Kafka does not properly parse multiple ZK nodes with 
non-root chroot; reviewed by Neha Narkhede and Jun Rao

--
[...truncated 546 lines...]
kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.test.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.test.ProducerSendTest > testSendOffset PASSED

kafka.api.test.ProducerSendTest > testSerializer PASSED

kafka.api.test.ProducerSendTest > testClose PASSED

kafka.api.test.ProducerSendTest > testSendToPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExiste

Re: Review Request 24704: Patch for KAFKA-1499

2015-02-27 Thread Joel Koshy


> On Feb. 27, 2015, 10:53 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/message/CompressionCodec.scala, line 46
> > 
> >
> > Not sure what you mean - can you elaborate?
> 
> Jun Rao wrote:
> Instead of contains((compressionType.toLowerCase())), can't it just be 
> contains(compressionType.toLowerCase())?

I still don't get it... no, I'm kidding. You're right. I clearly cannot count 
parentheses.


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/#review74615
---


On Dec. 26, 2014, 4:09 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24704/
> ---
> 
> (Updated Dec. 26, 2014, 4:09 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1499
> https://issues.apache.org/jira/browse/KAFKA-1499
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Support given for broker-side compression, addressing Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/Log.scala 
> 024506cd00556a0037c0b3b6b603da32968b69ab 
>   core/src/main/scala/kafka/log/LogConfig.scala 
> ca7a99e99f641b2694848b88bf4ae94657071d03 
>   core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
> 788c7864bc881b935975ab4a4e877b690e65f1f1 
>   core/src/main/scala/kafka/message/CompressionCodec.scala 
> 9439d2bc29a0c2327085f08577c6ce1b01db1489 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6e26c5436feb4629d17f199011f3ebb674aa767f 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
>   core/src/test/scala/kafka/log/LogConfigTest.scala 
> 99b0df7b69c5e0a1b6251c54592c6ef63b1800fe 
>   core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
> 4e45d965bc423192ac704883ee75e9727006f89b 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 2377abe4933e065d037828a214c3a87e1773a8ef 
> 
> Diff: https://reviews.apache.org/r/24704/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



[jira] [Commented] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-02-27 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341089#comment-14341089
 ] 

Neha Narkhede commented on KAFKA-1824:
--

[~gwenshap] Still doesn't apply. Not sure if I'm doing something incorrectly:
{code}
nnarkhed-mn1:kafka nnarkhed$ git apply --check 
~/Projects/kafka-patches/1824.patch 
error: patch failed: core/src/main/scala/kafka/tools/ConsoleProducer.scala:36
error: core/src/main/scala/kafka/tools/ConsoleProducer.scala: patch does not 
apply
error: patch failed: core/src/main/scala/kafka/tools/ConsoleProducer.scala:34
error: core/src/main/scala/kafka/tools/ConsoleProducer.scala: patch does not 
apply
error: core/src/test/scala/kafka/tools/ConsoleProducerTest.scala: already 
exists in working directory
error: patch failed: core/src/main/scala/kafka/tools/ConsoleProducer.scala:76
error: core/src/main/scala/kafka/tools/ConsoleProducer.scala: patch does not 
apply
error: patch failed: core/src/main/scala/kafka/tools/ConsoleProducer.scala:266
error: core/src/main/scala/kafka/tools/ConsoleProducer.scala: patch does not 
apply
{code}

> in ConsoleProducer - properties key.separator and parse.key no longer work
> --
>
> Key: KAFKA-1824
> URL: https://issues.apache.org/jira/browse/KAFKA-1824
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
> KAFKA-1824_2014-12-22_16:17:42.patch, KAFKA-1824_2015-02-26_23:05:10.patch
>
>
> Looks like the change in kafka-1711 breaks them accidentally.
> reader.init is called with readerProps which is initialized with commandline 
> properties as defaults.
> the problem is that reader.init checks:
> if(props.containsKey("parse.key"))
> and defaults don't return true in this case.
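The behavior described above is standard `java.util.Properties`: defaults passed to the constructor are consulted by `getProperty`, but `containsKey` (inherited from `Hashtable`) only sees entries stored in the table itself. A minimal reproduction:

```java
import java.util.Properties;

public class PropsDefaultsDemo {
    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("parse.key", "true");

        // Passing defaults to the constructor does NOT copy them into the table.
        Properties props = new Properties(defaults);

        // containsKey checks only the table itself...
        System.out.println(props.containsKey("parse.key"));   // false
        // ...while getProperty falls through to the defaults.
        System.out.println(props.getProperty("parse.key"));   // true
    }
}
```

This is why a check like `if(props.containsKey("parse.key"))` silently skips properties that were supplied only as defaults; testing `props.getProperty("parse.key") != null` avoids the trap.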



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-02-27 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1824:

Attachment: KAFKA-1824.v1.patch

I'll blame the patch-tool.

I generated a new patch with "git diff" and tested it on trunk. Seems to work :)

> in ConsoleProducer - properties key.separator and parse.key no longer work
> --
>
> Key: KAFKA-1824
> URL: https://issues.apache.org/jira/browse/KAFKA-1824
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1824.patch, KAFKA-1824.patch, KAFKA-1824.v1.patch, 
> KAFKA-1824_2014-12-22_16:17:42.patch, KAFKA-1824_2015-02-26_23:05:10.patch
>
>
> Looks like the change in kafka-1711 breaks them accidentally.
> reader.init is called with readerProps which is initialized with commandline 
> properties as defaults.
> the problem is that reader.init checks:
> if(props.containsKey("parse.key"))
> and defaults don't return true in this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: KafkaPreCommit #25

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[neha.narkhede] KAFKA-1866 LogStartOffset gauge throws exceptions after 
log.delete(); reviewed by Neha Narkhede

--
[...truncated 560 lines...]
kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithCleaner PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.test.ProducerSendTest > testSendOffset PASSED

kafka.api.test.ProducerSendTest > testSerializer PASSED

kafka.api.test.ProducerSendTest > testClose PASSED

kafka.api.test.ProducerSendTest > testSendToPartition PASSED

kafka.api.test.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.test.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.test.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.test.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.test.ProducerFailureHandlingTest > testBrokerFailure PASSED

kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic 
PASSED

kafka.api.test.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown FAILED
org.scalatest.junit.JUnitTestFailedError: Expected 
NotEnoughReplicasException when producing to topic with fewer brokers than 
min.insync.replicas
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
at 
org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
at org.scalatest.Assertions$class.fail(Assertions.scala:711)
at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
at 
kafka.api.test.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:352)

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.RollingBoun

Build failed in Jenkins: Kafka-trunk #412

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[neha.narkhede] KAFKA-1866 LogStartOffset gauge throws exceptions after 
log.delete(); reviewed by Neha Narkhede

--
[...truncated 534 lines...]
kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.test.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.test.ProducerSendTest > testSendOffset PASSED

kafka.api.test.ProducerSendTest > testSerializer PASSED

kafka.api.test.ProducerSendTest > testClose PASSED

kafka.api.test.ProducerSendTest > testSendToPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.test.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero 
PASSED

kafka.api.test.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.test.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.a

[jira] [Commented] (KAFKA-1910) Refactor KafkaConsumer

2015-02-27 Thread Guozhang Wang (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341201#comment-14341201 ]

Guozhang Wang commented on KAFKA-1910:
--

Jay,

I found that my fix for KAFKA-1948 was incomplete while trying to upload the 
RB. I will update it with the newest version ASAP.

> Refactor KafkaConsumer
> --
>
> Key: KAFKA-1910
> URL: https://issues.apache.org/jira/browse/KAFKA-1910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> KafkaConsumer now contains all the consumer-side logic, making it a very 
> large class file; it would be better to refactor it into multiple layers on 
> top of KafkaClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1987) Potential race condition in partition creation

2015-02-27 Thread Neha Narkhede (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-1987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341226#comment-14341226 ]

Neha Narkhede commented on KAFKA-1987:
--

+1

> Potential race condition in partition creation
> --
>
> Key: KAFKA-1987
> URL: https://issues.apache.org/jira/browse/KAFKA-1987
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1.1
>Reporter: Todd Palino
>Assignee: Neha Narkhede
>
> I am finding that there appears to be a race condition when creating 
> partitions, with replication factor 2 or higher, between the creation of the 
> partition on the leader and the follower. What appears to be happening is 
> that the follower is processing the command to create the partition before 
> the leader does, and when the follower starts the replica fetcher, it fails 
> with an UnknownTopicOrPartitionException.
> The situation is that I am creating a large number of partitions on a 
> cluster, preparing it for data being mirrored from another cluster. So there 
> are a sizeable number of create and alter commands being sent sequentially. 
> Eventually, the replica fetchers start up properly. But it seems like the 
> controller should issue the command to create the partition to the leader, 
> wait for confirmation, and then issue the command to create the partition to 
> the followers.
> 2015/02/26 21:11:50.413 INFO [LogManager] [kafka-request-handler-12] 
> [kafka-server] [] Created log for partition [topicA,30] in 
> /path_to/i001_caches with properties {segment.index.bytes -> 10485760, 
> file.delete.delay.ms -> 6, segment.bytes -> 268435456, flush.ms -> 1, 
> delete.retention.ms -> 8640, index.interval.bytes -> 4096, 
> retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, 
> unclean.leader.election.enable -> true, segment.ms -> 4320, 
> max.message.bytes -> 100, flush.messages -> 2, 
> min.cleanable.dirty.ratio -> 0.5, retention.ms -> 8640, segment.jitter.ms 
> -> 0}.
> 2015/02/26 21:11:50.418 WARN [Partition] [kafka-request-handler-12] 
> [kafka-server] [] Partition [topicA,30] on broker 1551: No checkpointed 
> highwatermark is found for partition [topicA,30]
> 2015/02/26 21:11:50.418 INFO [ReplicaFetcherManager] 
> [kafka-request-handler-12] [kafka-server] [] [ReplicaFetcherManager on broker 
> 1551] Removed fetcher for partitions [topicA,30]
> 2015/02/26 21:11:50.418 INFO [Log] [kafka-request-handler-12] [kafka-server] 
> [] Truncating log topicA-30 to offset 0.
> 2015/02/26 21:11:50.450 INFO [ReplicaFetcherManager] 
> [kafka-request-handler-12] [kafka-server] [] [ReplicaFetcherManager on broker 
> 1551] Added fetcher for partitions List([[topicA,30], initOffset 0 to broker 
> id:1555,host:host1555.example.com,port:10251] )
> 2015/02/26 21:11:50.615 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-0-1555] [kafka-server] [] 
> [ReplicaFetcherThread-0-1555], Error for partition [topicA,30] to broker 
> 1555:class kafka.common.UnknownTopicOrPartitionException
> 2015/02/26 21:11:50.616 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-0-1555] [kafka-server] [] 
> [ReplicaFetcherThread-0-1555], Error for partition [topicA,30] to broker 
> 1555:class kafka.common.UnknownTopicOrPartitionException
> 2015/02/26 21:11:50.618 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-0-1555] [kafka-server] [] 
> [ReplicaFetcherThread-0-1555], Error for partition [topicA,30] to broker 
> 1555:class kafka.common.UnknownTopicOrPartitionException
> 2015/02/26 21:11:50.620 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-0-1555] [kafka-server] [] 
> [ReplicaFetcherThread-0-1555], Error for partition [topicA,30] to broker 
> 1555:class kafka.common.UnknownTopicOrPartitionException
> 2015/02/26 21:11:50.621 ERROR [ReplicaFetcherThread] 
> [ReplicaFetcherThread-0-1555] [kafka-server] [] 
> [ReplicaFetcherThread-0-1555], Error for partition [topicA,30] to broker 
> 1555:class kafka.common.UnknownTopicOrPartitionException
> 2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
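The sequencing the issue proposes (create the partition on the leader, wait for its acknowledgement, and only then tell the followers to start fetching) can be sketched as a toy model. All names here (Broker, createPartitionOrdered) are hypothetical illustrations, not the real controller API, which uses LeaderAndIsr requests.

```java
import java.util.List;

/** Toy model of the proposed ordering: leader first, ack, then followers. */
public class OrderedCreateDemo {

    /** Hypothetical stand-in for a broker's partition state. */
    static class Broker {
        final int id;
        private final java.util.Set<String> partitions = new java.util.HashSet<>();
        Broker(int id) { this.id = id; }
        boolean createPartition(String tp) { return partitions.add(tp); }
        boolean hasPartition(String tp) { return partitions.contains(tp); }
    }

    /**
     * Create the partition on the leader and wait for its confirmation
     * before any follower starts a replica fetcher, so the follower's
     * first fetch cannot hit UnknownTopicOrPartitionException.
     */
    static boolean createPartitionOrdered(String tp, Broker leader, List<Broker> followers) {
        if (!leader.createPartition(tp)) {
            return false;  // leader did not acknowledge; do not touch followers
        }
        for (Broker f : followers) {
            f.createPartition(tp);
            // At this point the leader is guaranteed to know the partition.
        }
        return true;
    }

    public static void main(String[] args) {
        Broker leader = new Broker(1551);
        List<Broker> followers = List.of(new Broker(1555));
        boolean ok = createPartitionOrdered("topicA-30", leader, followers);
        System.out.println(ok && followers.get(0).hasPartition("topicA-30"));  // prints true
    }
}
```

The cost of this ordering is an extra round trip per partition creation, which matters when a large batch of create/alter commands is sent sequentially, as in the reporter's scenario.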


[jira] [Commented] (KAFKA-1987) Potential race condition in partition creation

2015-02-27 Thread Jiangjie Qin (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-1987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341342#comment-14341342 ]

Jiangjie Qin commented on KAFKA-1987:
-

I agree with Joel that we can probably leave it as is. Imagine a cluster that 
has just come up: it is still possible for a follower to come up before the 
leader does. I remember we've handled this case.

> Potential race condition in partition creation
> --
>
> Key: KAFKA-1987
> URL: https://issues.apache.org/jira/browse/KAFKA-1987
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1.1
>Reporter: Todd Palino
>Assignee: Neha Narkhede
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)