[jira] [Commented] (KAFKA-291) Add builder to create configs for consumer and broker
[ https://issues.apache.org/jira/browse/KAFKA-291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859341#comment-13859341 ]

Swapnil Ghike commented on KAFKA-291:
-------------------------------------

Not sure if the ConfigBuilder is going to reduce complexity. Strings like "localhost:2181" will generally not be hardcoded in Java code; they will be passed in from some config map, and passing config values that way has the same caveats.

> Add builder to create configs for consumer and broker
> -----------------------------------------------------
>
>                 Key: KAFKA-291
>                 URL: https://issues.apache.org/jira/browse/KAFKA-291
>             Project: Kafka
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 0.7
>            Reporter: John Wang
>         Attachments: builderPatch.diff
>
> Creating a Consumer or Producer can be cumbersome because you have to
> remember the exact string for each property to be set. And since these are
> just strings, IDEs cannot really help. This patch contains builders that
> help with this.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
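For context, the builder idea under discussion would look roughly like the sketch below. The class and method names are hypothetical illustrations, not the API from builderPatch.diff; only the property strings ("zookeeper.connect", "group.id") are real Kafka config keys. The builder buys IDE-discoverable, typo-proof property *names*, but, as the comment points out, the property *values* still typically arrive from an external config map.

```java
import java.util.Properties;

// Hypothetical sketch of a consumer-config builder; names are illustrative,
// not those in builderPatch.diff.
class ConsumerConfigBuilder {
    private final Properties props = new Properties();

    // IDE-discoverable setter instead of the raw string "zookeeper.connect".
    ConsumerConfigBuilder zookeeperConnect(String connectString) {
        props.setProperty("zookeeper.connect", connectString);
        return this;
    }

    ConsumerConfigBuilder groupId(String groupId) {
        props.setProperty("group.id", groupId);
        return this;
    }

    Properties build() {
        return props;
    }
}
```

A caller would write `new ConsumerConfigBuilder().zookeeperConnect(cfg.get("zk")).groupId(cfg.get("group")).build()`; note the values still come from `cfg`, which is the comment's caveat.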
Re: Review Request 16022: OfflinePartitionCount in JMX can be incorrect during controlled shutdown
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16022/#review29776
-----------------------------------------------------------

Ship it!

Ship It!

- Swapnil Ghike

On Dec. 4, 2013, 11:42 p.m., Jun Rao wrote:
> (Updated Dec. 4, 2013, 11:42 p.m.)
>
> Review request for kafka.
>
> Bugs: KAFKA-1168
>     https://issues.apache.org/jira/browse/KAFKA-1168
>
> Repository: kafka
>
> Description
> -----------
>
> kafka-1168
>
> Diffs
> -----
>
>   core/src/main/scala/kafka/controller/KafkaController.scala a1e0f2978d2825b58746987c523e70b51b81d289
>
> Diff: https://reviews.apache.org/r/16022/diff/
>
> Testing
> -------
>
> Thanks,
> Jun Rao
[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-1152:
---------------------------------
    Attachment: incremental.patch

Attached a patch to fix a logging statement.

> ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
> ------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-1152
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1152
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8
>            Reporter: Swapnil Ghike
>            Assignee: Swapnil Ghike
>             Fix For: 0.8.1
>         Attachments: KAFKA-1152.patch, KAFKA-1152_2013-11-28_10:19:05.patch, KAFKA-1152_2013-11-28_22:40:55.patch, incremental.patch
>
> If a partition is created with replication factor 1, then the controller can
> set the partition's leader to -1 in the leaderAndIsrRequest when the only
> replica of the partition is being bounced.
>
> The handling of a request with leader == -1 throws an exception in the
> ReplicaManager, which prevents the addition of fetchers for the remaining
> partitions in the leaderAndIsrRequest.
>
> After the replica is bounced, the replica first receives a
> leaderAndIsrRequest with leader == -1, then it receives another
> leaderAndIsrRequest with the correct leader (which is the replica itself)
> due to the OfflinePartition to OnlinePartition state change.
>
> In handling the first request, the ReplicaManager should ignore the
> partition for which the request has leader == -1 and continue adding
> fetchers for the remaining partitions. The next leaderAndIsrRequest will
> take care of setting the correct leader for that partition.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
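The fix proposed in the KAFKA-1152 description (skip any partition whose leader is -1 and keep adding fetchers for the rest, instead of letting one bad entry abort the whole request) can be sketched as follows. This is a minimal illustration with hypothetical method and type names, not the actual ReplicaManager code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the proposed handling: partitions with leader == -1
// are skipped instead of throwing and aborting the whole leaderAndIsrRequest.
class LeaderAndIsrHandler {
    static final int NO_LEADER = -1;

    // Returns the partitions for which a fetcher should be added,
    // ignoring entries whose leader is -1.
    static Map<String, Integer> partitionsToFetch(Map<String, Integer> leaderByPartition) {
        Map<String, Integer> result = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : leaderByPartition.entrySet()) {
            if (e.getValue() != NO_LEADER) {   // graceful skip, not an exception
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }
}
```

The skipped partition is left for the next leaderAndIsrRequest, which (per the description) carries the correct leader after the OfflinePartition to OnlinePartition state change.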
Re: Review Request 15901: Patch for KAFKA-1152
> On Dec. 2, 2013, 5:06 p.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/server/ReplicaManager.scala, line 358
> > <https://reviews.apache.org/r/15901/diff/3/?file=392523#file392523line358>
> >
> > The check should probably be leaderId >= 0. The "leaders" in the
> > LeaderAndIsrRequest is misleading, cannot be trusted and needs to be
> > deprecated.

On the controller, leaders exclude shutdown brokers:

    val leaders = controllerContext.liveOrShuttingDownBrokers.filter(b => leaderIds.contains(b.id))

On the broker, should we not check whether the leader that it is being asked to follow is alive or not?

- Swapnil

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15901/#review29588
-----------------------------------------------------------

On Nov. 29, 2013, 6:41 a.m., Swapnil Ghike wrote:
> (Updated Nov. 29, 2013, 6:41 a.m.)
>
> Review request for kafka.
>
> Bugs: KAFKA-1152
>     https://issues.apache.org/jira/browse/KAFKA-1152
>
> Repository: kafka
>
> Description
> -----------
>
> ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
>
> Diffs
> -----
>
>   core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c
>
> Diff: https://reviews.apache.org/r/15901/diff/
>
> Testing
> -------
>
> Builds with all scala versions; unit tests pass
>
> Thanks,
> Swapnil Ghike
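The controller-side Scala one-liner quoted above keeps only those leaders that correspond to live (or shutting-down) brokers. A hedged Java rendering of that filter, with brokers simplified to their ids (the method and class names are illustrative, not Kafka code):

```java
import java.util.Set;
import java.util.stream.Collectors;

// Simplified rendering of the controller's filter:
//   liveOrShuttingDownBrokers.filter(b => leaderIds.contains(b.id))
class LeaderFilter {
    // Keeps only leader ids that belong to a live or shutting-down broker.
    static Set<Integer> liveLeaders(Set<Integer> liveOrShuttingDownBrokerIds,
                                    Set<Integer> leaderIds) {
        return liveOrShuttingDownBrokerIds.stream()
                .filter(leaderIds::contains)
                .collect(Collectors.toSet());
    }
}
```

This is exactly why a dead leader may simply be absent from the request, which is the motivation for the broker-side liveness question raised in the reply.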
[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-1152:
---------------------------------
    Attachment: (was: KAFKA-1152.patch)
[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835205#comment-13835205 ]

Swapnil Ghike commented on KAFKA-1152:
--------------------------------------

Updated reviewboard https://reviews.apache.org/r/15901/ against branch origin/trunk
[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-1152:
---------------------------------
    Attachment: KAFKA-1152_2013-11-28_22:40:55.patch
Re: Review Request 15901: Patch for KAFKA-1152
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15901/
-----------------------------------------------------------

(Updated Nov. 29, 2013, 6:41 a.m.)

Review request for kafka.

Bugs: KAFKA-1152
    https://issues.apache.org/jira/browse/KAFKA-1152

Repository: kafka

Description (updated)
---------------------

ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

Diffs (updated)
---------------

  core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c

Diff: https://reviews.apache.org/r/15901/diff/

Testing
-------

Builds with all scala versions; unit tests pass

Thanks,

Swapnil Ghike
[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-1152:
---------------------------------
    Attachment: KAFKA-1152.patch
[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835198#comment-13835198 ]

Swapnil Ghike commented on KAFKA-1152:
--------------------------------------

Created reviewboard https://reviews.apache.org/r/15915/ against branch origin/trunk
Review Request 15915: Patch for KAFKA-1152
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15915/
-----------------------------------------------------------

Review request for kafka.

Bugs: KAFKA-1152
    https://issues.apache.org/jira/browse/KAFKA-1152

Repository: kafka

Description
-----------

ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

Diffs
-----

  core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c

Diff: https://reviews.apache.org/r/15915/diff/

Testing
-------

Thanks,

Swapnil Ghike
[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835022#comment-13835022 ]

Swapnil Ghike commented on KAFKA-1152:
--------------------------------------

Updated reviewboard https://reviews.apache.org/r/15901/ against branch origin/trunk
[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-1152:
---------------------------------
    Attachment: KAFKA-1152_2013-11-28_10:19:05.patch
Re: Review Request 15901: Patch for KAFKA-1152
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15901/
-----------------------------------------------------------

(Updated Nov. 28, 2013, 6:19 p.m.)

Review request for kafka.

Summary (updated)
-----------------

Patch for KAFKA-1152

Bugs: KAFKA-1152
    https://issues.apache.org/jira/browse/KAFKA-1152

Repository: kafka

Description
-----------

ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

Diffs (updated)
---------------

  core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c

Diff: https://reviews.apache.org/r/15901/diff/

Testing
-------

Builds with all scala versions; unit tests pass

Thanks,

Swapnil Ghike
Re: Review Request 15901: ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15901/
-----------------------------------------------------------

(Updated Nov. 28, 2013, 6:36 a.m.)

Review request for kafka.

Bugs: KAFKA-1152
    https://issues.apache.org/jira/browse/KAFKA-1152

Repository: kafka

Description
-----------

ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

Diffs
-----

  core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c

Diff: https://reviews.apache.org/r/15901/diff/

Testing (updated)
-----------------

Builds with all scala versions; unit tests pass

Thanks,

Swapnil Ghike
[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-1152:
---------------------------------
    Attachment: KAFKA-1152.patch
[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
[ https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834544#comment-13834544 ]

Swapnil Ghike commented on KAFKA-1152:
--------------------------------------

Created reviewboard https://reviews.apache.org/r/15901/ against branch origin/trunk
Review Request 15901: ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15901/
-----------------------------------------------------------

Review request for kafka.

Bugs: KAFKA-1152
    https://issues.apache.org/jira/browse/KAFKA-1152

Repository: kafka

Description
-----------

ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

Diffs
-----

  core/src/main/scala/kafka/server/ReplicaManager.scala 161f58134f20f9335dbd2bee6ac3f71897cbef7c

Diff: https://reviews.apache.org/r/15901/diff/

Testing
-------

Builds

Thanks,

Swapnil Ghike
[jira] [Created] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
Swapnil Ghike created KAFKA-1152:
---------------------------------

             Summary: ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1
                 Key: KAFKA-1152
                 URL: https://issues.apache.org/jira/browse/KAFKA-1152
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 0.8
            Reporter: Swapnil Ghike
            Assignee: Swapnil Ghike
             Fix For: 0.8.1
[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832040#comment-13832040 ]

Swapnil Ghike commented on KAFKA-1135:
--------------------------------------

[~jjkoshy], does the above issue look similar to KAFKA-1142?

> Code cleanup - use Json.encode() to write json data to zk
> ----------------------------------------------------------
>
>                 Key: KAFKA-1135
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1135
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Swapnil Ghike
>            Assignee: Swapnil Ghike
>             Fix For: 0.8.1
>         Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, KAFKA-1135_2013-11-18_19:20:58.patch
[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832037#comment-13832037 ]

Swapnil Ghike commented on KAFKA-1135:
--------------------------------------

Thanks for catching this, David! Jun, it seems that the diff in the reviewboard and what got attached to this JIRA are different. Can you please revert commit 9b0776d157afd9eacddb84a99f2420fa9c0d505b, download the diff from the reviewboard and commit it?
Re: Review Request 15711: Patch for KAFKA-930
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15711/#review29207
-----------------------------------------------------------

core/src/main/scala/kafka/controller/KafkaController.scala
<https://reviews.apache.org/r/15711/#comment56354>

    Should we always delete the admin path? If auto rebalance has already achieved leader balance, then the manual rebalance has no work to do anyway.

core/src/main/scala/kafka/controller/KafkaController.scala
<https://reviews.apache.org/r/15711/#comment56356>

    Rename to allReplicasForTopicPartitionsPerBroker? (I saw the "per" convention used somewhere else.)

core/src/main/scala/kafka/controller/KafkaController.scala
<https://reviews.apache.org/r/15711/#comment56355>

    Rename to topicPartitionsNotLedByPreferredReplica?

core/src/main/scala/kafka/controller/KafkaController.scala
<https://reviews.apache.org/r/15711/#comment56353>

    We should be able to pass the entire set of partitions in one call, right?

core/src/main/scala/kafka/server/KafkaConfig.scala
<https://reviews.apache.org/r/15711/#comment56357>

    Would it be simpler to have a per-cluster config instead of a per-broker config? I can't think of any downsides.

- Swapnil Ghike

On Nov. 20, 2013, 1:38 a.m., Sriram Subramanian wrote:
> (Updated Nov. 20, 2013, 1:38 a.m.)
>
> Review request for kafka.
>
> Bugs: KAFKA-930
>     https://issues.apache.org/jira/browse/KAFKA-930
>
> Repository: kafka
>
> Description
> -----------
>
> commit missing code
>
> some more changes
>
> fix merge conflicts
>
> Add auto leader rebalance support
>
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into trunk
>
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into trunk
>
> Conflicts:
>     core/src/main/scala/kafka/admin/AdminUtils.scala
>     core/src/main/scala/kafka/admin/TopicCommand.scala
>
> change comments
>
> commit the remaining changes
>
> Move AddPartitions into TopicCommand
>
> Diffs
> -----
>
>   core/src/main/scala/kafka/controller/KafkaController.scala 88792c2b2a360e928ab9cd00de151e5d5f94452d
>   core/src/main/scala/kafka/server/KafkaConfig.scala b324344d0a383398db8bfe2cbeec2c1378fe13c9
>
> Diff: https://reviews.apache.org/r/15711/diff/
>
> Testing
> -------
>
> Thanks,
> Sriram Subramanian
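The quantity the review suggests renaming (topicPartitionsNotLedByPreferredReplica) is the set of partitions whose current leader differs from the preferred replica, i.e. the first replica in the partition's assigned replica list. A minimal sketch of that computation, with hypothetical class and parameter names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: a partition is "not led by the preferred replica" when its current
// leader differs from the head of its assigned replica list.
class RebalanceCheck {
    static List<String> notLedByPreferredReplica(
            Map<String, List<Integer>> assignment,   // partition -> replica list
            Map<String, Integer> currentLeader) {    // partition -> leader id
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, List<Integer>> e : assignment.entrySet()) {
            int preferred = e.getValue().get(0);     // preferred = first replica
            Integer leader = currentLeader.get(e.getKey());
            if (leader == null || leader != preferred) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```

Auto leader rebalance would then trigger a preferred replica election for the partitions this returns, which is why a manual rebalance may find "no work to do" after auto rebalance has run.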
[jira] [Commented] (KAFKA-1117) tool for checking the consistency among replicas
[ https://issues.apache.org/jira/browse/KAFKA-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828146#comment-13828146 ]

Swapnil Ghike commented on KAFKA-1117:
--------------------------------------

Hey Jun, after committing this patch, builds with scala 2.10.* are breaking. Could you please take a look?

[error] /home/sghike/kafka-server/kafka-server_trunk/kafka/core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala:364: ambiguous reference to overloaded definition,
[error] both constructor FetchResponsePartitionData in class FetchResponsePartitionData of type (messages: kafka.message.MessageSet)kafka.api.FetchResponsePartitionData
[error] and constructor FetchResponsePartitionData in class FetchResponsePartitionData of type (error: Short, hw: Long, messages: kafka.message.MessageSet)kafka.api.FetchResponsePartitionData
[error] match argument types (messages: kafka.message.ByteBufferMessageSet) and expected result type kafka.api.FetchResponsePartitionData
[error]     replicaBuffer.addFetchedData(topicAndPartition, sourceBroker.id, new FetchResponsePartitionData(messages = MessageSet.Empty))

> tool for checking the consistency among replicas
> -------------------------------------------------
>
>                 Key: KAFKA-1117
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1117
>             Project: Kafka
>          Issue Type: New Feature
>          Components: core
>    Affects Versions: 0.8.1
>            Reporter: Jun Rao
>            Assignee: Jun Rao
>             Fix For: 0.8.1
>         Attachments: KAFKA-1117.patch, KAFKA-1117_2013-11-11_08:44:25.patch, KAFKA-1117_2013-11-12_08:34:53.patch, KAFKA-1117_2013-11-14_08:24:41.patch, KAFKA-1117_2013-11-18_09:58:23.patch
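For readers unfamiliar with the error: Scala 2.10 considers both constructors applicable to a named-argument call that supplies only `messages`, so the reference is ambiguous. The usual fix is to invoke the full-argument constructor explicitly. A hedged Java analogue of that disambiguated call shape is below; the class and field names are hypothetical stand-ins for `FetchResponsePartitionData`, and Java itself would not report the ambiguity, so only the fix pattern is illustrated.

```java
// Hypothetical stand-in for kafka.api.FetchResponsePartitionData.
class PartitionData {
    final short error;
    final long hw;
    final String messages;   // stands in for MessageSet

    // Convenience constructor that the Scala 2.10 compiler found ambiguous
    // when called with a named `messages` argument.
    PartitionData(String messages) { this((short) 0, -1L, messages); }

    PartitionData(short error, long hw, String messages) {
        this.error = error;
        this.hw = hw;
        this.messages = messages;
    }
}

class Demo {
    static PartitionData emptyData() {
        // The fix: supply every argument explicitly so only one
        // constructor can apply.
        return new PartitionData((short) 0, -1L, "empty");
    }
}
```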
Re: Review Request 15665: Patch for KAFKA-1135
> On Nov. 19, 2013, 5:21 p.m., Guozhang Wang wrote: > > core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala, line 52 > > <https://reviews.apache.org/r/15665/diff/3/?file=388317#file388317line52> > > > > Can we make the version number a global variable, so that when we > > upgrade in the future we only need to upgrade in once place? > > Swapnil Ghike wrote: > My understanding was that the code may evolve to deal with situations > wherein we have some zookeeper paths that are on version n, and others are on > version n' and some more are on version n''. To make this explicit, I wonder > if it makes sense to let each zookeeper path have its own version value and > not put any global value that everyone else refers to. > > But I could be wrong, comments? > > Guozhang Wang wrote: > Probably I did not make it clear, I was suggesting we put a > currentVersion in the kafka code, and use this variable for all writes in ZK, > for reads from ZK, we may need a swtich/case for different versions. How does > that sound? I see. Let's say we change the zookeeper data version for one path and update the currentVersion. This will update the version of all other data in zookeeper even though the format for other data did not really change. I would like to avoid this, but what do you think? - Swapnil --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15665/#review29116 --- On Nov. 19, 2013, 3:21 a.m., Swapnil Ghike wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15665/ > --- > > (Updated Nov. 19, 2013, 3:21 a.m.) > > > Review request for kafka. 
> > > Bugs: KAFKA-1135 > https://issues.apache.org/jira/browse/KAFKA-1135 > > > Repository: kafka > > > Description > --- > > iteration 2 > > > json.encode > > > Diffs > - > > core/src/main/scala/kafka/admin/AdminUtils.scala > 8ff4bd5a5f6ea1a51df926c31155251bcc109238 > core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala > 26beb9698422ceb6cc682b86913b4f9d2d4f1307 > core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala > 981d2bbecf2fa11f1d2c423535c7c30851d2d7bb > core/src/main/scala/kafka/consumer/TopicCount.scala > a3eb53e8262115d1184cd1c7a2b47f21c22c077b > core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala > c0350cd05cf1b59866a1fedccbeb700b3e828d44 > core/src/main/scala/kafka/controller/KafkaController.scala > 88792c2b2a360e928ab9cd00de151e5d5f94452d > core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala > 33b73609b1178c56e692fb60e35aca04ad1af586 > core/src/main/scala/kafka/utils/Utils.scala > c9ca95f1937d0ef2e64c70e4d811a0d4f358d9db > core/src/main/scala/kafka/utils/ZkUtils.scala > 856d13605b0b4bf86010571eacbacc0fb0ba7950 > > Diff: https://reviews.apache.org/r/15665/diff/ > > > Testing > --- > > Verified that zookeeper data looks like the structures defined in > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper > > > Thanks, > > Swapnil Ghike > >
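The two versioning schemes being weighed in the thread above can be sketched side by side (all names and paths here are hypothetical, for illustration only):

```scala
// Option A (per-path, what Swapnil leans toward): each zookeeper path
// carries its own format version, bumped independently of the others.
object ZkDataVersions {
  val LeaderAndIsr = 1 // e.g. /brokers/topics/<t>/partitions/<p>/state
  val BrokerInfo   = 1 // e.g. /brokers/ids/<id>
  val TopicConfig  = 1 // e.g. /config/topics/<t>
}

// Option B (global, what Guozhang suggests): one currentVersion stamped on
// every write; reads switch on the version found in the znode.
val currentVersion = 1

// Under option B, changing only the leader-and-isr format to version 2 would
// also bump the version written for broker info and topic config, even though
// those formats did not change -- the coupling Swapnil wants to avoid.
```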
Re: Review Request 15665: Patch for KAFKA-1135
> On Nov. 19, 2013, 5:21 p.m., Guozhang Wang wrote: > > core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala, line 52 > > <https://reviews.apache.org/r/15665/diff/3/?file=388317#file388317line52> > > > > Can we make the version number a global variable, so that when we > > upgrade in the future we only need to upgrade in one place? My understanding was that the code may evolve to deal with situations wherein we have some zookeeper paths that are on version n, and others are on version n' and some more are on version n''. To make this explicit, I wonder if it makes sense to let each zookeeper path have its own version value and not put any global value that everyone else refers to. But I could be wrong, comments? - Swapnil --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15665/#review29116 ------- On Nov. 19, 2013, 3:21 a.m., Swapnil Ghike wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15665/ > --- > > (Updated Nov. 19, 2013, 3:21 a.m.) > > > Review request for kafka. 
> > > Bugs: KAFKA-1135 > https://issues.apache.org/jira/browse/KAFKA-1135 > > > Repository: kafka > > > Description > --- > > iteration 2 > > > json.encode > > > Diffs > - > > core/src/main/scala/kafka/admin/AdminUtils.scala > 8ff4bd5a5f6ea1a51df926c31155251bcc109238 > core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala > 26beb9698422ceb6cc682b86913b4f9d2d4f1307 > core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala > 981d2bbecf2fa11f1d2c423535c7c30851d2d7bb > core/src/main/scala/kafka/consumer/TopicCount.scala > a3eb53e8262115d1184cd1c7a2b47f21c22c077b > core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala > c0350cd05cf1b59866a1fedccbeb700b3e828d44 > core/src/main/scala/kafka/controller/KafkaController.scala > 88792c2b2a360e928ab9cd00de151e5d5f94452d > core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala > 33b73609b1178c56e692fb60e35aca04ad1af586 > core/src/main/scala/kafka/utils/Utils.scala > c9ca95f1937d0ef2e64c70e4d811a0d4f358d9db > core/src/main/scala/kafka/utils/ZkUtils.scala > 856d13605b0b4bf86010571eacbacc0fb0ba7950 > > Diff: https://reviews.apache.org/r/15665/diff/ > > > Testing > --- > > Verified that zookeeper data looks like the structures defined in > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper > > > Thanks, > > Swapnil Ghike > >
[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826133#comment-13826133 ] Swapnil Ghike commented on KAFKA-1135: -- Updated reviewboard https://reviews.apache.org/r/15665/ against branch origin/trunk > Code cleanup - use Json.encode() to write json data to zk > - > > Key: KAFKA-1135 > URL: https://issues.apache.org/jira/browse/KAFKA-1135 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike >Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, > KAFKA-1135_2013-11-18_19:20:58.patch > > -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1135: - Attachment: KAFKA-1135_2013-11-18_19:20:58.patch > Code cleanup - use Json.encode() to write json data to zk > - > > Key: KAFKA-1135 > URL: https://issues.apache.org/jira/browse/KAFKA-1135 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, > KAFKA-1135_2013-11-18_19:20:58.patch > > -- This message was sent by Atlassian JIRA (v6.1#6144)
Re: Review Request 15665: Patch for KAFKA-1135
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15665/ --- (Updated Nov. 19, 2013, 3:21 a.m.) Review request for kafka. Bugs: KAFKA-1135 https://issues.apache.org/jira/browse/KAFKA-1135 Repository: kafka Description (updated) --- iteration 2 json.encode Diffs (updated) - core/src/main/scala/kafka/admin/AdminUtils.scala 8ff4bd5a5f6ea1a51df926c31155251bcc109238 core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala 26beb9698422ceb6cc682b86913b4f9d2d4f1307 core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 981d2bbecf2fa11f1d2c423535c7c30851d2d7bb core/src/main/scala/kafka/consumer/TopicCount.scala a3eb53e8262115d1184cd1c7a2b47f21c22c077b core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala c0350cd05cf1b59866a1fedccbeb700b3e828d44 core/src/main/scala/kafka/controller/KafkaController.scala 88792c2b2a360e928ab9cd00de151e5d5f94452d core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala 33b73609b1178c56e692fb60e35aca04ad1af586 core/src/main/scala/kafka/utils/Utils.scala c9ca95f1937d0ef2e64c70e4d811a0d4f358d9db core/src/main/scala/kafka/utils/ZkUtils.scala 856d13605b0b4bf86010571eacbacc0fb0ba7950 Diff: https://reviews.apache.org/r/15665/diff/ Testing --- Verified that zookeeper data looks like the structures defined in https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper Thanks, Swapnil Ghike
[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826128#comment-13826128 ] Swapnil Ghike commented on KAFKA-1135: -- Updated reviewboard https://reviews.apache.org/r/15665/ against branch origin/trunk > Code cleanup - use Json.encode() to write json data to zk > - > > Key: KAFKA-1135 > URL: https://issues.apache.org/jira/browse/KAFKA-1135 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike >Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch > > -- This message was sent by Atlassian JIRA (v6.1#6144)
Re: Review Request 15665: Patch for KAFKA-1135
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15665/ --- (Updated Nov. 19, 2013, 3:17 a.m.) Review request for kafka. Summary (updated) - Patch for KAFKA-1135 Bugs: KAFKA-1135 https://issues.apache.org/jira/browse/KAFKA-1135 Repository: kafka Description --- json.encode Diffs (updated) - core/src/main/scala/kafka/admin/AdminUtils.scala 8ff4bd5a5f6ea1a51df926c31155251bcc109238 core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala 26beb9698422ceb6cc682b86913b4f9d2d4f1307 core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 981d2bbecf2fa11f1d2c423535c7c30851d2d7bb core/src/main/scala/kafka/consumer/TopicCount.scala a3eb53e8262115d1184cd1c7a2b47f21c22c077b core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala c0350cd05cf1b59866a1fedccbeb700b3e828d44 core/src/main/scala/kafka/controller/KafkaController.scala 88792c2b2a360e928ab9cd00de151e5d5f94452d core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala 33b73609b1178c56e692fb60e35aca04ad1af586 core/src/main/scala/kafka/utils/Utils.scala c9ca95f1937d0ef2e64c70e4d811a0d4f358d9db core/src/main/scala/kafka/utils/ZkUtils.scala 856d13605b0b4bf86010571eacbacc0fb0ba7950 Diff: https://reviews.apache.org/r/15665/diff/ Testing --- Verified that zookeeper data looks like the structures defined in https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper Thanks, Swapnil Ghike
[jira] [Updated] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1135: - Attachment: KAFKA-1135_2013-11-18_19:17:54.patch > Code cleanup - use Json.encode() to write json data to zk > - > > Key: KAFKA-1135 > URL: https://issues.apache.org/jira/browse/KAFKA-1135 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch > > -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1135: - Attachment: KAFKA-1135.patch > Code cleanup - use Json.encode() to write json data to zk > - > > Key: KAFKA-1135 > URL: https://issues.apache.org/jira/browse/KAFKA-1135 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1135.patch > > -- This message was sent by Atlassian JIRA (v6.1#6144)
Re: Review Request 15665: Code clean: use Json.encode() to write json data to zookeeper
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15665/ --- (Updated Nov. 19, 2013, 3:16 a.m.) Review request for kafka. Bugs: KAFKA-1135 https://issues.apache.org/jira/browse/KAFKA-1135 Repository: kafka Description --- json.encode Diffs - core/src/main/scala/kafka/admin/AdminUtils.scala 8ff4bd5a5f6ea1a51df926c31155251bcc109238 core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala 26beb9698422ceb6cc682b86913b4f9d2d4f1307 core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 981d2bbecf2fa11f1d2c423535c7c30851d2d7bb core/src/main/scala/kafka/consumer/TopicCount.scala a3eb53e8262115d1184cd1c7a2b47f21c22c077b core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala c0350cd05cf1b59866a1fedccbeb700b3e828d44 core/src/main/scala/kafka/controller/KafkaController.scala 88792c2b2a360e928ab9cd00de151e5d5f94452d core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala 33b73609b1178c56e692fb60e35aca04ad1af586 core/src/main/scala/kafka/utils/Utils.scala c9ca95f1937d0ef2e64c70e4d811a0d4f358d9db core/src/main/scala/kafka/utils/ZkUtils.scala 856d13605b0b4bf86010571eacbacc0fb0ba7950 Diff: https://reviews.apache.org/r/15665/diff/ Testing (updated) --- Verified that zookeeper data looks like the structures defined in https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper Thanks, Swapnil Ghike
[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
[ https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826116#comment-13826116 ] Swapnil Ghike commented on KAFKA-1135: -- Created reviewboard https://reviews.apache.org/r/15665/ against branch origin/trunk > Code cleanup - use Json.encode() to write json data to zk > - > > Key: KAFKA-1135 > URL: https://issues.apache.org/jira/browse/KAFKA-1135 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike >Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1135.patch > > -- This message was sent by Atlassian JIRA (v6.1#6144)
Review Request 15665: Code clean: use Json.encode() to write json data to zookeeper
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15665/ --- Review request for kafka. Bugs: KAFKA-1135 https://issues.apache.org/jira/browse/KAFKA-1135 Repository: kafka Description --- json.encode Diffs - core/src/main/scala/kafka/admin/AdminUtils.scala 8ff4bd5a5f6ea1a51df926c31155251bcc109238 core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala 26beb9698422ceb6cc682b86913b4f9d2d4f1307 core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 981d2bbecf2fa11f1d2c423535c7c30851d2d7bb core/src/main/scala/kafka/consumer/TopicCount.scala a3eb53e8262115d1184cd1c7a2b47f21c22c077b core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala c0350cd05cf1b59866a1fedccbeb700b3e828d44 core/src/main/scala/kafka/controller/KafkaController.scala 88792c2b2a360e928ab9cd00de151e5d5f94452d core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala 33b73609b1178c56e692fb60e35aca04ad1af586 core/src/main/scala/kafka/utils/Utils.scala c9ca95f1937d0ef2e64c70e4d811a0d4f358d9db core/src/main/scala/kafka/utils/ZkUtils.scala 856d13605b0b4bf86010571eacbacc0fb0ba7950 Diff: https://reviews.apache.org/r/15665/diff/ Testing --- Verified Thanks, Swapnil Ghike
[jira] [Created] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk
Swapnil Ghike created KAFKA-1135: Summary: Code cleanup - use Json.encode() to write json data to zk Key: KAFKA-1135 URL: https://issues.apache.org/jira/browse/KAFKA-1135 Project: Kafka Issue Type: Bug Reporter: Swapnil Ghike Assignee: Swapnil Ghike Fix For: 0.8.1 -- This message was sent by Atlassian JIRA (v6.1#6144)
Re: Review Request 15201: address all review comments
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15201/#review28873 --- Ship it! Ship It! - Swapnil Ghike On Nov. 14, 2013, 4:24 p.m., Jun Rao wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15201/ > --- > > (Updated Nov. 14, 2013, 4:24 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1117 > https://issues.apache.org/jira/browse/KAFKA-1117 > > > Repository: kafka > > > Description > --- > > kafka-1117; fix 4 > > > kafka-1117; fix 3 > > > kafka-1117; fix 2 > > > kafka-1117; fix 1 > > > kafka-1117 > > > Diffs > - > > core/src/main/scala/kafka/api/OffsetResponse.scala > 08dc3cd3d166efba6b2b43f6e148f636b175affe > core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala PRE-CREATION > > Diff: https://reviews.apache.org/r/15201/diff/ > > > Testing > --- > > > Thanks, > > Jun Rao > >
Re: Review Request 15201: address more review comments
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15201/#review28781 --- core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55792> fetchsize -> fetch-size for consistency with other options? core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55793> We can probably change this to ConsumerConfig.FetchSize. Anytime we change the max message size on the broker, we will probably change default fetch size on consumer, so that can serve as the source of truth for this tool as well. core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55794> Should we just do .replaceAll("""["']""", "") instead? core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55799> val fetcherThreads = for ((i, element) <- topicAndPartitionsPerBroker.view.zipWithIndex) yield new ReplicaFetcher(, doVerification = if (i == 0) true else false) to avoid variable = true and then variable = false? core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55807> Minor comment: Can we rename this to initialOffsetMap (we use the offsetMap name in ReplicaFetchThread), I got confused on the first glance.. core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55805> I thought this loop is supposed to go through all the messages that can be returned by the messageIterator, but looks like it will check for only the first message obtained via each messageIterator. core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55806> Wondering why we don't exit when checksums don't match? 
core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55808> typo core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala <https://reviews.apache.org/r/15201/#comment55809> typo Another caveat seems to be that the tool cannot handle changes in: 1. partition leadership 2. topic configuration (number of partitions). - Swapnil Ghike On Nov. 12, 2013, 4:34 p.m., Jun Rao wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15201/ > --- > > (Updated Nov. 12, 2013, 4:34 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1117 > https://issues.apache.org/jira/browse/KAFKA-1117 > > > Repository: kafka > > > Description > --- > > kafka-1117; fix 3 > > > kafka-1117; fix 2 > > > kafka-1117; fix 1 > > > kafka-1117 > > > Diffs > - > > config/tools-log4j.properties 79240490149835656e2a013a9702c5aa41c104f1 > core/src/main/scala/kafka/api/OffsetResponse.scala > 08dc3cd3d166efba6b2b43f6e148f636b175affe > core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala PRE-CREATION > > Diff: https://reviews.apache.org/r/15201/diff/ > > > Testing > --- > > > Thanks, > > Jun Rao > >
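Two of the concrete suggestions in the review above can be sketched in isolation (simplified, with illustrative types and data in place of the tool's real ones):

```scala
// Suggestion: strip both quote characters in one pass with a character
// class, instead of chaining two replaceAll calls.
val cleaned = """ "topic1",'topic2' """.trim.replaceAll("""["']""", "")

// Suggestion: build the fetcher threads with for/yield over zipWithIndex so
// only the first thread does verification, avoiding mutable reassignment
// (ReplicaFetcher here is a stand-in for the tool's class).
case class ReplicaFetcher(broker: String, partitions: Seq[Int], doVerification: Boolean)
val topicAndPartitionsPerBroker = Seq("broker0" -> Seq(0, 1), "broker1" -> Seq(2))
val fetcherThreads =
  for (((broker, partitions), i) <- topicAndPartitionsPerBroker.zipWithIndex)
    yield ReplicaFetcher(broker, partitions, doVerification = i == 0)
```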
Re: Review Request 15274: Patch for KAFKA-1119
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15274/#review28440 --- core/src/main/scala/kafka/admin/TopicCommand.scala <https://reviews.apache.org/r/15274/#comment55256> Should we enforce that configsToBeAdded and configsToBeDeleted should not contain the same config? - Swapnil Ghike On Nov. 7, 2013, 6:17 p.m., Neha Narkhede wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15274/ > --- > > (Updated Nov. 7, 2013, 6:17 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1119 > https://issues.apache.org/jira/browse/KAFKA-1119 > > > Repository: kafka > > > Description > --- > > Addressed Joel's review comments > > > Add an explicit --deleteConfig option > > > more cleanup > > > Code cleanup > > > Fixed bug that now allows removing all the topic overrides correctly and > falling back to the defaults > > > 1. Change the --config to diff the previous configs 2. Add the ability to > remove per topic overrides if config is specified without a value > > > Diffs > - > > core/src/main/scala/kafka/admin/AdminUtils.scala > 8107a64cf1ef1cac763e152bae9f835411c9d3f3 > core/src/main/scala/kafka/admin/TopicCommand.scala > 56f3177e28a34df0ace1d192aef0060cb5e235df > core/src/main/scala/kafka/log/LogConfig.scala > 51ec796e9e6a10b76daefbd9aea02121fc1d573a > core/src/main/scala/kafka/server/TopicConfigManager.scala > 56cae58f2a216dcba88b2656fd4a490f11461270 > > Diff: https://reviews.apache.org/r/15274/diff/ > > > Testing > --- > > Locally tested - 1. Adding new config 2. Adding new invalid config 3. > Deleting config 4. Deleting all config overrides > > > Thanks, > > Neha Narkhede > >
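The overlap check the review asks about could be a one-line guard in TopicCommand; a sketch with hypothetical config values:

```scala
// Reject an alteration that both sets and deletes the same config key.
val configsToBeAdded   = Map("retention.ms" -> "86400000")
val configsToBeDeleted = Set("cleanup.policy")
val overlap = configsToBeAdded.keySet.intersect(configsToBeDeleted)
require(overlap.isEmpty, s"Configs cannot be both added and deleted: $overlap")
```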
Re: Review Request 15274: Patch for KAFKA-1119
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15274/#review28327 --- It will help to add a --deleteConfig option. We can use it as --config config1=newVal --deleteConfig config2. - Swapnil Ghike On Nov. 6, 2013, 6:13 p.m., Neha Narkhede wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15274/ > --- > > (Updated Nov. 6, 2013, 6:13 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1119 > https://issues.apache.org/jira/browse/KAFKA-1119 > > > Repository: kafka > > > Description > --- > > more cleanup > > > Code cleanup > > > Fixed bug that now allows removing all the topic overrides correctly and > falling back to the defaults > > > 1. Change the --config to diff the previous configs 2. Add the ability to > remove per topic overrides if config is specified without a value > > > Diffs > - > > core/src/main/scala/kafka/admin/AdminUtils.scala > 8107a64cf1ef1cac763e152bae9f835411c9d3f3 > core/src/main/scala/kafka/admin/TopicCommand.scala > 56f3177e28a34df0ace1d192aef0060cb5e235df > core/src/main/scala/kafka/log/LogConfig.scala > 51ec796e9e6a10b76daefbd9aea02121fc1d573a > core/src/main/scala/kafka/server/TopicConfigManager.scala > 56cae58f2a216dcba88b2656fd4a490f11461270 > > Diff: https://reviews.apache.org/r/15274/diff/ > > > Testing > --- > > > Thanks, > > Neha Narkhede > >
[jira] [Commented] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies
[ https://issues.apache.org/jira/browse/KAFKA-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814308#comment-13814308 ] Swapnil Ghike commented on KAFKA-1121: -- Created reviewboard https://reviews.apache.org/r/15248/ against branch origin/trunk > DumpLogSegments tool should print absolute file name to report inconsistencies > -- > > Key: KAFKA-1121 > URL: https://issues.apache.org/jira/browse/KAFKA-1121 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 >Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1121.patch > > > Normally, the user would know where the index file lies. But in case of a > script that continuously checks the index files for consistency, it will help > to have the absolute file path printed in the output. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies
[ https://issues.apache.org/jira/browse/KAFKA-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1121: - Attachment: KAFKA-1121.patch > DumpLogSegments tool should print absolute file name to report inconsistencies > -- > > Key: KAFKA-1121 > URL: https://issues.apache.org/jira/browse/KAFKA-1121 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > Attachments: KAFKA-1121.patch > > > Normally, the user would know where the index file lies. But in case of a > script that continuously checks the index files for consistency, it will help > to have the absolute file path printed in the output. -- This message was sent by Atlassian JIRA (v6.1#6144)
Review Request 15248: DumpLogSegments should print absolute file path while printing errors
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15248/ --- Review request for kafka. Bugs: KAFKA-1121 https://issues.apache.org/jira/browse/KAFKA-1121 Repository: kafka Description --- dumplogseg Diffs - core/src/main/scala/kafka/tools/DumpLogSegments.scala 89b6cb1d0c3d9a1335184d0fc778246ce47738d3 Diff: https://reviews.apache.org/r/15248/diff/ Testing --- Thanks, Swapnil Ghike
[jira] [Updated] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies
[ https://issues.apache.org/jira/browse/KAFKA-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1121: - Fix Version/s: 0.8.1 > DumpLogSegments tool should print absolute file name to report inconsistencies > -- > > Key: KAFKA-1121 > URL: https://issues.apache.org/jira/browse/KAFKA-1121 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > > Normally, the user would know where the index file lies. But in case of a > script that continuously checks the index files for consistency, it will help > to have the absolute file path printed in the output. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies
[ https://issues.apache.org/jira/browse/KAFKA-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1121: - Affects Version/s: 0.8 > DumpLogSegments tool should print absolute file name to report inconsistencies > -- > > Key: KAFKA-1121 > URL: https://issues.apache.org/jira/browse/KAFKA-1121 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8.1 > > > Normally, the user would know where the index file lies. But in case of a > script that continuously checks the index files for consistency, it will help > to have the absolute file path printed in the output. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies
Swapnil Ghike created KAFKA-1121: Summary: DumpLogSegments tool should print absolute file name to report inconsistencies Key: KAFKA-1121 URL: https://issues.apache.org/jira/browse/KAFKA-1121 Project: Kafka Issue Type: Bug Reporter: Swapnil Ghike Assignee: Swapnil Ghike Normally, the user would know where the index file lies. But in case of a script that continuously checks the index files for consistency, it will help to have the absolute file path printed in the output. -- This message was sent by Atlassian JIRA (v6.1#6144)
Re: Review Request 15137: Patch for KAFKA-1107
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15137/#review27989 --- Ship it! Ship It! - Swapnil Ghike On Oct. 31, 2013, 10:28 p.m., Neha Narkhede wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15137/ > --- > > (Updated Oct. 31, 2013, 10:28 p.m.) > > > Review request for kafka and Jay Kreps. > > > Bugs: KAFKA-1107 > https://issues.apache.org/jira/browse/KAFKA-1107 > > > Repository: kafka > > > Description > --- > > Remove whitespace changes to LogTest > > > Removed an unncessary log statement and fixed a few more tests that initially > had the Log public API change > > > Per Jay's suggestion, avoid changing the public API of Log > > > Avoid printing clean shutdown file log message on every log > > > KAFKA-1107 Broker unnecessarily recovers all logs when upgrading from 0.8 to > 0.8.1; Fix includes modifying Log to avoid recovery if the clean shutdown > file exists; LogManager deletes the clean shutdown file after all logs in a > log directory have finished loading; If the clean shutdown file does not > exist, fall back to 0.8.1 recovery logic > > > Diffs > - > > core/src/main/scala/kafka/log/Log.scala > 0cc402b13e8484ae5569f1b8ff7156331a2f82d7 > core/src/main/scala/kafka/log/LogManager.scala > d489e08452ab97334d504f76f381eb314ec56901 > core/src/test/scala/unit/kafka/log/LogTest.scala > 140317c6ab6741308d125e9c1f43078b672c5f95 > > Diff: https://reviews.apache.org/r/15137/diff/ > > > Testing > --- > > > Thanks, > > Neha Narkhede > >
Re: Review Request 15137: Patch for KAFKA-1107
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/15137/#review27988 --- core/src/test/scala/unit/kafka/log/LogTest.scala <https://reviews.apache.org/r/15137/#comment5> probably don't need this statement. - Swapnil Ghike On Oct. 31, 2013, 10:28 p.m., Neha Narkhede wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/15137/ > --- > > (Updated Oct. 31, 2013, 10:28 p.m.) > > > Review request for kafka and Jay Kreps. > > > Bugs: KAFKA-1107 > https://issues.apache.org/jira/browse/KAFKA-1107 > > > Repository: kafka > > > Description > --- > > Remove whitespace changes to LogTest > > > Removed an unncessary log statement and fixed a few more tests that initially > had the Log public API change > > > Per Jay's suggestion, avoid changing the public API of Log > > > Avoid printing clean shutdown file log message on every log > > > KAFKA-1107 Broker unnecessarily recovers all logs when upgrading from 0.8 to > 0.8.1; Fix includes modifying Log to avoid recovery if the clean shutdown > file exists; LogManager deletes the clean shutdown file after all logs in a > log directory have finished loading; If the clean shutdown file does not > exist, fall back to 0.8.1 recovery logic > > > Diffs > - > > core/src/main/scala/kafka/log/Log.scala > 0cc402b13e8484ae5569f1b8ff7156331a2f82d7 > core/src/main/scala/kafka/log/LogManager.scala > d489e08452ab97334d504f76f381eb314ec56901 > core/src/test/scala/unit/kafka/log/LogTest.scala > 140317c6ab6741308d125e9c1f43078b672c5f95 > > Diff: https://reviews.apache.org/r/15137/diff/ > > > Testing > --- > > > Thanks, > > Neha Narkhede > >
[jira] [Comment Edited] (KAFKA-918) Change log.retention.hours to be log.retention.mins
[ https://issues.apache.org/jira/browse/KAFKA-918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805392#comment-13805392 ] Swapnil Ghike edited comment on KAFKA-918 at 10/25/13 3:49 PM: --- We should get the property in one line instead of using a method, like Utils.parseCsvList(props.getString("log.dirs", props.getString("log.dir", "/tmp/kafka-logs"))). was (Author: swapnilghike): I think it's confusing to allow both configs in 'if .. else', one could plug in both configs and would need to open up the logs to see which property is getting used. Is it because we want to maintain backward compatibility? > Change log.retention.hours to be log.retention.mins > --- > > Key: KAFKA-918 > URL: https://issues.apache.org/jira/browse/KAFKA-918 > Project: Kafka > Issue Type: New Feature > Components: config >Affects Versions: 0.7.2 >Reporter: Jason Weiss >Assignee: Alin Vasile > Labels: features, newbie > Fix For: 0.8.1 > > Attachments: issue_918.patch > > > We stood up a cluster that is processing over 350,000 events per second, with > each event a fixed payload size of 2K. The storage required to process that > much data over an hour is beyond what we wanted to pay for at AWS. > Additionally, we don't have a requirement to keep the files around for an > extended period after processing. > It would be tremendously valuable for us to be able to define the > log.retention in minutes, not hours. For example, we would prefer to only > keep 30 minutes of logs around. -- This message was sent by Atlassian JIRA (v6.1#6144)
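The one-liner Swapnil proposes reads log.dirs first and falls back to log.dir, then to a default. A self-contained sketch, where the local parseCsvList and the plain Map stand in for kafka.utils.Utils.parseCsvList and the real properties object:

```scala
// Split a comma-separated list, dropping whitespace and empty entries.
def parseCsvList(csv: String): List[String] =
  csv.split(",").map(_.trim).filter(_.nonEmpty).toList

// A props map with only the old single-directory key set:
val props = Map("log.dir" -> "/data/kafka-logs")

// Prefer log.dirs, fall back to log.dir, then to the default, in one line.
val logDirs = parseCsvList(
  props.getOrElse("log.dirs", props.getOrElse("log.dir", "/tmp/kafka-logs")))
```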
[jira] [Commented] (KAFKA-918) Change log.retention.hours to be log.retention.mins
[ https://issues.apache.org/jira/browse/KAFKA-918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805392#comment-13805392 ] Swapnil Ghike commented on KAFKA-918: - I think it's confusing to allow both configs in 'if .. else', one could plug in both configs and would need to open up the logs to see which property is getting used. Is it because we want to maintain backward compatibility? > Change log.retention.hours to be log.retention.mins > --- > > Key: KAFKA-918 > URL: https://issues.apache.org/jira/browse/KAFKA-918 > Project: Kafka > Issue Type: New Feature > Components: config >Affects Versions: 0.7.2 >Reporter: Jason Weiss >Assignee: Alin Vasile > Labels: features, newbie > Fix For: 0.8.1 > > Attachments: issue_918.patch > > > We stood up a cluster that is processing over 350,000 events per second, with > each event a fixed payload size of 2K. The storage required to process that > much data over an hour is beyond what we wanted to pay for at AWS. > Additionally, we don't have a requirement to keep the files around for an > extended period after processing. > It would be tremendously valuable for us to be able to define the > log.retention in minutes, not hours. For example, we would prefer to only > keep 30 minutes of logs around. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (KAFKA-1100) metrics shouldn't have generation/timestamp specific names
[ https://issues.apache.org/jira/browse/KAFKA-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803660#comment-13803660 ] Swapnil Ghike commented on KAFKA-1100: -- That makes sense Joel, we could also use the clientId to differentiate between two consumerConnectors that start up on the same host with the same group. > metrics shouldn't have generation/timestamp specific names > -- > > Key: KAFKA-1100 > URL: https://issues.apache.org/jira/browse/KAFKA-1100 > Project: Kafka > Issue Type: Bug >Reporter: Jason Rosenberg > > I've noticed that there are several metrics that seem useful for monitoring > overtime, but which contain generational timestamps in the metric name. > We are using yammer metrics libraries to send metrics data in a background > thread every 10 seconds (to kafka actually), and then they eventually end up > in a metrics database (graphite, opentsdb). The metrics then get graphed via > UI, and we can see metrics going way back, etc. > Unfortunately, many of the metrics coming from kafka seem to have metric > names that change any time the server or consumer is restarted, which makes > it hard to easily create graphs over long periods of time (spanning app > restarts). > For example: > names like: > kafka.consumer.FetchRequestAndResponseMetricssquare-1371718712833-e9bb4d10-0-508818741-AllBrokersFetchRequestRateAndTimeMs > or: > kafka.consumer.ZookeeperConsumerConnector...topicName.square-1373476779391-78aa2e83-0-FetchQueueSize > In our staging environment, we have our servers on regular auto-deploy cycles > (they restart every few hours). So just not longitudinally usable to have > metric names constantly changing like this. > Is there something that can easily be done? Is it really necessary to have > so much cryptic info in the metric name? -- This message was sent by Atlassian JIRA (v6.1#6144)
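The idea in the comment — name metrics after stable identifiers such as the clientId rather than the host + timestamp + UUID consumerId — can be sketched as below. The helper and the exact name layout are hypothetical, not Kafka's actual metric-naming code:

```java
public class MetricNames {
    // Build a metric name from stable identifiers so it survives restarts.
    // Two consumer connectors on the same host with the same group remain
    // distinguishable as long as they use distinct clientIds.
    static String fetchQueueSizeMetric(String clientId, String topic) {
        return "kafka.consumer.ZookeeperConsumerConnector." + clientId
                + "." + topic + ".FetchQueueSize";
    }
}
```

Unlike the generational names quoted in the bug report, this name is identical across app restarts, so long-range graphs in graphite/opentsdb line up without wildcards.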
[jira] [Commented] (KAFKA-1100) metrics shouldn't have generation/timestamp specific names
[ https://issues.apache.org/jira/browse/KAFKA-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802635#comment-13802635 ] Swapnil Ghike commented on KAFKA-1100: -- Hi Jason, at LinkedIn, we use wildcards/regexes to create graphs from such mbeans. Would you be able to do something similar?
[jira] [Created] (KAFKA-1096) An old controller coming out of long GC could update its epoch to the latest controller's epoch
Swapnil Ghike created KAFKA-1096: Summary: An old controller coming out of long GC could update its epoch to the latest controller's epoch Key: KAFKA-1096 URL: https://issues.apache.org/jira/browse/KAFKA-1096 Project: Kafka Issue Type: Bug Affects Versions: 0.8 Reporter: Swapnil Ghike If a controller GCs for too long, we could have two controllers in the cluster. The controller epoch is supposed to minimize the damage in such a situation, as the brokers will reject the requests sent by the controller with an older epoch. When the old controller is still in long GC, a new controller could be elected. This will fire ControllerEpochListener on the old controller. When it comes out of GC, its ControllerEpochListener will update its own epoch to the new controller's epoch. So both controllers are now able to send out requests with the same controller epoch until the old controller's handleNewSession() can execute in the controller lock. ControllerEpochListener does not seem necessary, so we can probably delete it. -- This message was sent by Atlassian JIRA (v6.1#6144)
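The protection the epoch is supposed to provide — brokers rejecting requests from a controller with an older epoch — amounts to a monotonic check. A simplified sketch (class and method names are illustrative, not Kafka's actual code):

```java
public class ControllerEpochGuard {
    private int currentEpoch = 0;

    // A broker accepts a controller request only if its epoch is at least as
    // new as the highest epoch seen so far; a stale (GC-paused) controller
    // sending an old epoch is rejected. The bug described above is that the
    // old controller's own epoch gets bumped by ControllerEpochListener, so
    // its requests pass this check even though it lost the election.
    public synchronized boolean accept(int requestEpoch) {
        if (requestEpoch < currentEpoch) return false;
        currentEpoch = requestEpoch;
        return true;
    }
}
```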
[jira] [Commented] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool
[ https://issues.apache.org/jira/browse/KAFKA-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800024#comment-13800024 ] Swapnil Ghike commented on KAFKA-1094: -- Same patch will work for trunk. > Configure reviewboard url in kafka-patch-review tool > > > Key: KAFKA-1094 > URL: https://issues.apache.org/jira/browse/KAFKA-1094 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 >Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Attachments: KAFKA-1094.patch > > > If someone forgets to configure review board, then the tool uploads a patch > without creating an RB. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool
[ https://issues.apache.org/jira/browse/KAFKA-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800023#comment-13800023 ] Swapnil Ghike commented on KAFKA-1094: -- Created reviewboard https://reviews.apache.org/r/14773/
[jira] [Updated] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool
[ https://issues.apache.org/jira/browse/KAFKA-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1094: - Attachment: KAFKA-1094.patch
[jira] [Created] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool
Swapnil Ghike created KAFKA-1094: Summary: Configure reviewboard url in kafka-patch-review tool Key: KAFKA-1094 URL: https://issues.apache.org/jira/browse/KAFKA-1094 Project: Kafka Issue Type: Bug Affects Versions: 0.8 Reporter: Swapnil Ghike Assignee: Swapnil Ghike Attachments: KAFKA-1094.patch If someone forgets to configure review board, then the tool uploads a patch without creating an RB. -- This message was sent by Atlassian JIRA (v6.1#6144)
Review Request 14773: Configure reviewboard url in kafka-patch-review tool
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14773/ --- Review request for kafka. Bugs: KAFKA-1094 https://issues.apache.org/jira/browse/KAFKA-1094 Repository: kafka Description --- configure reviewboard Diffs - kafka-patch-review.py 2653465a30a0084cbd37fa07d00e07134ef3bd7f Diff: https://reviews.apache.org/r/14773/diff/ Testing --- Thanks, Swapnil Ghike
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Attachment: (was: KAFKA-1093.patch) > Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t > - > > Key: KAFKA-1093 > URL: https://issues.apache.org/jira/browse/KAFKA-1093 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Attachments: KAFKA-1093.patch > > > Let's say there are three log segments s1, s2, s3. > In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - > [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, > s3.lastModified), (logEndOffset, currentTimeMs)]. > Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will > return Seq(s2.start). > However, we already know s3.firstAppendTime (s3.created in trunk). So, if > s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. > This also resolves another bug wherein the log has only one segment and > getOffsetsBefore() returns an empty Seq if the timestamp provided is less > than the lastModified of the only segment. We should rather return the > startOffset of the segment if the timestamp is greater than the > firstAppendTime of the segment.
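The fix the description argues for can be sketched as: return the latest segment whose firstAppendTime precedes t, instead of comparing only lastModified. This is a simplified, hypothetical model (the real logic lives in core/src/main/scala/kafka/log/Log.scala and deals with Seqs of candidate offsets):

```java
import java.util.List;

public class OffsetLookup {
    // Simplified stand-in for a log segment's metadata.
    static class Segment {
        final long startOffset, firstAppendTime, lastModified;
        Segment(long startOffset, long firstAppendTime, long lastModified) {
            this.startOffset = startOffset;
            this.firstAppendTime = firstAppendTime;
            this.lastModified = lastModified;
        }
    }

    // Return the start offset of the latest segment that already had data
    // before time t, or -1 if t predates the first append. Unlike comparing
    // only lastModified, this yields s3.start when
    // s3.firstAppendTime < t < s3.lastModified, and still returns the lone
    // segment's startOffset in the single-segment case.
    static long offsetBefore(List<Segment> segments, long t) {
        long result = -1;
        for (Segment s : segments) {
            if (s.firstAppendTime < t) result = s.startOffset;
        }
        return result;
    }
}
```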
[jira] [Commented] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800021#comment-13800021 ] Swapnil Ghike commented on KAFKA-1093: -- Created reviewboard https://reviews.apache.org/r/14772/
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Attachment: KAFKA-1093.patch
[jira] [Issue Comment Deleted] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Comment: was deleted (was: Created reviewboard https://reviews.apache.org/r/14772/ )
Review Request 14772: Configure reviewboard url in kafka-patch-review tool
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14772/ --- Review request for kafka. Bugs: KAFKA-1093 https://issues.apache.org/jira/browse/KAFKA-1093 Repository: kafka Description --- configure reviewboard Diffs - kafka-patch-review.py 2653465a30a0084cbd37fa07d00e07134ef3bd7f Diff: https://reviews.apache.org/r/14772/diff/ Testing --- Thanks, Swapnil Ghike
[jira] [Issue Comment Deleted] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Comment: was deleted (was: Created reviewboard )
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Attachment: (was: KAFKA-1093.patch)
Re: Review Request 14771: Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14771/ --- (Updated Oct. 19, 2013, 10:34 p.m.) Review request for kafka. Bugs: KAFKA-1093 https://issues.apache.org/jira/browse/KAFKA-1093 Repository: kafka Description --- Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t Diffs - core/src/main/scala/kafka/log/Log.scala f6348969ea38258065f4de358bfbf3f20b4eb74a Diff: https://reviews.apache.org/r/14771/diff/ Testing (updated) --- Unit tests pass. Thanks, Swapnil Ghike
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Attachment: KAFKA-1093.patch
[jira] [Commented] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800018#comment-13800018 ] Swapnil Ghike commented on KAFKA-1093: -- Created reviewboard https://reviews.apache.org/r/14771/
Review Request 14771: Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14771/ --- Review request for kafka. Bugs: KAFKA-1093 https://issues.apache.org/jira/browse/KAFKA-1093 Repository: kafka Description --- Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t Diffs - core/src/main/scala/kafka/log/Log.scala f6348969ea38258065f4de358bfbf3f20b4eb74a Diff: https://reviews.apache.org/r/14771/diff/ Testing --- Thanks, Swapnil Ghike
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Attachment: KAFKA-1093.patch
[jira] [Commented] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800013#comment-13800013 ] Swapnil Ghike commented on KAFKA-1093: -- Created reviewboard
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Description: Let's say there are three log segments s1, s2, s3. In Log.getoffsetsBefore(t, …), the offsetTimeArray will look like - [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, s3.lastModified), (logEndOffset, currentTimeMs)]. Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will return Seq(s2.start). However, we already know s3.firstAppendTime (s3.created in trunk). So, if s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. This also resolves another bug wherein the log has only one segment and getOffsetsBefore() returns an empty Seq if the timestamp provided is less than the lastModified of the only segment. We should rather return the startOffset of the segment if the timestamp is greater than the firstAppendTime of the segment. was: Let's say there are three log segments s1, s2, s3. In Log.getoffsetsBefore(t, …), the offsetTimeArray will look like - [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, s3.lastModified), (logEndOffset, currentTimeMs)]. Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will return Seq(s2.start). However, we already know s3.firstAppendTime. So, if s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. This also resolves another bug wherein the log has only one segment and getOffsetsBefore() returns an empty Seq if the timestamp provided is less than the lastModified of the only segment. We should rather return the startOffset of the segment if the timestamp is greater than the firstAppendTime of the segment. 
[jira] [Created] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
Swapnil Ghike created KAFKA-1093: Summary: Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t Key: KAFKA-1093 URL: https://issues.apache.org/jira/browse/KAFKA-1093 Project: Kafka Issue Type: Bug Reporter: Swapnil Ghike Assignee: Swapnil Ghike
[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
[ https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1093: - Affects Version/s: 0.8
Re: Review Request 14676: Patch for KAFKA-1091
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14676/#review27072 --- Ship it! Ship It! - Swapnil Ghike On Oct. 16, 2013, 5:19 p.m., Jun Rao wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/14676/ > --- > > (Updated Oct. 16, 2013, 5:19 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1091 > https://issues.apache.org/jira/browse/KAFKA-1091 > > > Repository: kafka > > > Description > --- > > kafka-1091 > > > Diffs > - > > core/src/main/scala/kafka/server/KafkaApis.scala > 338d1cc6533fd219941f2afb9bc0ea122b368bbe > > Diff: https://reviews.apache.org/r/14676/diff/ > > > Testing > --- > > > Thanks, > > Jun Rao > >
[jira] [Commented] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics
[ https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794966#comment-13794966 ] Swapnil Ghike commented on KAFKA-1087: -- Same patch should apply fine to trunk. > Empty topic list causes consumer to fetch metadata of all topics > > > Key: KAFKA-1087 > URL: https://issues.apache.org/jira/browse/KAFKA-1087 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 >Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Attachments: KAFKA-1087.patch > > > The ClientUtils fetches metadata for all topics if the topic set is empty. > If the topic list of a consumer is empty, the following happens if a > rebalance is triggered: > - The fetcher is restarted, fetcher.startConnections() starts a > LeaderFinderThread > - LeaderFinderThread waits on a condition > - fetcher.startConnections() signals the aforementioned condition > - LeaderFinderThread obtains metadata for all topics since the topic list is > empty. -- This message was sent by Atlassian JIRA (v6.1#6144)
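A cheap client-side guard against the behavior described: skip the metadata round-trip entirely when the watched topic set is empty, since the broker treats an empty set as "all topics". The helper below is a hypothetical sketch, not the actual ClientUtils API:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

public class MetadataFetcher {
    // In the buggy path, an empty topic set sent to the broker returns
    // cluster-wide metadata. Guarding on isEmpty() keeps a consumer with no
    // subscriptions (e.g. a LeaderFinderThread woken by a rebalance) from
    // issuing that request at all. brokerCall stands in for the network call.
    static Map<String, Integer> fetchMetadata(
            Set<String> topics,
            Function<Set<String>, Map<String, Integer>> brokerCall) {
        if (topics.isEmpty()) return Collections.emptyMap();
        return brokerCall.apply(topics);
    }
}
```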
[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics
[ https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1087: - Description: The ClientUtils fetches metadata for all topics if the topic set is empty. If the topic list of a consumer is empty, the following happens if a rebalance is triggered: - The fetcher is restarted, fetcher.startConnections() starts a LeaderFinderThread - LeaderFinderThread waits on a condition - fetcher.startConnections() signals the aforementioned condition - LeaderFinderThread obtains metadata for all topics since the topic list is empty. was: The ClientUtils fetches metadata for all topics if the topic set is empty. If the topic list of a consumer is empty, the following happens if a rebalance is triggered: - The fetcher is restarted, it starts a LeaderFinderThread - LeaderFinderThread waits on a condition - fetcher.startConnections() signals the aforementioned condition - LeaderFinderThread obtains metadata for all topics since the topic list is empty. > Empty topic list causes consumer to fetch metadata of all topics > > > Key: KAFKA-1087 > URL: https://issues.apache.org/jira/browse/KAFKA-1087 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Attachments: KAFKA-1087.patch > > > The ClientUtils fetches metadata for all topics if the topic set is empty. > If the topic list of a consumer is empty, the following happens if a > rebalance is triggered: > - The fetcher is restarted, fetcher.startConnections() starts a > LeaderFinderThread > - LeaderFinderThread waits on a condition > - fetcher.startConnections() signals the aforementioned condition > - LeaderFinderThread obtains metadata for all topics since the topic list is > empty. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics
[ https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1087: - Description: The ClientUtils fetches metadata for all topics if the topic set is empty. If the topic list of a consumer is empty, the following happens if a rebalance is triggered: - The fetcher is restarted, it starts a LeaderFinderThread - LeaderFinderThread waits on a condition - fetcher.startConnections() signals the aforementioned condition - LeaderFinderThread obtains metadata for all topics since the topic list is empty. was: The ClientUtils fetches metadata for all topics if the topic set is empty. If the topic list of a consumer is empty, the following happens if a rebalance is triggered: - LeaderFinderThread waits on a condition - The fetcher is restarted and it signals the aforementioned condition - LeaderFinderThread obtains metadata for all topics since the topic list is empty. > Empty topic list causes consumer to fetch metadata of all topics > > > Key: KAFKA-1087 > URL: https://issues.apache.org/jira/browse/KAFKA-1087 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Attachments: KAFKA-1087.patch > > > The ClientUtils fetches metadata for all topics if the topic set is empty. > If the topic list of a consumer is empty, the following happens if a > rebalance is triggered: > - The fetcher is restarted, it starts a LeaderFinderThread > - LeaderFinderThread waits on a condition > - fetcher.startConnections() signals the aforementioned condition > - LeaderFinderThread obtains metadata for all topics since the topic list is > empty. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics
[ https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1087: - Attachment: KAFKA-1087.patch Unit tests pass. > Empty topic list causes consumer to fetch metadata of all topics > > > Key: KAFKA-1087 > URL: https://issues.apache.org/jira/browse/KAFKA-1087 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Attachments: KAFKA-1087.patch > > > The ClientUtils fetches metadata for all topics if the topic set is empty. > If the topic list of a consumer is empty, the following happens if a > rebalance is triggered: > - LeaderFinderThread waits on a condition > - The fetcher is restarted and it signals the aforementioned condition > - LeaderFinderThread obtains metadata for all topics since the topic list is > empty. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics
[ https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1087: - Affects Version/s: 0.8 > Empty topic list causes consumer to fetch metadata of all topics > > > Key: KAFKA-1087 > URL: https://issues.apache.org/jira/browse/KAFKA-1087 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > > The ClientUtils fetches metadata for all topics if the topic set is empty. > If the topic list of a consumer is empty, the following happens if a > rebalance is triggered: > - LeaderFinderThread waits on a condition > - The fetcher is restarted and it signals the aforementioned condition > - LeaderFinderThread obtains metadata for all topics since the topic list is > empty. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics
Swapnil Ghike created KAFKA-1087: Summary: Empty topic list causes consumer to fetch metadata of all topics Key: KAFKA-1087 URL: https://issues.apache.org/jira/browse/KAFKA-1087 Project: Kafka Issue Type: Bug Reporter: Swapnil Ghike Assignee: Swapnil Ghike The ClientUtils fetches metadata for all topics if the topic set is empty. If the topic list of a consumer is empty, the following happens if a rebalance is triggered: - LeaderFinderThread waits on a condition - The fetcher is restarted and it signals the aforementioned condition - LeaderFinderThread obtains metadata for all topics since the topic list is empty. -- This message was sent by Atlassian JIRA (v6.1#6144)
Re: Review Request 14496: incorporating review feedback
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14496/#review26714 --- core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala <https://reviews.apache.org/r/14496/#comment52036> Instead of asking the user to store the output of a dryrun into a JSON file, should the tool compute the dryrun output and use it to perform validation? - Swapnil Ghike On Oct. 5, 2013, 6:02 p.m., Jun Rao wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/14496/ > --- > > (Updated Oct. 5, 2013, 6:02 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1073 > https://issues.apache.org/jira/browse/KAFKA-1073 > > > Repository: kafka > > > Description > --- > > kafka-1017; incorporating review feedback > > > kafka-1017 > > > Diffs > - > > bin/kafka-check-reassignment-status.sh > 1f218585cb8bf58d8a85af38f368a49b27e5 > core/src/main/scala/kafka/admin/CheckReassignmentStatus.scala > 7e85f87e96dbddf4fd8785ae3960e8fe4813e8e5 > core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala > f333d29bf36bb7fdc66b3bf9af16e7ee19ad7e48 > > Diff: https://reviews.apache.org/r/14496/diff/ > > > Testing > --- > > > Thanks, > > Jun Rao > >
Re: Review Request 14496: Patch for KAFKA-1073
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14496/#review26701 --- Ship it! Do you think incorporating this functionality in the ReassignPartitions tool itself with a "checkStatus" mode makes sense in trunk? - Swapnil Ghike On Oct. 4, 2013, 9:33 p.m., Jun Rao wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/14496/ > --- > > (Updated Oct. 4, 2013, 9:33 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1073 > https://issues.apache.org/jira/browse/KAFKA-1073 > > > Repository: kafka > > > Description > --- > > kafka-1017 > > > Diffs > - > > core/src/main/scala/kafka/admin/CheckReassignmentStatus.scala > 7e85f87e96dbddf4fd8785ae3960e8fe4813e8e5 > core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala > f333d29bf36bb7fdc66b3bf9af16e7ee19ad7e48 > > Diff: https://reviews.apache.org/r/14496/diff/ > > > Testing > --- > > > Thanks, > > Jun Rao > >
[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic
[ https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769884#comment-13769884 ] Swapnil Ghike commented on KAFKA-1030: -- +1. Guozhang, thanks for running the tests. > Addition of partitions requires bouncing all the consumers of that topic > > > Key: KAFKA-1030 > URL: https://issues.apache.org/jira/browse/KAFKA-1030 > Project: Kafka > Issue Type: Bug >Affects Versions: 0.8 >Reporter: Swapnil Ghike >Assignee: Guozhang Wang >Priority: Blocker > Fix For: 0.8 > > Attachments: KAFKA-1030-v1.patch > > > Consumer may not notice new partitions because the propagation of the > metadata to servers can be delayed. > Options: > 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new > partition data from zookeeper instead of a kafka server. > 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to > smallest once the consumer has started. > 1 sounds easier to do. If 1 causes long delays in reading all partitions at > the start of every rebalance, 2 may be worth considering. > > The same issue affects MirrorMaker when new topics are created, MirrorMaker > may not notice all partitions of the new topics until the next rebalance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
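Option 1 from the KAFKA-1030 ticket can be illustrated with a small model. This is a sketch under stated assumptions: the zkPartitionCounts map below stands in for a real ZooKeeper read of the per-topic partition nodes, and needsRebalance is a hypothetical helper, not Kafka's ZookeeperConsumerConnector API.

```java
import java.util.Map;

public class RebalanceCheck {
    // At rebalance time, compare the consumer's known partition counts
    // against counts read fresh from ZooKeeper (modeled here as a map),
    // instead of relying on possibly-stale metadata from a broker.
    static boolean needsRebalance(Map<String, Integer> known,
                                  Map<String, Integer> zkPartitionCounts) {
        for (Map.Entry<String, Integer> e : zkPartitionCounts.entrySet()) {
            if (!e.getValue().equals(known.get(e.getKey()))) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // A partition was added to topic "t" (2 -> 3), so the stale view
        // triggers a rebalance.
        System.out.println(needsRebalance(Map.of("t", 2), Map.of("t", 3))); // prints true
    }
}
```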
Re: Review Request 14041: Patch for KAFKA-1030
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14041/#review26179 --- Ship it! Ship It! - Swapnil Ghike On Sept. 17, 2013, 6 p.m., Guozhang Wang wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/14041/ > --- > > (Updated Sept. 17, 2013, 6 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-1030 > https://issues.apache.org/jira/browse/KAFKA-1030 > > > Repository: kafka > > > Description > --- > > Using the approach of reading directly from ZK. > > > Diffs > - > > core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala > 81bf0bda3229e94ecb6b6aff3ffc9fde852df61b > > Diff: https://reviews.apache.org/r/14041/diff/ > > > Testing > --- > > unit tests > > > Thanks, > > Guozhang Wang > >
[jira] [Issue Comment Deleted] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Comment: was deleted (was: Created reviewboard https://reviews.apache.org/r/14161/ ) > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768778#comment-13768778 ] Swapnil Ghike commented on KAFKA-1003: -- Created reviewboard https://reviews.apache.org/r/14161/ > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug >Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: (was: KAFKA-1003.patch) > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: KAFKA-1003.patch > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
Review Request 14161: Patch for KAFKA-1003
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14161/ --- Review request for kafka. Bugs: KAFKA-1003 https://issues.apache.org/jira/browse/KAFKA-1003 Repository: kafka Description --- test Diffs - core/src/main/scala/kafka/producer/ProducerConfig.scala 7947b18aceb297f51adc0edcb1a11a447ca83e5f Diff: https://reviews.apache.org/r/14161/diff/ Testing --- Thanks, Swapnil Ghike
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: (was: KAFKA-1003.patch) > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: (was: KAFKA-1003.patch) > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: KAFKA-1003.patch > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768770#comment-13768770 ] Swapnil Ghike commented on KAFKA-1003: -- Created reviewboard > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug >Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768771#comment-13768771 ] Swapnil Ghike commented on KAFKA-1003: -- Created reviewboard > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug >Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: (was: KAFKA-1003_2013-09-16_14:13:04.patch) > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager
[ https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-1003: - Attachment: KAFKA-1003.patch > ConsumerFetcherManager should pass clientId as metricsPrefix to > AbstractFetcherManager > -- > > Key: KAFKA-1003 > URL: https://issues.apache.org/jira/browse/KAFKA-1003 > Project: Kafka > Issue Type: Bug > Reporter: Swapnil Ghike > Assignee: Swapnil Ghike > Fix For: 0.8 > > Attachments: kafka-1003.patch > > > For consistency. We use clientId in the metric names elsewhere on clients. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
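The KAFKA-1003 updates above ask that fetcher metrics be prefixed with the client's clientId for consistency with other client-side metric names. A minimal sketch of that naming convention, assuming a hypothetical metricName helper and a hyphen separator (the exact separator is an assumption, not confirmed by the ticket):

```java
public class MetricNames {
    // Hypothetical helper: build a metric name prefixed with the clientId,
    // so fetcher metrics line up with other client metrics.
    static String metricName(String clientId, String metric) {
        return clientId + "-" + metric;
    }

    public static void main(String[] args) {
        System.out.println(metricName("my-consumer", "MaxLag")); // prints my-consumer-MaxLag
    }
}
```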
[jira] [Updated] (KAFKA-42) Support rebalancing the partitions with replication
[ https://issues.apache.org/jira/browse/KAFKA-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-42: --- Attachment: (was: KAFKA-42.patch) > Support rebalancing the partitions with replication > --- > > Key: KAFKA-42 > URL: https://issues.apache.org/jira/browse/KAFKA-42 > Project: Kafka > Issue Type: Bug > Components: core >Reporter: Jun Rao >Assignee: Neha Narkhede >Priority: Blocker > Labels: features > Fix For: 0.8 > > Attachments: kafka-42-v1.patch, kafka-42-v2.patch, kafka-42-v3.patch, > kafka-42-v4.patch, kafka-42-v5.patch > > Original Estimate: 240h > Remaining Estimate: 240h > > As new brokers are added, we need to support moving partition replicas from > one set of brokers to another, online. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (KAFKA-42) Support rebalancing the partitions with replication
[ https://issues.apache.org/jira/browse/KAFKA-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768747#comment-13768747 ] Swapnil Ghike commented on KAFKA-42: Created reviewboard > Support rebalancing the partitions with replication > --- > > Key: KAFKA-42 > URL: https://issues.apache.org/jira/browse/KAFKA-42 > Project: Kafka > Issue Type: Bug > Components: core >Reporter: Jun Rao >Assignee: Neha Narkhede >Priority: Blocker > Labels: features > Fix For: 0.8 > > Attachments: KAFKA-42.patch, kafka-42-v1.patch, kafka-42-v2.patch, > kafka-42-v3.patch, kafka-42-v4.patch, kafka-42-v5.patch > > Original Estimate: 240h > Remaining Estimate: 240h > > As new brokers are added, we need to support moving partition replicas from > one set of brokers to another, online. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (KAFKA-42) Support rebalancing the partitions with replication
[ https://issues.apache.org/jira/browse/KAFKA-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapnil Ghike updated KAFKA-42: --- Attachment: KAFKA-42.patch > Support rebalancing the partitions with replication > --- > > Key: KAFKA-42 > URL: https://issues.apache.org/jira/browse/KAFKA-42 > Project: Kafka > Issue Type: Bug > Components: core >Reporter: Jun Rao >Assignee: Neha Narkhede >Priority: Blocker > Labels: features > Fix For: 0.8 > > Attachments: KAFKA-42.patch, kafka-42-v1.patch, kafka-42-v2.patch, > kafka-42-v3.patch, kafka-42-v4.patch, kafka-42-v5.patch > > Original Estimate: 240h > Remaining Estimate: 240h > > As new brokers are added, we need to support moving partition replicas from > one set of brokers to another, online. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira