[jira] [Commented] (KAFKA-2920) Storage module
[ https://issues.apache.org/jira/browse/KAFKA-2920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15056278#comment-15056278 ] Andrii Biletskyi commented on KAFKA-2920: - The idea is to implement a pluggable interface for the Storage module, defined by https://cwiki.apache.org/confluence/display/KAFKA/KIP-30+-+Allow+for+brokers+to+have+plug-able+consensus+and+meta+data+storage+sub+systems (go to the Storage tab). The current ZkUtils should be transformed so that it becomes an implementation of this interface. There are still some unclear things, though, such as which exception contract we need to follow, e.g. when we try to update a value that doesn't exist or insert a value that already exists. > Storage module > -- > > Key: KAFKA-2920 > URL: https://issues.apache.org/jira/browse/KAFKA-2920 > Project: Kafka > Issue Type: Sub-task > Components: core > Reporter: Andrii Biletskyi > > Add Storage module and implement it on top of Zookeeper as a default version. > Replace ZkUtils calls with calls to the Storage interface. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
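A minimal sketch of what such a pluggable Storage interface and its exception contract could look like. All names here are illustrative assumptions, not taken from the KIP or any patch; the two failure cases raised in the comment (updating a missing value, inserting an existing one) are made explicit:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical Storage interface per the KIP-30 discussion; a Zookeeper-backed
// implementation would be the default, this in-memory one is for illustration.
interface Storage {
    void create(String path, String value);   // fails if path already exists
    void update(String path, String value);   // fails if path does not exist
    String fetch(String path);                // null if path does not exist
}

class InMemoryStorage implements Storage {
    private final Map<String, String> data = new HashMap<>();

    @Override
    public void create(String path, String value) {
        // Exception contract: inserting an existing value is an error.
        if (data.containsKey(path))
            throw new IllegalStateException("Node already exists: " + path);
        data.put(path, value);
    }

    @Override
    public void update(String path, String value) {
        // Exception contract: updating a missing value is an error.
        if (!data.containsKey(path))
            throw new IllegalStateException("Node does not exist: " + path);
        data.put(path, value);
    }

    @Override
    public String fetch(String path) {
        return data.get(path);
    }
}
```

Whether these cases should throw, return a boolean, or be silently ignored is exactly the open question in the comment; the sketch picks fail-fast exceptions only to make the contract visible.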
[jira] [Updated] (KAFKA-2921) Plug-able implementations support
[ https://issues.apache.org/jira/browse/KAFKA-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2921: Attachment: KIP-30-LE-WIP.patch > Plug-able implementations support > - > > Key: KAFKA-2921 > URL: https://issues.apache.org/jira/browse/KAFKA-2921 > Project: Kafka > Issue Type: Sub-task > Components: core > Reporter: Andrii Biletskyi > Attachments: KIP-30-LE-WIP.patch > > > Add infrastructure to support plug-able implementations in runtime. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2916) KIP-30 Plug-able interface for consensus and storage sub-systems
Andrii Biletskyi created KAFKA-2916: --- Summary: KIP-30 Plug-able interface for consensus and storage sub-systems Key: KAFKA-2916 URL: https://issues.apache.org/jira/browse/KAFKA-2916 Project: Kafka Issue Type: New Feature Components: core Reporter: Andrii Biletskyi Checklist for KIP-30 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-30+-+Allow+for+brokers+to+have+plug-able+consensus+and+meta+data+storage+sub+systems) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2918) GroupMembership module
Andrii Biletskyi created KAFKA-2918: --- Summary: GroupMembership module Key: KAFKA-2918 URL: https://issues.apache.org/jira/browse/KAFKA-2918 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Add GroupMembership interface and implement it on top of Zookeeper as a default version. Integrate implementation for: a) cluster (brokers) membership b) consumer group (High Level Consumer) membership -- This message was sent by Atlassian JIRA (v6.3.4#6332)
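One way such a GroupMembership interface could cover both use cases in the ticket (broker membership and consumer group membership) is sketched below; the method names are hypothetical, not from any patch:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical GroupMembership interface; a group is identified by a path-like
// name so the same abstraction serves brokers ("/brokers") and consumer groups.
interface GroupMembership {
    void join(String group, String memberId);
    void leave(String group, String memberId);
    Set<String> members(String group);
}

class InMemoryGroupMembership implements GroupMembership {
    private final Map<String, Set<String>> groups = new HashMap<>();

    @Override
    public void join(String group, String memberId) {
        groups.computeIfAbsent(group, g -> new HashSet<>()).add(memberId);
    }

    @Override
    public void leave(String group, String memberId) {
        Set<String> members = groups.get(group);
        if (members != null) members.remove(memberId);
    }

    @Override
    public Set<String> members(String group) {
        return groups.getOrDefault(group, new HashSet<>());
    }
}
```

In the Zookeeper-backed default, join/leave would map to creating and deleting ephemeral nodes, so a crashed member drops out automatically.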
[jira] [Created] (KAFKA-2917) LeaderElection module
Andrii Biletskyi created KAFKA-2917: --- Summary: LeaderElection module Key: KAFKA-2917 URL: https://issues.apache.org/jira/browse/KAFKA-2917 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Add LeaderElection interface and turn current ZookeeperLeaderElector into a default implementation -- This message was sent by Atlassian JIRA (v6.3.4#6332)
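A possible shape for the LeaderElection interface, with ZookeeperLeaderElector as the intended default implementation, might be the following. This is a single-process stand-in under assumed names; the compare-and-set mirrors the "first to create the ephemeral node wins" race in the Zookeeper version:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical LeaderElection interface; names are illustrative.
interface LeaderElection {
    boolean elect(String candidateId);  // true if this candidate is now leader
    String currentLeader();             // null if no leader has been elected
    void resign(String candidateId);    // clears leadership if held by candidate
}

class InMemoryLeaderElection implements LeaderElection {
    private final AtomicReference<String> leader = new AtomicReference<>();

    @Override
    public boolean elect(String candidateId) {
        // First candidate to set the reference wins; re-electing the current
        // leader is a no-op that still reports success.
        return leader.compareAndSet(null, candidateId)
                || candidateId.equals(leader.get());
    }

    @Override
    public String currentLeader() {
        return leader.get();
    }

    @Override
    public void resign(String candidateId) {
        leader.compareAndSet(candidateId, null);
    }
}
```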
[jira] [Created] (KAFKA-2919) ListenerRegistry
Andrii Biletskyi created KAFKA-2919: --- Summary: ListenerRegistry Key: KAFKA-2919 URL: https://issues.apache.org/jira/browse/KAFKA-2919 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Add ListenerRegistry interface and implement it on top of Zookeeper as a default version. Integrate it into Kafka instead of zookeeper data watchers. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
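Replacing raw Zookeeper data watchers with a ListenerRegistry could look roughly like this sketch (class and method names are assumptions for illustration): callers register a callback per path, and the storage layer fires the callbacks on change instead of clients re-arming one-shot ZK watches themselves.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical ListenerRegistry standing in for Zookeeper data watchers.
class ListenerRegistry {
    private final Map<String, List<BiConsumer<String, String>>> listeners =
            new HashMap<>();

    // Register a callback invoked with (path, newValue) on every change.
    void register(String path, BiConsumer<String, String> listener) {
        listeners.computeIfAbsent(path, p -> new ArrayList<>()).add(listener);
    }

    // Called by the storage layer when the data under `path` changes.
    void fireDataChange(String path, String newValue) {
        for (BiConsumer<String, String> l :
                listeners.getOrDefault(path, new ArrayList<>()))
            l.accept(path, newValue);
    }
}
```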
[jira] [Created] (KAFKA-2920) Storage module
Andrii Biletskyi created KAFKA-2920: --- Summary: Storage module Key: KAFKA-2920 URL: https://issues.apache.org/jira/browse/KAFKA-2920 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Add Storage module and implement it on top of Zookeeper as a default version. Replace ZkUtils calls with calls to Storage interface. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2921) Plug-able implementations support
Andrii Biletskyi created KAFKA-2921: --- Summary: Plug-able implementations support Key: KAFKA-2921 URL: https://issues.apache.org/jira/browse/KAFKA-2921 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Add infrastructure to support plug-able implementations in runtime. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2922) System and Replication tools verification
Andrii Biletskyi created KAFKA-2922: --- Summary: System and Replication tools verification Key: KAFKA-2922 URL: https://issues.apache.org/jira/browse/KAFKA-2922 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Go through System (https://cwiki.apache.org/confluence/display/KAFKA/System+Tools) and Replication tools (https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools) to verify (update/deprecate) which tools need to be abstracted from Zookeeper as the exclusive storage sub-system for Kafka. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-1694) kafka command line and centralized operations
[ https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984933#comment-14984933 ] Andrii Biletskyi commented on KAFKA-1694: - [~granthenke], it was decided to split the work into 3 phases: api, admin client, cli. Phase 1 was implemented under KAFKA-2229; the patch was moved to GitHub (https://github.com/apache/kafka/pull/223). There were some minor comments under this pull request; they were fixed, though not rebased. IMO it lacks some deeper review and maybe testing. In short, you can pick up this ticket. I'm happy to help close this issue ASAP too. If anything is needed (i.e. a rebase), let me know. > kafka command line and centralized operations > - > > Key: KAFKA-1694 > URL: https://issues.apache.org/jira/browse/KAFKA-1694 > Project: Kafka > Issue Type: Bug > Reporter: Joe Stein > Assignee: Andrii Biletskyi > Priority: Critical > Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, > KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, > KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, > KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, > KAFKA-1772_1802_1775_1774_v2.patch > > > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing
[ https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982679#comment-14982679 ] Andrii Biletskyi commented on KAFKA-2702: - It's been a while, but, yes, as far as I remember I added the required field because not all configs had default values, and we couldn't instantiate a Config unless all settings had a value - either the default one, or the one that came from the user (config file). > ConfigDef toHtmlTable() sorts in a way that is a bit confusing > -- > > Key: KAFKA-2702 > URL: https://issues.apache.org/jira/browse/KAFKA-2702 > Project: Kafka > Issue Type: Bug > Reporter: Gwen Shapira > Assignee: Grant Henke > Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html > > > Because we put everything without a default first (without prioritizing), > critical parameters get placed below low-priority ones when they both have > no defaults. Some parameters are without a default and optional (SASL server in > ConsumerConfig for instance). > Try printing ConsumerConfig parameters and see the mandatory group.id show up > as #15. > I suggest sorting the no-default parameters by priority as well, or perhaps > adding a "REQUIRED" category that gets printed first no matter what. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
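The interaction between required keys and defaults that the comment describes can be sketched with a tiny stand-in (this is not Kafka's actual ConfigDef, just an illustrative mini-version): a key with a default can always be resolved, while a required key with no default must come from the user or parsing fails.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative mini-ConfigDef showing why a "required" marker is needed:
// without a default, the config cannot be instantiated unless the user
// supplies the value.
class MiniConfigDef {
    private static final Object NO_DEFAULT = new Object();
    private final Map<String, Object> defaults = new HashMap<>();

    MiniConfigDef define(String name, Object defaultValue) {
        defaults.put(name, defaultValue);
        return this;
    }

    MiniConfigDef defineRequired(String name) {
        defaults.put(name, NO_DEFAULT);
        return this;
    }

    Map<String, Object> parse(Map<String, Object> props) {
        Map<String, Object> values = new HashMap<>();
        for (Map.Entry<String, Object> e : defaults.entrySet()) {
            if (props.containsKey(e.getKey()))
                values.put(e.getKey(), props.get(e.getKey()));   // user value
            else if (e.getValue() != NO_DEFAULT)
                values.put(e.getKey(), e.getValue());            // default value
            else
                throw new IllegalArgumentException(
                        "Missing required configuration: " + e.getKey());
        }
        return values;
    }
}
```

The sorting complaint in the ticket follows directly: "required" (no default) is a property orthogonal to importance, so ordering by has-default-first can push a critical required key like group.id below optional no-default keys.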
Re: Review Request 34766: Patch for KAFKA-2229
> On Aug. 10, 2015, 4:56 p.m., Grant Henke wrote: > > clients/src/main/java/org/apache/kafka/common/protocol/Errors.java, line 81 > > <https://reviews.apache.org/r/34766/diff/2/?file=995945#file995945line81> > > > > Should these new errors be added to kafka.common.ErrorMapping.scala? As I understand - no. There was a discussion on the mailing list about moving from scala+java request objects to java-only classes (which both server and client code should use). These suggestions also applied to all other objects used in client-server communication, like Errors, which is the counterpart of ErrorMapping. > On Aug. 10, 2015, 4:56 p.m., Grant Henke wrote: > > clients/src/main/java/org/apache/kafka/common/protocol/Errors.java, line 92 > > <https://reviews.apache.org/r/34766/diff/2/?file=995945#file995945line92> > > > > Looks like a copy-paste error in the exception message. Thanks, fixed in the PR. > On Aug. 10, 2015, 4:56 p.m., Grant Henke wrote: > > core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala, > > line 20 > > <https://reviews.apache.org/r/34766/diff/2/?file=995958#file995958line20> > > > > Is there a time where we would want to pass through a throwable? (Same > > for the other exceptions added) Not sure; this was just to follow a convention - other error classes have the same constructor methods. > On Aug. 10, 2015, 4:56 p.m., Grant Henke wrote: > > core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala, > > line 22 > > <https://reviews.apache.org/r/34766/diff/2/?file=995960#file995960line22> > > > > Is this needed? Should we always pass some message? (Same for the other > > exceptions added?) Same as above. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/#review94756 --- On June 30, 2015, 1:59 p.m., Andrii Biletskyi wrote: > > --- > This is an automatically generated e-mail. 
To reply, visit: > https://reviews.apache.org/r/34766/ > --- > > (Updated June 30, 2015, 1:59 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-2229 > https://issues.apache.org/jira/browse/KAFKA-2229 > > > Repository: kafka > > > Description > --- > > KIP-4 Phase 1 Rebase > > > Diffs > - > > checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 > clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java > PRE-CREATION > clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java > PRE-CREATION > clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java > b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 > clients/src/main/java/org/apache/kafka/common/protocol/Errors.java > 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 > clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java > 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f > clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java > 5d3d52859587ce0981d702f04042b0f6e1bc3704 > > clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java > PRE-CREATION > > clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java > 8b2aca85fa738180e5420985fddc39a4bf9681ea > core/src/main/scala/kafka/admin/AdminUtils.scala > f06edf41c732a7b794e496d0048b0ce6f897e72b > core/src/main/scala/kafka/api/RequestKeys.scala > 
155cb650e9cffe2c950326cfc25b1480cda819db > core/src/main/scala/kafka/common/InvalidPartitionsException.scala > PRE-CREATION > core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala > PRE-CREATION > core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala > PRE-
[jira] [Commented] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14742109#comment-14742109 ] Andrii Biletskyi commented on KAFKA-2229: - Sorry, I was on vacation, so I didn't have time to work on this issue. I think I have resolved all your comments; now I need to rebase the changes onto trunk and commit the corresponding fixes per your comments. It's been a while since the last rebase, but I still think it will be quicker if I do it. Your help in reviewing again after the rebase will be very appreciated. As I can see, there were some changes in the replica assignment code, which my changes use a lot; this part is pretty critical. Also, could you provide more details (or even recommendations) on your comment "This logic is fairly complex and difficult to track. I am not sure the best way to resolve it. Breaking it into smaller functions may help." -> I've followed up on it in the RB. Thanks, Andrii > Phase 1: Requests and KafkaApis > --- > > Key: KAFKA-2229 > URL: https://issues.apache.org/jira/browse/KAFKA-2229 > Project: Kafka > Issue Type: Sub-task > Reporter: Andrii Biletskyi > Assignee: Andrii Biletskyi > Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 34766: Patch for KAFKA-2229
> On Aug. 14, 2015, 3:48 a.m., Grant Henke wrote: > > core/src/main/scala/kafka/server/TopicCommandHelper.scala, lines 146-153 > > <https://reviews.apache.org/r/34766/diff/2/?file=995963#file995963line146> > > > > This logic is fairly complex and difficult to track. I am not sure the > > best way to resolve it. Breaking it into smaller functions may help. I agree it's hard to follow this logic; it's true the algorithm itself is complicated. Details and suggestions on how exactly we can make this part clearer would be helpful here. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/#review95376 --- On June 30, 2015, 1:59 p.m., Andrii Biletskyi wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/34766/ > --- > > (Updated June 30, 2015, 1:59 p.m.) > > > Review request for kafka. > > > Bugs: KAFKA-2229 > https://issues.apache.org/jira/browse/KAFKA-2229 > > > Repository: kafka > > > Description > --- > > KIP-4 Phase 1 Rebase > > > Diffs > - > > checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 > clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java > PRE-CREATION > clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java > PRE-CREATION > clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java > b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 > clients/src/main/java/org/apache/kafka/common/protocol/Errors.java > 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 > clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java > 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f > clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java > 5d3d52859587ce0981d702f04042b0f6e1bc3704 > > clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java > PRE-CREATION > > 
clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java > PRE-CREATION > > clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java > PRE-CREATION > > clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java > 8b2aca85fa738180e5420985fddc39a4bf9681ea > core/src/main/scala/kafka/admin/AdminUtils.scala > f06edf41c732a7b794e496d0048b0ce6f897e72b > core/src/main/scala/kafka/api/RequestKeys.scala > 155cb650e9cffe2c950326cfc25b1480cda819db > core/src/main/scala/kafka/common/InvalidPartitionsException.scala > PRE-CREATION > core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala > PRE-CREATION > core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala > PRE-CREATION > core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala > PRE-CREATION > > core/src/main/scala/kafka/common/ReassignPartitionsInProgressException.scala > PRE-CREATION > core/src/main/scala/kafka/server/KafkaApis.scala > ad6f05807c61c971e5e60d24bc0c87e989115961 > core/src/main/scala/kafka/server/TopicCommandHelper.scala PRE-CREATION > core/src/test/scala/unit/kafka/admin/AdminTest.scala > 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 > core/src/test/scala/unit/kafka/server/TopicCommandHelperTest.scala > PRE-CREATION > > Diff: https://reviews.apache.org/r/34766/diff/ > > > Testing > --- > > > Thanks, > > Andrii Biletskyi > >
Re: Review Request 34766: Patch for KAFKA-2229
On Aug. 10, 2015, 3:47 p.m., Grant Henke wrote: My apologies for not looking at this sooner, or suggesting this sooner. Given that this code change and scope is fairly large, would it be too much work to break the patch reviews out by each new protocol message? Then reviews can be more accessible, focus on one thing at a time, and the simple ones can get done much quicker. I am thinking starting with CreateTopic, then DeleteTopic, then AlterTopic may make sense. If you disagree feel free to say so and I will review this patch as is. Hey Grant. Not sure this will work well. First of all, this patch is already a part of one bigger feature, KIP-4 (which we decided to split into patches :)). Secondly and most importantly, the logic for handling all 3 requests relies on some base code. This means we would have to extract that common code (exceptions, some utils, other stuff) first and probably submit it as a separate patch, which I'd prefer not to do. But I'm very open to suggestions if it speeds up the review process. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/#review94749 --- On June 30, 2015, 1:59 p.m., Andrii Biletskyi wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/ --- (Updated June 30, 2015, 1:59 p.m.) Review request for kafka. 
Bugs: KAFKA-2229 https://issues.apache.org/jira/browse/KAFKA-2229 Repository: kafka Description --- KIP-4 Phase 1 Rebase Diffs - checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5d3d52859587ce0981d702f04042b0f6e1bc3704 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java PRE-CREATION clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 8b2aca85fa738180e5420985fddc39a4bf9681ea core/src/main/scala/kafka/admin/AdminUtils.scala f06edf41c732a7b794e496d0048b0ce6f897e72b core/src/main/scala/kafka/api/RequestKeys.scala 155cb650e9cffe2c950326cfc25b1480cda819db core/src/main/scala/kafka/common/InvalidPartitionsException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala PRE-CREATION 
core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala PRE-CREATION core/src/main/scala/kafka/common/ReassignPartitionsInProgressException.scala PRE-CREATION core/src/main/scala/kafka/server/KafkaApis.scala ad6f05807c61c971e5e60d24bc0c87e989115961 core/src/main/scala/kafka/server/TopicCommandHelper.scala PRE-CREATION core/src/test/scala/unit/kafka/admin/AdminTest.scala 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 core/src/test/scala/unit/kafka/server/TopicCommandHelperTest.scala PRE-CREATION Diff: https://reviews.apache.org/r/34766/diff/ Testing --- Thanks, Andrii Biletskyi
Re: Review Request 34766: Patch for KAFKA-2229
On Aug. 10, 2015, 3:47 p.m., Grant Henke wrote: My apologies for not looking at this sooner, or suggesting this sooner. Given that this code change and scope is fairly large, would it be too much work to break the patch reviews out by each new protocol message? Then reviews can be more accessible, focus on one thing at a time, and the simple ones can get done much quicker. I am thinking starting with CreateTopic, then DeleteTopic, then AlterTopic may make sense. If you disagree feel free to say so and I will review this patch as is. Andrii Biletskyi wrote: Hey Grant. Not sure this will work well. First of all, this patch is already a part of one bigger feature, KIP-4 (which we decided to split into patches :)). Secondly and most importantly, the logic for handling all 3 requests relies on some base code. This means we would have to extract that common code (exceptions, some utils, other stuff) first and probably submit it as a separate patch, which I'd prefer not to do. But I'm very open to suggestions if it speeds up the review process. Grant Henke wrote: Thanks Andrii, that's understandable. I will pull down and start reviewing. Since this patch is over a month old, it does not look like it applies cleanly to trunk. Can you please update it so it applies cleanly? Ismael Juma wrote: Andrii, you may also consider submitting a GitHub PR with the rebased branch, see https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes for the instructions. You can stick with Review Board if you prefer it though. Both approaches are still supported. @Grant, ok, I will rebase and submit a pull request tomorrow. @Ismael, thanks for the suggestion, I will try this out. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/#review94749 --- On June 30, 2015, 1:59 p.m., Andrii Biletskyi wrote: --- This is an automatically generated e-mail. 
To reply, visit: https://reviews.apache.org/r/34766/ --- (Updated June 30, 2015, 1:59 p.m.) Review request for kafka. Bugs: KAFKA-2229 https://issues.apache.org/jira/browse/KAFKA-2229 Repository: kafka Description --- KIP-4 Phase 1 Rebase Diffs - checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5d3d52859587ce0981d702f04042b0f6e1bc3704 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java PRE-CREATION clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 8b2aca85fa738180e5420985fddc39a4bf9681ea core/src/main/scala/kafka/admin/AdminUtils.scala f06edf41c732a7b794e496d0048b0ce6f897e72b core/src/main/scala/kafka/api/RequestKeys.scala 155cb650e9cffe2c950326cfc25b1480cda819db core/src/main/scala/kafka/common/InvalidPartitionsException.scala PRE-CREATION 
core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala PRE-CREATION core/src/main/scala/kafka/common/ReassignPartitionsInProgressException.scala PRE-CREATION core/src/main/scala/kafka/server/KafkaApis.scala ad6f05807c61c971e5e60d24bc0c87e989115961 core/src/main/scala/kafka/server/TopicCommandHelper.scala PRE-CREATION core/src/test/scala/unit/kafka/admin/AdminTest.scala
Re: Review Request 34766: Patch for KAFKA-2229
On Aug. 10, 2015, 5:01 p.m., Grant Henke wrote: core/src/main/scala/kafka/server/TopicCommandHelper.scala, line 35 https://reviews.apache.org/r/34766/diff/2/?file=995963#file995963line35 A lot of this code/functionality exists in kafka.admin.TopicCommand. Is this duplicating a lot of code/functionality? Could they both be refactored to use the same core code? IMO no, they cannot be refactored. The main problem is that the code in kafka.admin.TopicCommand is very tightly coupled with CLI logic (like printing results, validating inputs), and the error handling logic is also very different. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/#review94759 --- On June 30, 2015, 1:59 p.m., Andrii Biletskyi wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/ --- (Updated June 30, 2015, 1:59 p.m.) Review request for kafka. Bugs: KAFKA-2229 https://issues.apache.org/jira/browse/KAFKA-2229 Repository: kafka Description --- KIP-4 Phase 1 Rebase Diffs - checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5d3d52859587ce0981d702f04042b0f6e1bc3704 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java 
PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java PRE-CREATION clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 8b2aca85fa738180e5420985fddc39a4bf9681ea core/src/main/scala/kafka/admin/AdminUtils.scala f06edf41c732a7b794e496d0048b0ce6f897e72b core/src/main/scala/kafka/api/RequestKeys.scala 155cb650e9cffe2c950326cfc25b1480cda819db core/src/main/scala/kafka/common/InvalidPartitionsException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala PRE-CREATION core/src/main/scala/kafka/common/ReassignPartitionsInProgressException.scala PRE-CREATION core/src/main/scala/kafka/server/KafkaApis.scala ad6f05807c61c971e5e60d24bc0c87e989115961 core/src/main/scala/kafka/server/TopicCommandHelper.scala PRE-CREATION core/src/test/scala/unit/kafka/admin/AdminTest.scala 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 core/src/test/scala/unit/kafka/server/TopicCommandHelperTest.scala PRE-CREATION Diff: https://reviews.apache.org/r/34766/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Commented] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616401#comment-14616401 ] Andrii Biletskyi commented on KAFKA-2310: - [~jjkoshy] yes, it might be a duplicate. Although we didn't agree there on what we want to do - a forced re-elect command, or doing it via config (as Joe mentioned in his comment). This patch is the implementation of the latter. Not sure what we should do though - mark as duplicate, link, or maybe convert to a sub-task of KAFKA-1778. Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, KAFKA-2310_0.8.2.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2310) Add config to prevent broker becoming controller
Andrii Biletskyi created KAFKA-2310: --- Summary: Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
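The proposed _controller.eligibility_ flag can be pictured as a simple config gate in front of controller election. The sketch below is illustrative only: the class and method names are assumptions based on this ticket's description (the attached patch adds a NoOpLeaderElector for ineligible brokers), not Kafka's actual implementation.

```java
import java.util.Properties;

// Hypothetical sketch of the proposed config gate. "controller.eligibility"
// defaults to true so existing brokers keep their current behavior of being
// able to win controller election.
public class ControllerEligibility {
    private final boolean eligible;

    public ControllerEligibility(Properties brokerProps) {
        this.eligible = Boolean.parseBoolean(
                brokerProps.getProperty("controller.eligibility", "true"));
    }

    // An ineligible broker would skip ZooKeeper leader election entirely,
    // e.g. by being wired with a no-op elector instead of the real one.
    public boolean shouldParticipateInElection() {
        return eligible;
    }
}
```

With this shape, operators could reserve a lightly loaded broker as the controller by setting the flag to false everywhere else.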
[jira] [Updated] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2310: Attachment: (was: KAFKA-2310_0.8.2.patch) Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, KAFKA-2310_0.8.2.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2310: Attachment: KAFKA-2310_0.8.2.patch Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, KAFKA-2310_0.8.2.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2310: Attachment: KAFKA-2310_0.8.1.patch Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, KAFKA-2310_0.8.2.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2310: Attachment: KAFKA-2310_0.8.2.patch Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.2.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2310: Attachment: KAFKA-2310.patch Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2310: Status: Patch Available (was: Open) Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2310) Add config to prevent broker becoming controller
[ https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615111#comment-14615111 ] Andrii Biletskyi commented on KAFKA-2310: - Created reviewboard https://reviews.apache.org/r/36203/diff/ against branch origin/trunk Add config to prevent broker becoming controller Key: KAFKA-2310 URL: https://issues.apache.org/jira/browse/KAFKA-2310 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2310.patch The goal is to be able to specify which cluster brokers can serve as a controller and which cannot. This way it will be possible to reserve particular, not overloaded with partitions and other operations, broker as controller. Proposed to add config _controller.eligibility_ defaulted to true (for backward compatibility, since now any broker can become a controller) Patch will be available for trunk, 0.8.2 and 0.8.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 36203: Patch for KAFKA-2310
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/36203/ --- Review request for kafka. Bugs: KAFKA-2310 https://issues.apache.org/jira/browse/KAFKA-2310 Repository: kafka Description --- KAFKA-2310 - Add config to prevent broker becoming controller Diffs - core/src/main/scala/kafka/controller/KafkaController.scala 36350579b16027359d237b64699003358704ac6f core/src/main/scala/kafka/server/KafkaConfig.scala c1f0ccad4900d74e41936fae4c6c2235fe9314fe core/src/main/scala/kafka/server/NoOpLeaderElector.scala PRE-CREATION core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 98a5b042a710d3c1064b0379db1d152efc9eabee Diff: https://reviews.apache.org/r/36203/diff/ Testing --- Thanks, Andrii Biletskyi
Re: Review Request 34766: Patch for KAFKA-2229
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/ --- (Updated June 30, 2015, 1:59 p.m.) Review request for kafka. Bugs: KAFKA-2229 https://issues.apache.org/jira/browse/KAFKA-2229 Repository: kafka Description (updated) --- KIP-4 Phase 1 Rebase Diffs (updated) - checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5d3d52859587ce0981d702f04042b0f6e1bc3704 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java PRE-CREATION clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 8b2aca85fa738180e5420985fddc39a4bf9681ea core/src/main/scala/kafka/admin/AdminUtils.scala f06edf41c732a7b794e496d0048b0ce6f897e72b core/src/main/scala/kafka/api/RequestKeys.scala 155cb650e9cffe2c950326cfc25b1480cda819db 
core/src/main/scala/kafka/common/InvalidPartitionsException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala PRE-CREATION core/src/main/scala/kafka/common/ReassignPartitionsInProgressException.scala PRE-CREATION core/src/main/scala/kafka/server/KafkaApis.scala ad6f05807c61c971e5e60d24bc0c87e989115961 core/src/main/scala/kafka/server/TopicCommandHelper.scala PRE-CREATION core/src/test/scala/unit/kafka/admin/AdminTest.scala 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 core/src/test/scala/unit/kafka/server/TopicCommandHelperTest.scala PRE-CREATION Diff: https://reviews.apache.org/r/34766/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Updated] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2229: Attachment: KAFKA-2229_2015-06-30_16:59:17.patch Phase 1: Requests and KafkaApis --- Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608307#comment-14608307 ] Andrii Biletskyi commented on KAFKA-2229: - Updated reviewboard https://reviews.apache.org/r/34766/diff/ against branch origin/trunk Phase 1: Requests and KafkaApis --- Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608309#comment-14608309 ] Andrii Biletskyi commented on KAFKA-2229: - Rebased to include KAFKA-2195. Phase 1: Requests and KafkaApis --- Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2229.patch, KAFKA-2229_2015-06-30_16:59:17.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14588607#comment-14588607 ] Andrii Biletskyi commented on KAFKA-2195: - Updated reviewboard https://reviews.apache.org/r/34415/diff/ against branch origin/trunk Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Jun Rao Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch, KAFKA-2195_2015-06-15_09:54:53.patch, KAFKA-2195_2015-06-16_22:28:19.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
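The versioning contract in the KAFKA-2195 description can be illustrated with a toy decoder. This is not Kafka's actual AbstractRequest API: the class, method names, and the two "schemas" below are invented stand-ins, meant only to show why the parser must be told which schema version the sender used, with the latest version as the default.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of version-aware parsing (names are hypothetical).
public class VersionedParser {
    static final int LATEST_VERSION = 1;

    // Pretend wire format: v0 encodes a single int payload,
    // v1 prefixes the payload with one flag byte.
    public static int parsePayload(ByteBuffer buf, int versionId) {
        if (versionId >= 1) {
            buf.get(); // consume the v1 flag byte before the payload
        }
        return buf.getInt();
    }

    // When no version is supplied, assume the latest schema
    // (analogous to "by default it's the latest version" above).
    public static int parsePayload(ByteBuffer buf) {
        return parsePayload(buf, LATEST_VERSION);
    }
}
```

Decoding v0 bytes with the v1 schema (or vice versa) would misread the payload, which is exactly why getRequest needs an explicit versionId.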
Re: Review Request 34415: Patch for KAFKA-2195
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/ --- (Updated June 16, 2015, 7:28 p.m.) Review request for kafka. Bugs: KAFKA-2195 https://issues.apache.org/jira/browse/KAFKA-2195 Repository: kafka Description (updated) --- KAFKA-2195 - Code review KAFKA-2195 - Code review 2 KAFKA-2195 - Code review 3 (cosmetic fixes) Diffs (updated) - clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5e5308ec0e333179a9abbf4f3b750ea25ab57967 clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java 04b90bfe62456a6739fe0299f1564dbd1850fe58 clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 8686d83aa52e435c6adafbe9ff4bd1602281072a clients/src/main/java/org/apache/kafka/common/requests/HeartbeatRequest.java 51d081fa296fd7c170a90a634d432067afcfe772 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 6795682258e6b329cc3caa245b950b4dbcf0cf45 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java 8d418cd24cf6d105e9687a4a2492b8ed13738338 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 19267ee8aad5a2f5a84cecd6fc563f00329d5035 clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 7e0ce159a2ddd041fc06116038bd5831bbca278b clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 44e2ce61899889601b6aee71fa7f7ddb4a65a255 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 8bf6cbb79a92b0878096e099ec9169d21e6d7023 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java deec1fa480d5a5c5884a1c007b076aa64e902472 clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java fabeae3083a8ea55cdacbb9568f3847ccd85bab4 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java e3cc1967e407b64cc734548c19e30de700b64ba8 core/src/main/scala/kafka/network/RequestChannel.scala 
357d8b97cb336857500213efade77950833c2096 core/src/main/scala/kafka/server/KafkaApis.scala d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd Diff: https://reviews.apache.org/r/34415/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Updated] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2195: Attachment: KAFKA-2195_2015-06-16_22:28:19.patch Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Jun Rao Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch, KAFKA-2195_2015-06-15_09:54:53.patch, KAFKA-2195_2015-06-16_22:28:19.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 34415: Patch for KAFKA-2195
On June 16, 2015, 3:52 p.m., Jun Rao wrote: clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java, lines 49-50 https://reviews.apache.org/r/34415/diff/3/?file=984580#file984580line49 Perhaps it's better to include the request name in the error message. Ditto for the rest of the requests. Fixed. On June 16, 2015, 3:52 p.m., Jun Rao wrote: core/src/main/scala/kafka/network/RequestChannel.scala, line 69 https://reviews.apache.org/r/34415/diff/3/?file=984592#file984592line69 To be consistent, no need to include () in calling apiVersion. Ditto below. Done. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/#review88074 --- On June 16, 2015, 7:28 p.m., Andrii Biletskyi wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/ --- (Updated June 16, 2015, 7:28 p.m.) Review request for kafka. Bugs: KAFKA-2195 https://issues.apache.org/jira/browse/KAFKA-2195 Repository: kafka Description --- KAFKA-2195 - Code review KAFKA-2195 - Code review 2 KAFKA-2195 - Code review 3 (cosmetic fixes) Diffs - clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5e5308ec0e333179a9abbf4f3b750ea25ab57967 clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java 04b90bfe62456a6739fe0299f1564dbd1850fe58 clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 8686d83aa52e435c6adafbe9ff4bd1602281072a clients/src/main/java/org/apache/kafka/common/requests/HeartbeatRequest.java 51d081fa296fd7c170a90a634d432067afcfe772 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 6795682258e6b329cc3caa245b950b4dbcf0cf45 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java 8d418cd24cf6d105e9687a4a2492b8ed13738338 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 19267ee8aad5a2f5a84cecd6fc563f00329d5035 
clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 7e0ce159a2ddd041fc06116038bd5831bbca278b clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 44e2ce61899889601b6aee71fa7f7ddb4a65a255 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 8bf6cbb79a92b0878096e099ec9169d21e6d7023 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java deec1fa480d5a5c5884a1c007b076aa64e902472 clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java fabeae3083a8ea55cdacbb9568f3847ccd85bab4 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java e3cc1967e407b64cc734548c19e30de700b64ba8 core/src/main/scala/kafka/network/RequestChannel.scala 357d8b97cb336857500213efade77950833c2096 core/src/main/scala/kafka/server/KafkaApis.scala d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd Diff: https://reviews.apache.org/r/34415/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Updated] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2195: Attachment: KAFKA-2195_2015-06-15_09:54:53.patch Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Jun Rao Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch, KAFKA-2195_2015-06-15_09:54:53.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 34415: Patch for KAFKA-2195
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/ --- (Updated June 15, 2015, 6:55 a.m.) Review request for kafka. Bugs: KAFKA-2195 https://issues.apache.org/jira/browse/KAFKA-2195 Repository: kafka Description (updated) --- KAFKA-2195 - Code review KAFKA-2195 - Code review 2 Diffs (updated) - clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5e5308ec0e333179a9abbf4f3b750ea25ab57967 clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java 04b90bfe62456a6739fe0299f1564dbd1850fe58 clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 8686d83aa52e435c6adafbe9ff4bd1602281072a clients/src/main/java/org/apache/kafka/common/requests/HeartbeatRequest.java 51d081fa296fd7c170a90a634d432067afcfe772 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 6795682258e6b329cc3caa245b950b4dbcf0cf45 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java 8d418cd24cf6d105e9687a4a2492b8ed13738338 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 19267ee8aad5a2f5a84cecd6fc563f00329d5035 clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 7e0ce159a2ddd041fc06116038bd5831bbca278b clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 44e2ce61899889601b6aee71fa7f7ddb4a65a255 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 8bf6cbb79a92b0878096e099ec9169d21e6d7023 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java deec1fa480d5a5c5884a1c007b076aa64e902472 clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java fabeae3083a8ea55cdacbb9568f3847ccd85bab4 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java e3cc1967e407b64cc734548c19e30de700b64ba8 core/src/main/scala/kafka/network/RequestChannel.scala 357d8b97cb336857500213efade77950833c2096 
core/src/main/scala/kafka/server/KafkaApis.scala d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd Diff: https://reviews.apache.org/r/34415/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Commented] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585515#comment-14585515 ] Andrii Biletskyi commented on KAFKA-2195: - Updated reviewboard https://reviews.apache.org/r/34415/diff/ against branch origin/trunk Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Jun Rao Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch, KAFKA-2195_2015-06-15_09:54:53.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Joel, Aditya, I believe we don't need another thread to do voting since affected items are not related to the core proposed changes. I agree people can explicitly down-vote in case of concerns about the things that changed. Thanks, Andrii Biletskyi On Fri, Jun 12, 2015 at 8:24 PM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Aside from the things I mentioned, I don't think there were other changes. I'll mark this as adopted since there don't appear to be any concerns. Aditya From: Joel Koshy [jjkosh...@gmail.com] Sent: Thursday, June 11, 2015 1:28 PM To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) Discussion aside, was there any significant material change besides the additions below? If so, then we can avoid the overhead of another vote unless someone wants to down-vote these changes. Joel On Thu, Jun 11, 2015 at 06:36:36PM +, Aditya Auradkar wrote: Andrii, Do we need a new voting thread for this KIP? The last round of votes had 3 binding +1's but there's been a fair amount of discussion since then. Aditya From: Aditya Auradkar Sent: Thursday, June 11, 2015 10:32 AM To: dev@kafka.apache.org Subject: RE: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) I've made two changes to the document: - Removed the TMR evolution piece since we agreed to retain this. - Added two new API's to the admin client spec. (Alter and Describe config). Please review. Aditya From: Ashish Singh [asi...@cloudera.com] Sent: Friday, May 29, 2015 8:36 AM To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) +1 on discussing this on next KIP hangout. I will update KIP-24 before that. On Fri, May 29, 2015 at 3:40 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Guys, I won't be able to attend next meeting. 
But in the latest patch for KIP-4 Phase 1 I didn't even evolve TopicMetadataRequest to v1 since we won't be able to change config with AlterTopicRequest, hence with this patch TMR will still return isr. Taking this into account I think yes - it would be good to fix the ISR issue, although I didn't consider it to be a critical one (isr was part of TMR from the very beginning and almost no code relies on this piece of request). Thanks, Andrii Biletskyi On Fri, May 29, 2015 at 8:50 AM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Thanks. Perhaps we should leave TMR unchanged for now. Should we discuss this during the next hangout? Aditya From: Jun Rao [j...@confluent.io] Sent: Thursday, May 28, 2015 5:32 PM To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) There is a reasonable use case of ISR in KAFKA-2225. Basically, for economical reasons, we may want to let a consumer fetch from a replica in ISR that's in the same zone. In order to support that, it will be convenient to have TMR return the correct ISR for the consumer to choose. So, perhaps it's worth fixing the ISR inconsistency issue in KAFKA-1367 (there is some new discussion there on what it takes to fix this). If we do that, we can leave TMR unchanged. Thanks, Jun On Tue, May 26, 2015 at 1:13 PM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Andrii, I made a few edits to this document as discussed in the KIP-21 thread. https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations With these changes, the only difference between TopicMetadataResponse_V1 and V0 is the removal of the ISR field. I've altered the KIP with the assumption that this is a good enough reason by itself to evolve the request/response protocol. Any concerns there? 
Thanks, Aditya From: Mayuresh Gharat [gharatmayures...@gmail.com] Sent: Thursday, May 21, 2015 8:29 PM To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) Hi Jun, Thanks a lot. I get it now. Point 4) will actually enable clients who don't want a topic created with default partitions: if it does not exist, they can manually create the topic with their own configs (#partitions). Thanks, Mayuresh On Thu, May 21, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote: Mayuresh
[jira] [Commented] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14584234#comment-14584234 ] Andrii Biletskyi commented on KAFKA-2195: - Yes, thanks, I'll address it by Monday. Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Jun Rao Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Guys, I won't be able to attend next meeting. But in the latest patch for KIP-4 Phase 1 I didn't even evolve TopicMetadataRequest to v1 since we won't be able to change config with AlterTopicRequest, hence with this patch TMR will still return isr. Taking this into account I think yes - it would be good to fix the ISR issue, although I didn't consider it to be a critical one (isr was part of TMR from the very beginning and almost no code relies on this piece of request). Thanks, Andrii Biletskyi On Fri, May 29, 2015 at 8:50 AM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Thanks. Perhaps we should leave TMR unchanged for now. Should we discuss this during the next hangout? Aditya From: Jun Rao [j...@confluent.io] Sent: Thursday, May 28, 2015 5:32 PM To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) There is a reasonable use case of ISR in KAFKA-2225. Basically, for economical reasons, we may want to let a consumer fetch from a replica in ISR that's in the same zone. In order to support that, it will be convenient to have TMR return the correct ISR for the consumer to choose. So, perhaps it's worth fixing the ISR inconsistency issue in KAFKA-1367 (there is some new discussion there on what it takes to fix this). If we do that, we can leave TMR unchanged. Thanks, Jun On Tue, May 26, 2015 at 1:13 PM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Andrii, I made a few edits to this document as discussed in the KIP-21 thread. https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations With these changes, the only difference between TopicMetadataResponse_V1 and V0 is the removal of the ISR field. I've altered the KIP with the assumption that this is a good enough reason by itself to evolve the request/response protocol. Any concerns there? 
Thanks, Aditya From: Mayuresh Gharat [gharatmayures...@gmail.com] Sent: Thursday, May 21, 2015 8:29 PM To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2) Hi Jun, Thanks a lot. I get it now. Point 4) will actually enable clients who don't want a topic created with default partitions to avoid auto-creation if it does not exist, and instead manually create the topic with their own configs (#partitions). Thanks, Mayuresh On Thu, May 21, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote: Mayuresh, The current plan is the following. 1. Add TMR v1, which still triggers auto topic creation. 2. Change the consumer client to TMR v1. Change the producer client to use TMR v1 and on UnknownTopicException, issue TopicCreateRequest to explicitly create the topic with the default server side partitions and replicas. 3. At some later time after the new clients are released and deployed, disable auto topic creation in TMR v1. This will make sure consumers never create new topics. 4. If needed, we can add a new config in the producer to control whether TopicCreateRequest should be issued or not on UnknownTopicException. If this is disabled and the topic doesn't exist, send will fail and the user is expected to create the topic manually. Thanks, Jun On Thu, May 21, 2015 at 5:27 PM, Mayuresh Gharat gharatmayures...@gmail.com wrote: Hi, I had a question about TopicMetadata Request. Currently the way it works is : 1) Suppose a topic T1 does not exist. 2) Client wants to produce data to T1 using producer P1. 3) Since T1 does not exist, P1 issues a TopicMetadata request to Kafka. This in turn creates the default number of partitions. The number of partitions is a cluster-wide config. 4) Same goes for a consumer. If the topic does not exist, a new topic will be created when the consumer issues a TopicMetadata request. Here are 2 use cases where it might not be well suited, with the auto-creation flag for topics turned ON: 
a) Some clients might not want to create a topic with the default number of partitions but with a lower number of partitions. Currently in a multi-tenant environment this is not possible without changing the cluster-wide default config. b) Some clients might want to just check if the topic exists or not, but currently the topic gets created automatically using the default number of partitions. Here are some ideas to address this : 1) The way this can be addressed is that the TopicMetadata request should have a way to specify whether it should only check if the topic exists, or check and create a topic with a given number of partitions. If the number of partitions is not specified, use the default cluster-wide config
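Jun's producer-side plan above (steps 2 and 4) can be sketched as follows. `TopicCreateRequest` and the new producer config come from the thread; `Admin`, `TopicAutoCreatePolicy`, and `ensureTopic` are illustrative names, not real Kafka APIs:

```java
// Illustrative sketch of the proposed producer behavior: on an unknown
// topic, explicitly create it instead of relying on TMR auto-creation.
public class TopicAutoCreatePolicy {
    public interface Admin {
        boolean topicExists(String topic);
        void createTopic(String topic);   // stand-in for TopicCreateRequest
    }

    private final Admin admin;
    private final boolean createOnUnknownTopic; // the proposed producer config (step 4)

    public TopicAutoCreatePolicy(Admin admin, boolean createOnUnknownTopic) {
        this.admin = admin;
        this.createOnUnknownTopic = createOnUnknownTopic;
    }

    /** Returns true if the send may proceed, creating the topic first if allowed. */
    public boolean ensureTopic(String topic) {
        if (admin.topicExists(topic))
            return true;
        if (!createOnUnknownTopic)
            return false;               // send fails; user creates the topic manually
        admin.createTopic(topic);       // default server-side partitions/replicas
        return true;
    }
}
```

With the flag disabled, this gives clients exactly the "check but don't create" behavior Mayuresh asks for in case b).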
[jira] [Resolved] (KAFKA-2228) Delete me
[ https://issues.apache.org/jira/browse/KAFKA-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi resolved KAFKA-2228. - Resolution: Duplicate Delete me - Key: KAFKA-2228 URL: https://issues.apache.org/jira/browse/KAFKA-2228 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Fix For: 0.8.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (KAFKA-2227) Delete me
[ https://issues.apache.org/jira/browse/KAFKA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi resolved KAFKA-2227. - Resolution: Duplicate Delete me - Key: KAFKA-2227 URL: https://issues.apache.org/jira/browse/KAFKA-2227 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Fix For: 0.8.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2227) Delete me
[ https://issues.apache.org/jira/browse/KAFKA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2227: Summary: Delete me (was: Phase 1: Requests and KafkaApis) Delete me - Key: KAFKA-2227 URL: https://issues.apache.org/jira/browse/KAFKA-2227 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Fix For: 0.8.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (KAFKA-2228) Delete me
[ https://issues.apache.org/jira/browse/KAFKA-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi reassigned KAFKA-2228: --- Assignee: (was: Andrii Biletskyi) Delete me - Key: KAFKA-2228 URL: https://issues.apache.org/jira/browse/KAFKA-2228 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Fix For: 0.8.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2228) Delete me
[ https://issues.apache.org/jira/browse/KAFKA-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2228: Summary: Delete me (was: Phase 1: Requests and KafkaApis) Delete me - Key: KAFKA-2228 URL: https://issues.apache.org/jira/browse/KAFKA-2228 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2229: Status: Patch Available (was: Open) Phase 1: Requests and KafkaApis --- Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2229.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2228) Phase 1: Requests and KafkaApis
Andrii Biletskyi created KAFKA-2228: --- Summary: Phase 1: Requests and KafkaApis Key: KAFKA-2228 URL: https://issues.apache.org/jira/browse/KAFKA-2228 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (KAFKA-2229) Phase 1: Requests and KafkaApis
Andrii Biletskyi created KAFKA-2229: --- Summary: Phase 1: Requests and KafkaApis Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2229: Attachment: KAFKA-2229.patch Phase 1: Requests and KafkaApis --- Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2229.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 34766: Patch for KAFKA-2229
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34766/ --- Review request for kafka. Bugs: KAFKA-2229 https://issues.apache.org/jira/browse/KAFKA-2229 Repository: kafka Description --- KIP-4 Admin tools - Phase 1 Diffs - checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 clients/src/main/java/org/apache/kafka/common/ConfigEntry.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/PartitionReplicaAssignment.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/TopicConfigInfo.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java b39e9bb53dd51f1ce931691e23aabc9ebaebf1e7 clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5e5308ec0e333179a9abbf4f3b750ea25ab57967 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java PRE-CREATION clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java PRE-CREATION clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java e3cc1967e407b64cc734548c19e30de700b64ba8 core/src/main/scala/kafka/admin/AdminUtils.scala f06edf41c732a7b794e496d0048b0ce6f897e72b core/src/main/scala/kafka/api/RequestKeys.scala ef7a86ec3324028496d6bb7c7c6fec7d7d19d64e core/src/main/scala/kafka/common/InvalidPartitionsException.scala PRE-CREATION 
core/src/main/scala/kafka/common/InvalidReplicaAssignmentException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidReplicationFactorException.scala PRE-CREATION core/src/main/scala/kafka/common/InvalidTopicConfigurationException.scala PRE-CREATION core/src/main/scala/kafka/common/ReassignPartitionsInProgressException.scala PRE-CREATION core/src/main/scala/kafka/server/KafkaApis.scala 387e387998fc3a6c9cb585dab02b5f77b0381fbf core/src/main/scala/kafka/server/TopicCommandHelper.scala PRE-CREATION core/src/test/scala/unit/kafka/admin/AdminTest.scala efb2f8e79b3faef78722774b951fea828cd50374 core/src/test/scala/unit/kafka/server/TopicCommandHelperTest.scala PRE-CREATION Diff: https://reviews.apache.org/r/34766/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Commented] (KAFKA-2229) Phase 1: Requests and KafkaApis
[ https://issues.apache.org/jira/browse/KAFKA-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562877#comment-14562877 ] Andrii Biletskyi commented on KAFKA-2229: - Created reviewboard https://reviews.apache.org/r/34766/diff/ against branch origin/trunk Phase 1: Requests and KafkaApis --- Key: KAFKA-2229 URL: https://issues.apache.org/jira/browse/KAFKA-2229 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2229.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2195: Status: Patch Available (was: In Progress) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557829#comment-14557829 ] Andrii Biletskyi commented on KAFKA-2195: - Updated reviewboard https://reviews.apache.org/r/34415/diff/ against branch origin/trunk Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2195: Attachment: KAFKA-2195_2015-05-24_22:48:55.patch Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi reassigned KAFKA-2195: --- Assignee: Jun Rao (was: Andrii Biletskyi) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Jun Rao Fix For: 0.8.3 Attachments: KAFKA-2195.patch, KAFKA-2195_2015-05-24_22:48:55.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 34415: Patch for KAFKA-2195
On May 22, 2015, 4:37 p.m., Jun Rao wrote: clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java, lines 45-62 https://reviews.apache.org/r/34415/diff/1/?file=963952#file963952line45 Could we change all requests to use parse(buffer, versionId)? Agree. On May 22, 2015, 4:37 p.m., Jun Rao wrote: clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java, lines 35-38 https://reviews.apache.org/r/34415/diff/1/?file=963952#file963952line35 Could we just remove this api and use the new one with versionId? Agree, makes sense to me. On May 22, 2015, 4:37 p.m., Jun Rao wrote: clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java, line 49 https://reviews.apache.org/r/34415/diff/1/?file=963953#file963953line49 It would be better if we can make this a complete sentence. Something like: Version x for ConsumerMetadataResponse is not valid. Valid versions are 0 to 1. Okay, will fix the error message. On May 22, 2015, 4:37 p.m., Jun Rao wrote: clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java, lines 70-76 https://reviews.apache.org/r/34415/diff/1/?file=963960#file963960line70 For responses, we actually don't have to maintain the compatibility of all the constructors. The reason is that a client always constructs a response from a struct. We can freely change the signature of other constructors since only the server will use them. Okay, I'll remove the deprecation notice. - Andrii --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/#review84850 --- On May 19, 2015, 4:09 p.m., Andrii Biletskyi wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/ --- (Updated May 19, 2015, 4:09 p.m.) Review request for kafka. 
Bugs: KAFKA-2195 https://issues.apache.org/jira/browse/KAFKA-2195 Repository: kafka Description --- KAFKA-2195 - Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Diffs - clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5e5308ec0e333179a9abbf4f3b750ea25ab57967 clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java 04b90bfe62456a6739fe0299f1564dbd1850fe58 clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 8686d83aa52e435c6adafbe9ff4bd1602281072a clients/src/main/java/org/apache/kafka/common/requests/HeartbeatRequest.java 51d081fa296fd7c170a90a634d432067afcfe772 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 6795682258e6b329cc3caa245b950b4dbcf0cf45 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java fd9c545c99058ad3fbe3b2c55ea8b6ea002f5a51 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 19267ee8aad5a2f5a84cecd6fc563f00329d5035 clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 7e0ce159a2ddd041fc06116038bd5831bbca278b clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 44e2ce61899889601b6aee71fa7f7ddb4a65a255 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 8bf6cbb79a92b0878096e099ec9169d21e6d7023 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java deec1fa480d5a5c5884a1c007b076aa64e902472 clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java fabeae3083a8ea55cdacbb9568f3847ccd85bab4 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java e3cc1967e407b64cc734548c19e30de700b64ba8 core/src/main/scala/kafka/network/RequestChannel.scala 1d0024c8f0c2ab0efa6d8cfca6455877a6ed8026 core/src/main/scala/kafka/server/KafkaApis.scala 387e387998fc3a6c9cb585dab02b5f77b0381fbf Diff: https://reviews.apache.org/r/34415/diff/ Testing --- Thanks, 
Andrii Biletskyi
[jira] [Commented] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550682#comment-14550682 ] Andrii Biletskyi commented on KAFKA-2195: - Created reviewboard https://reviews.apache.org/r/34415/diff/ against branch origin/trunk Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2195.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
[ https://issues.apache.org/jira/browse/KAFKA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2195: Status: Patch Available (was: Open) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Fix For: 0.8.3 Attachments: KAFKA-2195.patch This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [VOTE] KIP-21 Dynamic Configuration
Hi, Sorry I wasn't able to participate. I don't have objections about removing config changes from AlterTopic (as I understand both AddedConfig and DeletedConfig) - you are welcome to update the KIP page. Thanks, Andrii Biletskyi On Tue, May 19, 2015 at 11:40 PM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Updating the discussion with the latest comments. 1. We discussed adding 2 new API's (AlterConfig and DescribeConfig). I'll update KIP-21 with details on these. 2. Discussed during the KIP hangout. We are in agreement. (1) has a dependency on KIP-4 being completed. Rest of the work in the KIP can be implemented independently. Any concerns if we tackle it as two separate work items implementation wise? We also discussed changing the AlterTopic command in KIP-4 to not include config changes. Instead, all config changes will pass through the newly proposed AlterConfig. If no-one objects, I can make some changes to KIP-4 to reflect this. Aditya From: Jay Kreps [jay.kr...@gmail.com] Sent: Tuesday, May 19, 2015 10:51 AM To: dev@kafka.apache.org Subject: Re: [VOTE] KIP-21 Dynamic Configuration Hey Aditya, Two comments: 1. Yeah we need to reconcile this with the APIs in KIP-4. I think it does make sense to allow setting config during topic creation. I agree with your summary that having alter topic and alter config may be confusing, but there are also some non-config changes such as replication factor and partition count that alter topic can carry out. What is the final state you are proposing? 2. This is implementation related so probably can be removed from the KIP entirely, but you seem to be proposing a separate config manager for each config override type. Should we just generalize TopicConfigManager to be ConfigOverrideManager and have it handle all the override types we will have? I think I may just be unclear on what you are proposing... 
-Jay On Mon, May 18, 2015 at 1:34 PM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: Yeah, that was just a typo. I've fixed it. Thanks for calling it out. In KIP-4, I believe we have 3 types of requests: CreateTopic, AlterTopic and DeleteTopic. The topic configs are a sub-type of the Create and Alter commands. I think it would be nice to simply have a AlterConfig command that can alter any type of config rather than having a specific ClientConfig. AlterConfig = [ConfigType [AddedConfigEntry] [DeletedConfig]] ConfigType = string AddedConfigEntry = ConfigKey ConfigValue ConfigKey = string ConfigValue = string DeletedConfig = string The downside of this approach is that we will have 2 separate ways of changing topic configs (AlterTopic and AlterConfig). While a general AlterConfig only makes sense if we plan to have more than two types of entity configs.. it's definitely more future proof. Thoughts? Aditya From: Todd Palino [tpal...@gmail.com] Sent: Monday, May 18, 2015 12:39 PM To: dev@kafka.apache.org Subject: Re: [VOTE] KIP-21 Dynamic Configuration Agree with Jun here on the JSON format. I think your intention was likely to have actual JSON here and it was just a typo in the wiki? -Todd On Mon, May 18, 2015 at 12:07 PM, Jun Rao j...@confluent.io wrote: Aditya, Another thing to consider. In KIP-4, we are adding a new RPC request to change and retrieve topic configs. Do we want to add a similar RPC request to change configs per client id? If so, do we want to introduce a separate new request or have a combined new request for both topic and client id level config changes? A minor point in the wiki, for the json format in ZK, we should change {X1=Y1, X2=Y2..} to a json map, right? Thanks, Jun On Mon, May 18, 2015 at 9:48 AM, Aditya Auradkar aaurad...@linkedin.com.invalid wrote: https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration Aditya
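The AlterConfig layout Aditya sketches above ([ConfigType [AddedConfigEntry] [DeletedConfig]]) maps naturally onto a small value class. This is only an illustration of the proposed shape under discussion, not the final wire format; `applyTo` is a hypothetical helper:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model of the proposed AlterConfig request body.
public class AlterConfig {
    public final String configType;                 // e.g. "topic" or "client"
    public final Map<String, String> addedConfigs;  // ConfigKey -> ConfigValue
    public final List<String> deletedConfigs;       // keys to remove

    public AlterConfig(String configType,
                       Map<String, String> addedConfigs,
                       List<String> deletedConfigs) {
        this.configType = configType;
        this.addedConfigs = addedConfigs;
        this.deletedConfigs = deletedConfigs;
    }

    /** Apply this change to a current config snapshot, returning a new map. */
    public Map<String, String> applyTo(Map<String, String> current) {
        Map<String, String> result = new HashMap<>(current);
        result.putAll(addedConfigs);
        deletedConfigs.forEach(result::remove);
        return result;
    }
}
```

Because `configType` is a free-form string, the same request shape would cover topic configs, client configs, and any future entity type, which is the "future-proof" property discussed above.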
Re: [Discussion] Using Client Requests and Responses in Server
Guys, I've just uploaded the patch which aligns versionId in getErrorResponse. (https://issues.apache.org/jira/browse/KAFKA-2195) Would be great if someone could review it because it's a blocker for other requests including MetadataRequest which I have to update to V1 for KIP-4. Thanks, Andrii Biletskyi On Tue, May 19, 2015 at 12:23 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: No worries, I think it's hard to foresee all nuances before actually implementing/writing code. I also have a feeling there will be some other issues with requests versioning so I plan to finish end-to-end MetadataRequest_V1 to ensure we don't miss anything in terms of AbstractRequest/Response API, before uploading the respective patch. Thanks, Andrii Biletskyi On Tue, May 19, 2015 at 12:14 AM, Gwen Shapira gshap...@cloudera.com wrote: Sorry for the confusion regarding errorResponses... On Mon, May 18, 2015 at 9:10 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Jun, Thanks for the explanation. I believe my understanding is close to what you have written. I see, I still think that this approach is somewhat limiting (what if you add field of type int in V1 and then remove another field of type int in V2 - method overloading for V0 and V2 constructors will not compile) but in any case we need to follow this approach. Ok, then I believe I will have to remove all error-constructors which were added as part of this sub-task. Instead in getErrorResponse(versionId, throwable) I will pattern-match on version and get the right response version by calling the constructor with the right arguments. Also one small issue with this approach. Currently we create MetadataRequest from a Cluster object. As you remember in KIP-4 we planned to evolve it to include topic-level configs. We agreed to add this to Cluster class directly. In this case it will break our pattern - constructor per version, since the constructor won't be changed (simply accepting cluster instance in both cases). 
What is the preferable solution in this case? I can explicitly add topicConfigs param to the signature of the V1 constructor but it seems inconsistent because Cluster would already encapsulate topicConfigs at that point. Thanks, Andrii Biletskyi On Mon, May 18, 2015 at 8:28 PM, Jun Rao j...@confluent.io wrote: Andri, Let me clarify a bit how things work now. You can see if this fits your need or if it can be improved. If you look at OffsetCommitRequest, our convention is the following. 1. The request object can be constructed from a set of required fields. The client typically constructs a request object this way. There will be one constructor for each version. The version id is not specified explicitly since it's implied by the input parameters. Every time we introduce a new version, we will add a new constructor of this form. We will leave the old constructors as they are, but mark them as deprecated. Code compiled with the old Kafka jar will still work with the new Kafka jar before we actually remove the deprecated constructors. 2. The request object can also be constructed from a struct. This is typically used by the broker to convert network bytes into a request object. Currently, the constructor looks for specific fields in the struct to distinguish which version it corresponds to. 3. In both cases, the request object always tries to reflect the fields in the latest version. We use the following convention when mapping older versions to the latest version in the request object: If a new field is added, we try to use a default for the missing field in the old version. If a field is removed, we simply ignore it in the old version. Thanks, Jun On Mon, May 18, 2015 at 8:41 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Hi all, I started working on it and it seems we are going the wrong way. So it appears we need to distinguish constructors by versions in request/response (so we can set correct schema). 
Request/Response classes will look like: class SomeRequest extends AbstractRequest { SomeRequest(versionId, request-specific params ) // for the latest version SomeRequest(request-specific params) } Now, what if in SomeRequest_V1 we remove some field from the schema? Well, we can leave constructor signature and simply check programmatically if set schema contains given field and if no simply ignore it. Thus mentioned constructor can support V0 V1. Now, suppose in V2 we add some field - there's nothing we can do, we need to add new parameter and thus add new constructor: SomeRequest(versionId, request-specific params for V2) but it's a bit strange - to introduce constructors which may fail in runtime-only because
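Jun's convention from earlier in this thread (one constructor per version with the version implied by the parameters, old constructors kept but deprecated, plus a struct-based path for the broker) can be sketched as below. `SomeRequest` is the thread's placeholder name; the `timestamp` field and the `Map`-based struct are illustrative assumptions:

```java
import java.util.Map;

// Sketch of the constructor-per-version convention described in the thread.
public class SomeRequest {
    public final int versionId;
    public final String topic;
    public final long timestamp;   // field added in v1; defaulted for v0

    /** v0 constructor: kept for compatibility, marked deprecated. */
    @Deprecated
    public SomeRequest(String topic) {
        this(0, topic, -1L);       // default for the field missing in v0
    }

    /** v1 (latest) constructor; versionId is implied by the parameters. */
    public SomeRequest(String topic, long timestamp) {
        this(1, topic, timestamp);
    }

    private SomeRequest(int versionId, String topic, long timestamp) {
        this.versionId = versionId;
        this.topic = topic;
        this.timestamp = timestamp;
    }

    /** Broker side: build from a decoded struct, distinguishing versions by fields. */
    public static SomeRequest fromStruct(Map<String, Object> struct) {
        String topic = (String) struct.get("topic");
        return struct.containsKey("timestamp")
                ? new SomeRequest(topic, (Long) struct.get("timestamp"))
                : new SomeRequest(topic);
    }
}
```

Andrii's objection applies to the private constructor here: whenever a version adds a field, the versionId-less public constructor must change, so client code compiled against the old signature stops compiling.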
Review Request 34415: Patch for KAFKA-2195
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34415/ --- Review request for kafka. Bugs: KAFKA-2195 https://issues.apache.org/jira/browse/KAFKA-2195 Repository: kafka Description --- KAFKA-2195 - Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Diffs - clients/src/main/java/org/apache/kafka/common/requests/AbstractRequest.java 5e5308ec0e333179a9abbf4f3b750ea25ab57967 clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java 04b90bfe62456a6739fe0299f1564dbd1850fe58 clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 8686d83aa52e435c6adafbe9ff4bd1602281072a clients/src/main/java/org/apache/kafka/common/requests/HeartbeatRequest.java 51d081fa296fd7c170a90a634d432067afcfe772 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 6795682258e6b329cc3caa245b950b4dbcf0cf45 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java fd9c545c99058ad3fbe3b2c55ea8b6ea002f5a51 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 19267ee8aad5a2f5a84cecd6fc563f00329d5035 clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java 7e0ce159a2ddd041fc06116038bd5831bbca278b clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 44e2ce61899889601b6aee71fa7f7ddb4a65a255 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 8bf6cbb79a92b0878096e099ec9169d21e6d7023 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java deec1fa480d5a5c5884a1c007b076aa64e902472 clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java fabeae3083a8ea55cdacbb9568f3847ccd85bab4 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java e3cc1967e407b64cc734548c19e30de700b64ba8 core/src/main/scala/kafka/network/RequestChannel.scala 1d0024c8f0c2ab0efa6d8cfca6455877a6ed8026 
core/src/main/scala/kafka/server/KafkaApis.scala 387e387998fc3a6c9cb585dab02b5f77b0381fbf Diff: https://reviews.apache.org/r/34415/diff/ Testing --- Thanks, Andrii Biletskyi
Re: [Discussion] Using Client Requests and Responses in Server
Hi all, I started working on it and it seems we are going the wrong way. So it appears we need to distinguish constructors by versions in request/response (so we can set correct schema). Request/Response classes will look like: class SomeRequest extends AbstractRequest { SomeRequest(versionId, request-specific params ) // for the latest version SomeRequest(request-specific params) } Now, what if in SomeRequest_V1 we remove some field from the schema? Well, we can leave constructor signature and simply check programmatically if set schema contains given field and if no simply ignore it. Thus mentioned constructor can support V0 V1. Now, suppose in V2 we add some field - there's nothing we can do, we need to add new parameter and thus add new constructor: SomeRequest(versionId, request-specific params for V2) but it's a bit strange - to introduce constructors which may fail in runtime-only because you used the wrong constructor for your request version. Overall in my opinion such approach depicts we are trying to give clients factory-like methods but implemented as class constructors... Another thing is about versionId-less constructor (used for the latest version). Again, suppose in V1 we extend schema with additional value, we will have to change constructor without versionId, because this becomes the latest version. But would it be considered backward-compatible? Client code that uses V0 and upgrades will not compile in this case. Thoughts? Thanks, Andrii Biletskyi On Fri, May 15, 2015 at 4:31 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Okay, I can pick that. I'll create sub-task under KAFKA-2044. Thanks, Andrii Biletskyi On Fri, May 15, 2015 at 4:27 PM, Gwen Shapira gshap...@cloudera.com wrote: Agree that you need version in getErrorResponse too (so you'll get the correct error), which means you'll need to add versionId to constructors of every response object... 
You'll want to keep two interfaces, one with version and one with CURR_VERSION as default, so you won't need to modify every single call... On Fri, May 15, 2015 at 4:03 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Correct, I think we are on the same page. This way we can fix the RequestChannel part (where it uses AbstractRequest.getRequest). But would it be okay to add versionId to the AbstractRequest.getErrorResponse signature too? I'm a bit lost with all those Abstract... objects hierarchy and not sure whether it's the right solution. Thanks, Andrii Biletskyi On Fri, May 15, 2015 at 3:47 PM, Gwen Shapira gshap...@cloudera.com wrote: I agree, we currently don't handle versions correctly when de-serializing into java objects. This will be an issue for every req/resp we move to use the java objects. It looks like this requires: 1. Add versionId parameter to all parse functions in Java req/resp objects 2. Modify getRequest to pass it along 3. Modify RequestChannel to get the version out of the header and use it when de-serializing the body. Did I get that correct? I want to make sure we are talking about the same issue. Gwen On Fri, May 15, 2015 at 1:45 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Gwen, I didn't find this in answers above so apologies if this was discussed. It's about the way we would like to handle request versions. As I understood from Jun's answer we generally should try using the same java object while evolving the request. I believe the only example of an evolved request now, OffsetCommitRequest, follows this approach. I'm trying to evolve MetadataRequest to the next version as part of KIP-4 and not sure the current AbstractRequest api (which is the basis for requests ported to Java) is sufficient. The problem is: in order to deserialize bytes into the correct object you need to know its version. Suppose KafkaApi serves OffsetCommitRequestV0 and V2 (current).
For such cases the OffsetCommitRequest class has two parse methods:

    public static OffsetCommitRequest parse(ByteBuffer buffer, int versionId)
    public static OffsetCommitRequest parse(ByteBuffer buffer)

The latter simply picks the current schema version. Now AbstractRequest.getRequest, which is the entry point for deserializing requests for KafkaApi, matches only on RequestHeader.apiKey (and thus uses the second OffsetCommitRequest parse method), which is not sufficient because we also need RequestHeader.apiVersion in the case of an old request version. The same problem appears in AbstractRequest.getErrorResponse(Throwable e) - to construct the right error response object we need to know which apiVersion to respond to. I think this can affect other tasks under KAFKA-1927 - replacing separate RQ/RP, so maybe it makes sense
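The fix the thread converges on - dispatching on apiVersion as well as apiKey - can be sketched as follows. This is an illustrative toy, not Kafka's actual code: the class, the two pretend schemas, and the key numbering are invented for the example.

```java
import java.nio.ByteBuffer;

// Sketch only: dispatch deserialization on both apiKey and apiVersion from
// the request header, instead of assuming the latest schema version.
public class VersionedDispatch {
    // Pretend schemas: V0 carries one int field, V1 carries two.
    static String parseV0(ByteBuffer b) { return "V0(" + b.getInt() + ")"; }
    static String parseV1(ByteBuffer b) { return "V1(" + b.getInt() + "," + b.getInt() + ")"; }

    // The change discussed in the thread: pass apiVersion down to parse().
    static String getRequest(int apiKey, int apiVersion, ByteBuffer buffer) {
        if (apiKey == 0) { // e.g. an OffsetCommit-like request
            switch (apiVersion) {
                case 0: return parseV0(buffer);
                case 1: return parseV1(buffer);
                default: throw new IllegalArgumentException("unknown version " + apiVersion);
            }
        }
        throw new IllegalArgumentException("unknown apiKey " + apiKey);
    }

    public static void main(String[] args) {
        ByteBuffer v0 = ByteBuffer.allocate(4);
        v0.putInt(42);
        v0.flip();
        // With only apiKey available, this request would wrongly be parsed as V1.
        System.out.println(getRequest(0, 0, v0));
    }
}
```

Matching only on apiKey (as AbstractRequest.getRequest did) corresponds to always taking the default branch for the latest version, which misreads bytes sent by an older client.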
Re: [Discussion] Using Client Requests and Responses in Server
No worries, I think it's hard to foresee all nuances before actually implementing/writing code. I also have a feeling there will be some other issues with request versioning so I plan to finish end-to-end MetadataRequest_V1 to ensure we don't miss anything in terms of the AbstractRequest/Response API, before uploading the respective patch. Thanks, Andrii Biletskyi On Tue, May 19, 2015 at 12:14 AM, Gwen Shapira gshap...@cloudera.com wrote: Sorry for the confusion regarding errorResponses... On Mon, May 18, 2015 at 9:10 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Jun, Thanks for the explanation. I believe my understanding is close to what you have written. I see; I still think that this approach is somewhat limiting (what if you add a field of type int in V1 and then remove another field of type int in V2 - method overloading for the V0 and V2 constructors will not compile) but in any case we need to follow this approach. Ok, then I believe I will have to remove all error-constructors which were added as part of this sub-task. Instead, in getErrorResponse(versionId, throwable) I will pattern-match on version and get the right response version by calling the constructor with the right arguments. Also, one small issue with this approach. Currently we create MetadataRequest from a Cluster object. As you remember, in KIP-4 we planned to evolve it to include topic-level configs. We agreed to add this to the Cluster class directly. In this case it will break our pattern - constructor per version - since the constructor won't be changed (simply accepting a cluster instance in both cases). What is the preferred solution in this case? I can explicitly add a topicConfigs param to the signature of the V1 constructor but it seems inconsistent because Cluster would already encapsulate topicConfigs at that point. Thanks, Andrii Biletskyi On Mon, May 18, 2015 at 8:28 PM, Jun Rao j...@confluent.io wrote: Andrii, Let me clarify a bit how things work now.
You can see if this fits your need or if it can be improved. If you look at OffsetCommitRequest, our convention is the following. 1. The request object can be constructed from a set of required fields. The client typically constructs a request object this way. There will be one constructor for each version. The version id is not specified explicitly since it's implied by the input parameters. Every time we introduce a new version, we will add a new constructor of this form. We will leave the old constructors as they are, but mark them as deprecated. Code compiled with the old Kafka jar will still work with the new Kafka jar before we actually remove the deprecated constructors. 2. The request object can also be constructed from a struct. This is typically used by the broker to convert network bytes into a request object. Currently, the constructor looks for specific fields in the struct to distinguish which version it corresponds to. 3. In both cases, the request object always tries to reflect the fields in the latest version. We use the following convention when mapping older versions to the latest version in the request object: If a new field is added, we try to use a default for the missing field in the old version. If a field is removed, we simply ignore it in the old version. Thanks, Jun On Mon, May 18, 2015 at 8:41 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Hi all, I started working on it and it seems we are going the wrong way. So it appears we need to distinguish constructors by versions in request/response (so we can set correct schema). Request/Response classes will look like: class SomeRequest extends AbstractRequest { SomeRequest(versionId, request-specific params ) // for the latest version SomeRequest(request-specific params) } Now, what if in SomeRequest_V1 we remove some field from the schema? Well, we can leave constructor signature and simply check programmatically if set schema contains given field and if no simply ignore it. 
Thus mentioned constructor can support V0 V1. Now, suppose in V2 we add some field - there's nothing we can do, we need to add new parameter and thus add new constructor: SomeRequest(versionId, request-specific params for V2) but it's a bit strange - to introduce constructors which may fail in runtime-only because you used the wrong constructor for your request version. Overall in my opinion such approach depicts we are trying to give clients factory-like methods but implemented as class constructors... Another thing is about versionId-less constructor (used for the latest version). Again, suppose in V1 we extend schema with additional value, we will have to change constructor
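Jun's convention from this thread - one constructor per version, with the version implied by the parameters, old constructors deprecated but kept, and defaults filled in for fields missing from old versions - might look roughly like this. The class and field names (groupId, memberId) are hypothetical, chosen only to make the pattern concrete:

```java
// Sketch of the constructor-per-version convention described in the thread.
// V0 takes (groupId); V1 adds a new field (memberId) that gets a default
// when the request comes from a V0-style caller.
public class SomeRequest {
    public static final String DEFAULT_MEMBER_ID = "";

    private final String groupId;
    private final String memberId; // the object always reflects the latest version

    /** V0 constructor: kept for binary/source compatibility, marked deprecated. */
    @Deprecated
    public SomeRequest(String groupId) {
        this(groupId, DEFAULT_MEMBER_ID); // missing new field gets a default
    }

    /** V1 (latest) constructor; the version is implied by the parameter list. */
    public SomeRequest(String groupId, String memberId) {
        this.groupId = groupId;
        this.memberId = memberId;
    }

    public String groupId() { return groupId; }
    public String memberId() { return memberId; }
}
```

Code compiled against the old constructor keeps working after an upgrade, which addresses the backward-compatibility worry raised above; Andrii's objection about two versions with clashing parameter types (so overloads collide) remains a real limitation of the pattern.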
Re: [Discussion] Using Client Requests and Responses in Server
Gwen, I didn't find this in answers above so apologies if this was discussed. It's about the way we would like to handle request versions. As I understood from Jun's answer we generally should try using the same java object while evolving the request. I believe the only example of an evolved request now, OffsetCommitRequest, follows this approach. I'm trying to evolve MetadataRequest to the next version as part of KIP-4 and not sure the current AbstractRequest api (which is the basis for requests ported to Java) is sufficient. The problem is: in order to deserialize bytes into the correct object you need to know its version. Suppose KafkaApi serves OffsetCommitRequestV0 and V2 (current). For such cases the OffsetCommitRequest class has two parse methods: public static OffsetCommitRequest parse(ByteBuffer buffer, int versionId) AND public static OffsetCommitRequest parse(ByteBuffer buffer) The latter simply picks the current schema version. Now AbstractRequest.getRequest, which is the entry point for deserializing requests for KafkaApi, matches only on RequestHeader.apiKey (and thus uses the second OffsetCommitRequest parse method), which is not sufficient because we also need RequestHeader.apiVersion in the case of an old request version. The same problem appears in AbstractRequest.getErrorResponse(Throwable e) - to construct the right error response object we need to know which apiVersion to respond to. I think this can affect other tasks under KAFKA-1927 - replacing separate RQ/RP, so maybe it makes sense to decide/fix it once. Thanks, Andrii Biletskyi On Wed, Mar 25, 2015 at 12:42 AM, Gwen Shapira gshap...@cloudera.com wrote: OK, I posted a working patch on KAFKA-2044 and https://reviews.apache.org/r/32459/diff/. There are a few decisions there that can be up for discussion (factory method on AbstractRequestResponse, the new handleErrors in request API), but as far as support for o.a.k.common requests in core goes, it does what it needs to do. Please review!
Gwen On Tue, Mar 24, 2015 at 10:59 AM, Gwen Shapira gshap...@cloudera.com wrote: Hi, I uploaded a (very) preliminary patch with my idea. One thing that's missing: RequestResponse had a handleError method that all requests implemented, typically generating an appropriate error Response for the request and sending it along. It's used by KafkaApis to handle all protocol errors for valid requests that are not handled elsewhere. AbstractRequestResponse doesn't have such a method. I can, of course, add it. But before I jump into this, I'm wondering if there was another plan on handling Api errors. Gwen On Mon, Mar 23, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote: I think what you are saying is that in RequestChannel, we can start generating header/body for new request types and leave requestObj null. For existing requests, header/body will be null initially. Gradually, we can migrate each type of request by populating header/body, instead of requestObj. This makes sense to me since it serves two purposes: (1) not polluting the code base with duplicated request/response objects for new types of requests and (2) allowing the refactoring of existing requests to be done in smaller pieces. Could you try that approach and perhaps just migrate one existing request type (e.g. HeartBeatRequest) as an example? We probably need to rewind the buffer after reading the requestId when deserializing the header (since the header includes the request id). Thanks, Jun On Mon, Mar 23, 2015 at 4:52 PM, Gwen Shapira gshap...@cloudera.com wrote: I'm thinking of a different approach that will not fix everything but will allow adding new requests without code duplication (and therefore unblock KIP-4): RequestChannel.request currently takes a buffer and parses it into an old request object. Since the objects are byte-compatible, we should be able to parse existing requests into both old and new objects. New requests will only be parsed into new objects.
Basically:

    val requestId = buffer.getShort()
    if (requestId in keyToNameAndDeserializerMap) {
      requestObj = RequestKeys.deserializerForKey(requestId)(buffer)
      header: RequestHeader = RequestHeader.parse(buffer)
      body: Struct = ProtoUtils.currentRequestSchema(apiKey).read(buffer).asInstanceOf[Struct]
    } else {
      requestObj = null
      header: RequestHeader = RequestHeader.parse(buffer)
      body: Struct = ProtoUtils.currentRequestSchema(apiKey).read(buffer).asInstanceOf[Struct]
    }

This way existing KafkaApis will keep working as normal. The new Apis can implement just the new header/body requests. We'll do the same on the send side: BoundedByteBufferSend can have a constructor that takes header/body instead of just a response object. Does that make sense? Once we have this in, we can move to: * Adding the missing
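Jun's remark above about rewinding the buffer after reading the requestId (because the header schema itself begins with the api key, so the header parser must see those two bytes again) can be illustrated with a small self-contained Java sketch; the method name is illustrative:

```java
import java.nio.ByteBuffer;

// Sketch: peek at the two-byte requestId to choose a deserializer, then
// restore the buffer position so header parsing re-reads the id.
public class PeekRequestId {
    static short peekRequestId(ByteBuffer buffer) {
        int start = buffer.position();
        short requestId = buffer.getShort(); // advances the position by 2
        buffer.position(start);              // rewind: header parsing needs the id too
        return requestId;
    }

    public static void main(String[] args) {
        ByteBuffer b = ByteBuffer.allocate(6);
        b.putShort((short) 3).putInt(99);
        b.flip();
        short id = peekRequestId(b);
        // position is back at 0, so RequestHeader.parse-style code would work
        System.out.println(id + " " + b.position());
    }
}
```

Without the rewind, the header parser would start two bytes into the buffer and misinterpret every subsequent field.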
Re: [Discussion] Using Client Requests and Responses in Server
Correct, I think we are on the same page. This way we can fix the RequestChannel part (where it uses AbstractRequest.getRequest). But would it be okay to add versionId to the AbstractRequest.getErrorResponse signature too? I'm a bit lost with all those Abstract... objects hierarchy and not sure whether it's the right solution. Thanks, Andrii Biletskyi On Fri, May 15, 2015 at 3:47 PM, Gwen Shapira gshap...@cloudera.com wrote: I agree, we currently don't handle versions correctly when de-serializing into java objects. This will be an issue for every req/resp we move to use the java objects. It looks like this requires: 1. Add versionId parameter to all parse functions in Java req/resp objects 2. Modify getRequest to pass it along 3. Modify RequestChannel to get the version out of the header and use it when de-serializing the body. Did I get that correct? I want to make sure we are talking about the same issue. Gwen On Fri, May 15, 2015 at 1:45 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Gwen, I didn't find this in answers above so apologies if this was discussed. It's about the way we would like to handle request versions. As I understood from Jun's answer we generally should try using the same java object while evolving the request. I believe the only example of an evolved request now, OffsetCommitRequest, follows this approach. I'm trying to evolve MetadataRequest to the next version as part of KIP-4 and not sure the current AbstractRequest api (which is the basis for requests ported to Java) is sufficient. The problem is: in order to deserialize bytes into the correct object you need to know its version. Suppose KafkaApi serves OffsetCommitRequestV0 and V2 (current). For such cases the OffsetCommitRequest class has two parse methods: public static OffsetCommitRequest parse(ByteBuffer buffer, int versionId) AND public static OffsetCommitRequest parse(ByteBuffer buffer) The latter simply picks the current schema version.
Now AbstractRequest.getRequest, which is the entry point for deserializing requests for KafkaApi, matches only on RequestHeader.apiKey (and thus uses the second OffsetCommitRequest parse method), which is not sufficient because we also need RequestHeader.apiVersion in the case of an old request version. The same problem appears in AbstractRequest.getErrorResponse(Throwable e) - to construct the right error response object we need to know which apiVersion to respond to. I think this can affect other tasks under KAFKA-1927 - replacing separate RQ/RP, so maybe it makes sense to decide/fix it once. Thanks, Andrii Biletskyi On Wed, Mar 25, 2015 at 12:42 AM, Gwen Shapira gshap...@cloudera.com wrote: OK, I posted a working patch on KAFKA-2044 and https://reviews.apache.org/r/32459/diff/. There are a few decisions there that can be up for discussion (factory method on AbstractRequestResponse, the new handleErrors in request API), but as far as support for o.a.k.common requests in core goes, it does what it needs to do. Please review! Gwen On Tue, Mar 24, 2015 at 10:59 AM, Gwen Shapira gshap...@cloudera.com wrote: Hi, I uploaded a (very) preliminary patch with my idea. One thing that's missing: RequestResponse had a handleError method that all requests implemented, typically generating an appropriate error Response for the request and sending it along. It's used by KafkaApis to handle all protocol errors for valid requests that are not handled elsewhere. AbstractRequestResponse doesn't have such a method. I can, of course, add it. But before I jump into this, I'm wondering if there was another plan on handling Api errors. Gwen On Mon, Mar 23, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote: I think what you are saying is that in RequestChannel, we can start generating header/body for new request types and leave requestObj null. For existing requests, header/body will be null initially. Gradually, we can migrate each type of request by populating header/body, instead of requestObj.
This makes sense to me since it serves two purposes (1) not polluting the code base with duplicated request/response objects for new types of requests and (2) allowing the refactoring of existing requests to be done in smaller pieces. Could you try that approach and perhaps just migrate one existing request type (e.g. HeartBeatRequest) as an example? We probably need to rewind the buffer after reading the requestId when deserializing the header (since the header includes the request id). Thanks, Jun On Mon, Mar 23, 2015 at 4:52 PM, Gwen Shapira gshap...@cloudera.com wrote: I'm thinking of a different approach, that will not fix everything, but will allow
Re: [Discussion] Using Client Requests and Responses in Server
Okay, I can pick that. I'll create a sub-task under KAFKA-2044. Thanks, Andrii Biletskyi On Fri, May 15, 2015 at 4:27 PM, Gwen Shapira gshap...@cloudera.com wrote: Agree that you need version in getErrorResponse too (so you'll get the correct error), which means you'll need to add versionId to constructors of every response object... You'll want to keep two interfaces, one with version and one with CURR_VERSION as default, so you won't need to modify every single call... On Fri, May 15, 2015 at 4:03 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Correct, I think we are on the same page. This way we can fix the RequestChannel part (where it uses AbstractRequest.getRequest). But would it be okay to add versionId to the AbstractRequest.getErrorResponse signature too? I'm a bit lost with all those Abstract... objects hierarchy and not sure whether it's the right solution. Thanks, Andrii Biletskyi On Fri, May 15, 2015 at 3:47 PM, Gwen Shapira gshap...@cloudera.com wrote: I agree, we currently don't handle versions correctly when de-serializing into java objects. This will be an issue for every req/resp we move to use the java objects. It looks like this requires: 1. Add versionId parameter to all parse functions in Java req/resp objects 2. Modify getRequest to pass it along 3. Modify RequestChannel to get the version out of the header and use it when de-serializing the body. Did I get that correct? I want to make sure we are talking about the same issue. Gwen On Fri, May 15, 2015 at 1:45 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Gwen, I didn't find this in answers above so apologies if this was discussed. It's about the way we would like to handle request versions. As I understood from Jun's answer we generally should try using the same java object while evolving the request. I believe the only example of an evolved request now, OffsetCommitRequest, follows this approach.
I'm trying to evolve MetadataRequest to the next version as part of KIP-4 and not sure the current AbstractRequest api (which is the basis for requests ported to Java) is sufficient. The problem is: in order to deserialize bytes into the correct object you need to know its version. Suppose KafkaApi serves OffsetCommitRequestV0 and V2 (current). For such cases the OffsetCommitRequest class has two parse methods: public static OffsetCommitRequest parse(ByteBuffer buffer, int versionId) AND public static OffsetCommitRequest parse(ByteBuffer buffer) The latter simply picks the current schema version. Now AbstractRequest.getRequest, which is the entry point for deserializing requests for KafkaApi, matches only on RequestHeader.apiKey (and thus uses the second OffsetCommitRequest parse method), which is not sufficient because we also need RequestHeader.apiVersion in the case of an old request version. The same problem appears in AbstractRequest.getErrorResponse(Throwable e) - to construct the right error response object we need to know which apiVersion to respond to. I think this can affect other tasks under KAFKA-1927 - replacing separate RQ/RP, so maybe it makes sense to decide/fix it once. Thanks, Andrii Biletskyi On Wed, Mar 25, 2015 at 12:42 AM, Gwen Shapira gshap...@cloudera.com wrote: OK, I posted a working patch on KAFKA-2044 and https://reviews.apache.org/r/32459/diff/. There are a few decisions there that can be up for discussion (factory method on AbstractRequestResponse, the new handleErrors in request API), but as far as support for o.a.k.common requests in core goes, it does what it needs to do. Please review! Gwen On Tue, Mar 24, 2015 at 10:59 AM, Gwen Shapira gshap...@cloudera.com wrote: Hi, I uploaded a (very) preliminary patch with my idea. One thing that's missing: RequestResponse had a handleError method that all requests implemented, typically generating an appropriate error Response for the request and sending it along.
It's used by KafkaApis to handle all protocol errors for valid requests that are not handled elsewhere. AbstractRequestResponse doesn't have such a method. I can, of course, add it. But before I jump into this, I'm wondering if there was another plan on handling Api errors. Gwen On Mon, Mar 23, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote: I think what you are saying is that in RequestChannel, we can start generating header/body for new request types and leave requestObj null. For existing requests, header/body will be null
[jira] [Created] (KAFKA-2195) Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest
Andrii Biletskyi created KAFKA-2195: --- Summary: Add versionId to AbstractRequest.getErrorResponse and AbstractRequest.getRequest Key: KAFKA-2195 URL: https://issues.apache.org/jira/browse/KAFKA-2195 Project: Kafka Issue Type: Sub-task Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi This is needed to support versioning. 1) getRequest: to parse bytes into the correct schema you need to know its version; by default it's the latest version (current_schema) 2) getErrorResponse: after filling the map with error codes you need to create the respective Response message, which depends on versionId -- This message was sent by Atlassian JIRA (v6.3.4#6332)
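A minimal sketch of the getErrorResponse half of this ticket: dispatch on versionId so the error response is built against the same schema version the caller used. The class name and the response strings (standing in for real response objects, with an imagined throttleMs field added in V1) are hypothetical:

```java
// Sketch only: choose the error-response shape by the request's versionId.
// Real code would construct versioned response objects, not strings.
public class ErrorResponses {
    static String getErrorResponse(int versionId, short errorCode) {
        switch (versionId) {
            case 0: return "ResponseV0(error=" + errorCode + ")";
            case 1: return "ResponseV1(error=" + errorCode + ", throttleMs=0)";
            default: throw new IllegalArgumentException("unknown version " + versionId);
        }
    }
}
```

The point of the ticket is exactly that getErrorResponse(Throwable e) alone cannot pick a branch here; versionId must be threaded through from the request header.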
[jira] [Assigned] (KAFKA-2073) Replace TopicMetadata request/response with o.a.k.requests.metadata
[ https://issues.apache.org/jira/browse/KAFKA-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi reassigned KAFKA-2073: --- Assignee: Andrii Biletskyi (was: Sriharsha Chintalapani) Replace TopicMetadata request/response with o.a.k.requests.metadata --- Key: KAFKA-2073 URL: https://issues.apache.org/jira/browse/KAFKA-2073 Project: Kafka Issue Type: Sub-task Reporter: Gwen Shapira Assignee: Andrii Biletskyi Fix For: 0.8.3 Replace TopicMetadata request/response with o.a.k.requests.metadata. Note, this is more challenging than it appears because while the wire protocol is identical, the objects are completely different. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (KAFKA-2073) Replace TopicMetadata request/response with o.a.k.requests.metadata
[ https://issues.apache.org/jira/browse/KAFKA-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545841#comment-14545841 ] Andrii Biletskyi commented on KAFKA-2073: - [~sriharsha] is there any update on this one? This blocks me in KIP-4 where we will evolve TopicMetadataRequest. If you are not working on it - let me know, I can continue with this ticket. Thanks. Replace TopicMetadata request/response with o.a.k.requests.metadata --- Key: KAFKA-2073 URL: https://issues.apache.org/jira/browse/KAFKA-2073 Project: Kafka Issue Type: Sub-task Reporter: Gwen Shapira Assignee: Sriharsha Chintalapani Fix For: 0.8.3 Replace TopicMetadata request/response with o.a.k.requests.metadata. Note, this is more challenging than it appears because while the wire protocol is identical, the objects are completely different. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Joel, - DecreasePartitionNotAllowed. Yeah, that's kind of a subcase of InvalidPartitions... But since the client can always request topic metadata and check what exactly was wrong with the Partitions argument, I think, yes, we can remove DecreasePartitionNotAllowed and use InvalidPartitions instead. I'll update the KIP accordingly if there are no objections. - Questions regarding AdminClient. On one of the previous meetings I suggested we wrap up everything in terms of phase 1 (Wire Protocol and message semantics), so AdminClient is out of scope for now. I'll definitely take into account your remarks and suggestions but would rather wait until I finish phase 1 because I believe answers may change by that time. - Yes, correct. Any broker from the cluster will be able to handle Admin requests, thus there is no need to add controller discovery info. Maybe it will be part of some separate KIP, as mentioned in KAFKA-1367. Thanks, Andrii Biletskyi On Thu, May 14, 2015 at 1:31 AM, Joel Koshy jjkosh...@gmail.com wrote: Just had a few minor questions before I join the vote thread. Apologies if these have been discussed: - Do we need DecreasePartitionsNotAllowed? i.e., can we just return InvalidPartitions instead? - AdminClient.listTopics: should we allow listing all partitions? Or do you intend for the client to issue listTopics followed by describeTopics? - On returning Future&lt;Void&gt; for partition reassignments: do we need to return any future, especially since you have the verifyReassignPartitions method? For e.g., what happens if the controller moves? The get should fail, right? The client will then need to connect to the new controller and reissue the request but will then get ReassignPartitionsInProgress. So in that case the client anyway needs to rely on verifyReassignPartitions. - In past hangouts I think either you/Joe were mentioning the need to locate the controller (and possibly other cluster metadata). It appears we decided to defer this for a future KIP. Correct?
Thanks, Joel On Tue, May 05, 2015 at 04:49:27PM +0300, Andrii Biletskyi wrote: Guys, I've updated the wiki to reflect all previously discussed items (regarding the schema only - this is included in phase 1). https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations I think we can have the final discussion today (for phase 1) and in case of no new remarks I will start the voting thread. With regards to AlterTopicRequest semantics: I agree with Jun, and I think it's my bad I focused on multiple topics in one request. The same situation is possible in ProduceRequest, Fetch, TopicMetadata and we handle it naturally and in the most transparent way - we put all separate instructions into a map and thus silently ignore duplicates. This also makes the Response part simple - it's just a map Topic -&gt; ErrorCode. I think we need to follow the same approach for the Alter (and Create, Delete) requests. With this we add nothing new in terms of batch request semantics. Thanks, Andrii Biletskyi
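The batch semantics described above (per-topic instructions keyed in a map, with duplicate topic entries silently ignored, and the response being just a topic-to-error-code map) can be sketched as follows. The class and method names are invented for illustration, and whether the first or last duplicate wins is an implementation detail; this sketch keeps the first:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: collapse a batch of (topic, instruction) pairs into a map so that
// duplicate topic entries are silently ignored, as the thread proposes.
public class AlterTopicBatch {
    static Map<String, Integer> buildBatch(String[][] instructions) {
        Map<String, Integer> byTopic = new LinkedHashMap<>();
        for (String[] entry : instructions)
            byTopic.putIfAbsent(entry[0], Integer.parseInt(entry[1])); // duplicates dropped
        return byTopic;
    }
}
```

Keying on topic is what removes the ordering/partial-failure ambiguity Jun raises below for repeated topics: each topic appears at most once, and the response is a plain topic-to-error-code map.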
[jira] [Created] (KAFKA-2188) JBOD Support
Andrii Biletskyi created KAFKA-2188: --- Summary: JBOD Support Key: KAFKA-2188 URL: https://issues.apache.org/jira/browse/KAFKA-2188 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 34103: Patch for KAFKA-2188
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/34103/ --- Review request for kafka. Bugs: KAFKA-2188 https://issues.apache.org/jira/browse/KAFKA-2188 Repository: kafka Description --- KAFKA-2188 - JBOD Support Diffs - core/src/main/scala/kafka/cluster/Partition.scala 122b1dbbe45cb27aed79b5be1e735fb617c716b0 core/src/main/scala/kafka/common/GenericKafkaStorageException.scala PRE-CREATION core/src/main/scala/kafka/controller/KafkaController.scala a6351163f5b6f080d6fa50bcc3533d445fcbc067 core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 3b15ab4eef22c6f50a7483e99a6af40fb55aca9f core/src/main/scala/kafka/log/Log.scala 84e7b8fe9dd014884b60c4fbe13c835cf02a40e4 core/src/main/scala/kafka/log/LogManager.scala e781ebac2677ebb22e0c1fef0cf7e5ad57c74ea4 core/src/main/scala/kafka/log/LogSegment.scala ed039539ac18ea4d65144073915cf112f7374631 core/src/main/scala/kafka/server/KafkaApis.scala 417960dd1ab407ebebad8fdb0e97415db3e91a2f core/src/main/scala/kafka/server/KafkaConfig.scala 9efa15ca5567b295ab412ee9eea7c03eb4cdc18b core/src/main/scala/kafka/server/OffsetCheckpoint.scala 8c5b0546908d3b3affb9f48e2ece9ed252518783 core/src/main/scala/kafka/server/ReplicaFetcherThread.scala b31b432a226ba79546dd22ef1d2acbb439c2e9a3 core/src/main/scala/kafka/server/ReplicaManager.scala 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 core/src/main/scala/kafka/utils/ZkUtils.scala 1da8f90b3a7abda5868186bddf221e31adbe02ce core/src/test/scala/unit/kafka/log/LogManagerTest.scala 01dfbc4f8d21f6905327cd4ed6c61d657adc0143 core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 60cd8249e6ec03349e20bb0a7226ea9cd66e6b17 core/src/test/scala/unit/kafka/utils/TestUtils.scala faae0e907596a16c47e8d49a82b6a3c82797c96d Diff: https://reviews.apache.org/r/34103/diff/ Testing --- Thanks, Andrii Biletskyi
[jira] [Commented] (KAFKA-2188) JBOD Support
[ https://issues.apache.org/jira/browse/KAFKA-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539777#comment-14539777 ] Andrii Biletskyi commented on KAFKA-2188: - Created reviewboard https://reviews.apache.org/r/34103/diff/ against branch origin/trunk JBOD Support Key: KAFKA-2188 URL: https://issues.apache.org/jira/browse/KAFKA-2188 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2188.patch https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2188) JBOD Support
[ https://issues.apache.org/jira/browse/KAFKA-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2188: Status: Patch Available (was: Open) JBOD Support Key: KAFKA-2188 URL: https://issues.apache.org/jira/browse/KAFKA-2188 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2188.patch https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (KAFKA-2188) JBOD Support
[ https://issues.apache.org/jira/browse/KAFKA-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-2188: Attachment: KAFKA-2188.patch JBOD Support Key: KAFKA-2188 URL: https://issues.apache.org/jira/browse/KAFKA-2188 Project: Kafka Issue Type: Bug Reporter: Andrii Biletskyi Assignee: Andrii Biletskyi Attachments: KAFKA-2188.patch https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [VOTE] KIP-4 Admin Commands / Phase-1
Jun, 1. My bad, updated description, but forgot the schema itself. Fixed. 2. Okay, changed that. 3. Sure, we'll evolve TMR again once needed. Thanks, Andrii Biletskyi On Thu, May 7, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote: Hi, Andrii, A few more comments. 1. We need to remove isr and replicaLags from TMR, right? 2. For TMR v1, we decided to keep the same auto topic creation logic as v0 initially. We can drop the auto topic creation logic later after the producer client starts using createTopicRequest. 3. For the configs returned in TMR, we can keep this for now. We may need some adjustment later depending on the outcome of KIP-21(configuration management). Other than those, this looks good. Thanks, Jun On Tue, May 5, 2015 at 11:16 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Hi all, This is a voting thread for KIP-4 Phase-1. It will include Wire protocol changes and server side handling code. https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations Thanks, Andrii Biletskyi
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Guys, I've updated the wiki to reflect all previously discussed items (regarding the schema only - this is included in phase 1). https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations I think we can have the final discussion today (for phase 1) and, if there are no new remarks, I will start the voting thread. With regards to AlterTopicRequest semantics. I agree with Jun, and I think it's my bad I focused on multiple topics in one request. The same situation is possible in ProduceRequest, Fetch, TopicMetadata and we handle it naturally and in the most transparent way - we put all separate instructions into a map and thus silently ignore duplicates. This also makes the Response part simple - it's just a map of Topic -> ErrorCode. I think we need to follow the same approach for Alter (and Create, Delete) requests. With this we add nothing new in terms of batch request semantics. Thanks, Andrii Biletskyi On Thu, Apr 30, 2015 at 4:31 PM, Jun Rao j...@confluent.io wrote: The following is a description of some of my concerns on allowing the same topic multiple times in AlterTopicRequest. ATR has an array of entries, each corresponding to a topic. We allow multiple changes to a topic in a single entry. Those changes may fail to apply independently (e.g., the config change may succeed, but the replica assignment change may fail). If there is an issue applying one of the changes, we will set an error code for that entry in the response. If we allow the same topic to be specified multiple times in ATR, it can happen that the first entry succeeds, but the second entry fails partially. Now, from the admin's perspective, it's a bit hard to do the verification. Ideally, you want to wait for the changes in the first entry to be applied. However, the second entry may have part of the changes applied successfully. About putting restrictions on the requests. 
Currently, we effectively expect a topic-partition to be only specified once in the FetchRequest. Allowing the same topic-partition to be specified multiple times in FetchRequest will be confusing and complicates the implementation (e.g., putting the request in purgatory). A few other requests probably have similar implicit assumptions on topic or topic-partition being unique in each request. Thanks, Jun On Tue, Apr 28, 2015 at 5:26 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Guys, A quick summary of our today's meeting. There were no additional issues/questions. The only item about which we are not 100% sure is the case of multiple instructions for one topic in one request. It was proposed by Jun to explain the reasons behind not allowing users to do that again here in the mailing list, and in case we implement it in the final version, document it well so API clients understand what exactly is not allowed and why. In the meantime I will update the KIP. After that I will start the voting thread. Thanks, Andrii Biletskyi On Tue, Apr 28, 2015 at 10:33 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Guys, It seems that there are no open questions left, so prior to our weekly call let me summarize what I'm going to implement as part of phase one for KIP-4. 1. Add 3 new Wire Protocol requests - Create-, Alter- and DeleteTopicRequest 2. Topic requests are batch requests, errors are returned per topic as part of the batch response. 3. Topic requests are asynchronous - respective commands are only started and the server is not blocked until the command is finished. 4. It will not be allowed to specify multiple mutations for the same topic within one batch request - a special error will be returned for such a topic. 5. There will be no dedicated request for reassign-partitions - it is simulated with the AlterTopicRequest.ReplicaAssignment field. 6. Preferred-replica-leader-election is not supported since there is no need to have a public API to trigger such an operation. 7. 
TopicMetadataRequest will be evolved to version 1 - topic-level configuration per topic will be included and the ISR field will be removed. Automatic topic-creation logic will be removed (we will use CreateTopicRequest for that). Thanks, Andrii Biletskyi On Tue, Apr 28, 2015 at 12:23 AM, Jun Rao j...@confluent.io wrote: Yes, to verify if a partition reassignment completes or not, we just need to make sure AR == RAR. So, we don't need ISR for this. It's probably still useful to know ISR for monitoring in general though. Thanks, Jun On Mon, Apr 27, 2015 at 4:15 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Okay, I had some doubts in terms of the reassign-partitions case. I was not sure whether we need ISR to check the post condition of partition reassignment. But I think we can rely on assigned replicas - the workflow
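The batch semantics summarized above (per-topic instructions collapsed into a map, duplicates silently ignored the same way ProduceRequest/FetchRequest handle them, and a per-topic error code in the response) can be sketched as follows. This is an illustrative sketch only - the class name, method, and error codes are invented for the example, not Kafka's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of batch topic-request handling as discussed above.
public class BatchTopicRequestSketch {
    static final short NONE = 0;
    static final short INVALID_PARTITIONS = 1; // placeholder error code

    public static class TopicInstruction {
        final String topic;
        final int partitions;
        public TopicInstruction(String topic, int partitions) {
            this.topic = topic;
            this.partitions = partitions;
        }
    }

    public static Map<String, Short> handle(List<TopicInstruction> instructions) {
        // Keying instructions by topic silently drops later duplicates for
        // the same topic, mirroring the map-based handling in other requests.
        Map<String, TopicInstruction> byTopic = new LinkedHashMap<>();
        for (TopicInstruction i : instructions) {
            byTopic.putIfAbsent(i.topic, i);
        }
        // Commands would be started asynchronously; only per-topic validation
        // errors are reported in the batch response (topic -> error code).
        Map<String, Short> response = new LinkedHashMap<>();
        for (TopicInstruction i : byTopic.values()) {
            response.put(i.topic, i.partitions > 0 ? NONE : INVALID_PARTITIONS);
        }
        return response;
    }
}
```

Because the response is just a topic-to-error-code map, a client can verify each topic's outcome independently, which is what makes the dedup-by-map approach transparent.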
[VOTE] KIP-4 Admin Commands / Phase-1
Hi all, This is a voting thread for KIP-4 Phase-1. It will include Wire protocol changes and server side handling code. https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations Thanks, Andrii Biletskyi
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Guys, A quick summary of our today's meeting. There were no additional issues/questions. The only item about which we are not 100% sure is the case of multiple instructions for one topic in one request. It was proposed by Jun to explain the reasons behind not allowing users to do that again here in the mailing list, and in case we implement it in the final version, document it well so API clients understand what exactly is not allowed and why. In the meantime I will update the KIP. After that I will start the voting thread. Thanks, Andrii Biletskyi On Tue, Apr 28, 2015 at 10:33 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Guys, It seems that there are no open questions left, so prior to our weekly call let me summarize what I'm going to implement as part of phase one for KIP-4. 1. Add 3 new Wire Protocol requests - Create-, Alter- and DeleteTopicRequest 2. Topic requests are batch requests, errors are returned per topic as part of the batch response. 3. Topic requests are asynchronous - respective commands are only started and the server is not blocked until the command is finished. 4. It will not be allowed to specify multiple mutations for the same topic within one batch request - a special error will be returned for such a topic. 5. There will be no dedicated request for reassign-partitions - it is simulated with the AlterTopicRequest.ReplicaAssignment field. 6. Preferred-replica-leader-election is not supported since there is no need to have a public API to trigger such an operation. 7. TopicMetadataRequest will be evolved to version 1 - topic-level configuration per topic will be included and the ISR field will be removed. Automatic topic-creation logic will be removed (we will use CreateTopicRequest for that). Thanks, Andrii Biletskyi On Tue, Apr 28, 2015 at 12:23 AM, Jun Rao j...@confluent.io wrote: Yes, to verify if a partition reassignment completes or not, we just need to make sure AR == RAR. So, we don't need ISR for this. 
It's probably still useful to know ISR for monitoring in general though. Thanks, Jun On Mon, Apr 27, 2015 at 4:15 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Okay, I had some doubts in terms of the reassign-partitions case. I was not sure whether we need ISR to check the post condition of partition reassignment. But I think we can rely on assigned replicas - the workflow in reassignPartitions is the following: 1. Update AR in ZK with OAR + RAR. ... 10. Update AR in ZK with RAR. 11. Update the /admin/reassign_partitions path in ZK to remove this partition. 12. After electing leader, the replicas and isr information changes. So resend the update metadata request to every broker. In other words AR becomes RAR right before removing partitions from the admin path. I think we can consider (with a little approximation) reassignment completed if AR == RAR. If it's okay, I will remove ISR and add topic config in one change as discussed earlier. Thanks, Andrii Biletskyi On Mon, Apr 27, 2015 at 1:50 AM, Jun Rao j...@confluent.io wrote: Andrii, Another thing. We decided not to add the lag info in TMR. To be consistent, we probably also want to remove ISR from TMR since only the leader knows it. We can punt on adding any new request for getting ISR. ISR is mostly useful for monitoring. Currently, one can determine if a replica is in ISR from the lag metrics (a replica is in ISR if its lag is <= 0). Thanks, Jun On Sun, Apr 26, 2015 at 4:31 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Jun, I like your approach to AlterTopicRequest semantics! Sounds like we linearize all request fields to ReplicaAssignment - I will definitely try this out to ensure there are no other pitfalls. With regards to multiple instructions in one batch per topic. For me this sounds reasonable too. 
We discussed last time that it's pretty strange we give users a schema that supports batching and at the same time introduce restrictions on the way batching can be used (in this case - only one instruction per topic). But now, when we give users everything they need to avoid such misleading use cases (if we implement the previous item - the user will be able to specify/change all fields in one instruction) - it might be a good justification to prohibit serving such requests. Any objections? Thanks, Andrii Biletskyi On Sun, Apr 26, 2015 at 11:00 PM, Jun Rao j...@confluent.io wrote: Andrii, Thanks for the update. For your second point, I agree that if a single AlterTopicRequest can make multiple changes, there is no need to support the same topic included more than once in the request. Now about the semantics in your first question. I was thinking that we can do the following. a. If ReplicaAssignment is specified
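The AR == RAR completion check discussed in the thread above can be sketched like this. Names are hypothetical; in a real broker AR would be read from the topic path in ZK and RAR from /admin/reassign_partitions, and ISR is deliberately not consulted:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch: a partition reassignment is considered complete
// (with the "little approximation" noted above) once every reassigned
// partition's current assigned replicas (AR) equal the requested
// reassigned replicas (RAR). This is the state reached right before the
// partition is removed from the /admin/reassign_partitions path.
public class ReassignmentCheckSketch {
    public static boolean reassignmentComplete(
            Map<Integer, List<Integer>> currentAR,     // partition -> AR
            Map<Integer, List<Integer>> requestedRAR)  // partition -> RAR
    {
        return requestedRAR.entrySet().stream()
            .allMatch(e -> e.getValue().equals(currentAR.get(e.getKey())));
    }
}
```

A monitoring tool could poll this predicate instead of inspecting ISR, which matches the conclusion above that ISR is not needed to verify reassignment.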
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Guys, It seems that there are no open questions left, so prior to our weekly call let me summarize what I'm going to implement as part of phase one for KIP-4. 1. Add 3 new Wire Protocol requests - Create-, Alter- and DeleteTopicRequest 2. Topic requests are batch requests, errors are returned per topic as part of the batch response. 3. Topic requests are asynchronous - respective commands are only started and the server is not blocked until the command is finished. 4. It will not be allowed to specify multiple mutations for the same topic within one batch request - a special error will be returned for such a topic. 5. There will be no dedicated request for reassign-partitions - it is simulated with the AlterTopicRequest.ReplicaAssignment field. 6. Preferred-replica-leader-election is not supported since there is no need to have a public API to trigger such an operation. 7. TopicMetadataRequest will be evolved to version 1 - topic-level configuration per topic will be included and the ISR field will be removed. Automatic topic-creation logic will be removed (we will use CreateTopicRequest for that). Thanks, Andrii Biletskyi On Tue, Apr 28, 2015 at 12:23 AM, Jun Rao j...@confluent.io wrote: Yes, to verify if a partition reassignment completes or not, we just need to make sure AR == RAR. So, we don't need ISR for this. It's probably still useful to know ISR for monitoring in general though. Thanks, Jun On Mon, Apr 27, 2015 at 4:15 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Okay, I had some doubts in terms of the reassign-partitions case. I was not sure whether we need ISR to check the post condition of partition reassignment. But I think we can rely on assigned replicas - the workflow in reassignPartitions is the following: 1. Update AR in ZK with OAR + RAR. ... 10. Update AR in ZK with RAR. 11. Update the /admin/reassign_partitions path in ZK to remove this partition. 12. After electing leader, the replicas and isr information changes. 
So resend the update metadata request to every broker. In other words AR becomes RAR right before removing partitions from the admin path. I think we can consider (with a little approximation) reassignment completed if AR == RAR. If it's okay, I will remove ISR and add topic config in one change as discussed earlier. Thanks, Andrii Biletskyi On Mon, Apr 27, 2015 at 1:50 AM, Jun Rao j...@confluent.io wrote: Andrii, Another thing. We decided not to add the lag info in TMR. To be consistent, we probably also want to remove ISR from TMR since only the leader knows it. We can punt on adding any new request for getting ISR. ISR is mostly useful for monitoring. Currently, one can determine if a replica is in ISR from the lag metrics (a replica is in ISR if its lag is <= 0). Thanks, Jun On Sun, Apr 26, 2015 at 4:31 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Jun, I like your approach to AlterTopicRequest semantics! Sounds like we linearize all request fields to ReplicaAssignment - I will definitely try this out to ensure there are no other pitfalls. With regards to multiple instructions in one batch per topic. For me this sounds reasonable too. We discussed last time that it's pretty strange we give users a schema that supports batching and at the same time introduce restrictions on the way batching can be used (in this case - only one instruction per topic). But now, when we give users everything they need to avoid such misleading use cases (if we implement the previous item - the user will be able to specify/change all fields in one instruction) - it might be a good justification to prohibit serving such requests. Any objections? Thanks, Andrii Biletskyi On Sun, Apr 26, 2015 at 11:00 PM, Jun Rao j...@confluent.io wrote: Andrii, Thanks for the update. For your second point, I agree that if a single AlterTopicRequest can make multiple changes, there is no need to support the same topic included more than once in the request. 
Now about the semantics in your first question. I was thinking that we can do the following. a. If ReplicaAssignment is specified, we expect that this will specify the replica assignment for all partitions in the topic. For now, we can have the constraint that there could be more partitions than existing ones, but can't be less. In this case, both partitions and replicas are ignored. Then for each partition, we do one of the following. a1. If the partition doesn't exist, add the partition with the replica assignment directly to the topic path in ZK. a2. If the partition exists and the new replica assignment is not the same as the existing one, include it in the reassign partition json. If the json
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Okay, I had some doubts in terms of the reassign-partitions case. I was not sure whether we need ISR to check the post condition of partition reassignment. But I think we can rely on assigned replicas - the workflow in reassignPartitions is the following: 1. Update AR in ZK with OAR + RAR. ... 10. Update AR in ZK with RAR. 11. Update the /admin/reassign_partitions path in ZK to remove this partition. 12. After electing leader, the replicas and isr information changes. So resend the update metadata request to every broker. In other words AR becomes RAR right before removing partitions from the admin path. I think we can consider (with a little approximation) reassignment completed if AR == RAR. If it's okay, I will remove ISR and add topic config in one change as discussed earlier. Thanks, Andrii Biletskyi On Mon, Apr 27, 2015 at 1:50 AM, Jun Rao j...@confluent.io wrote: Andrii, Another thing. We decided not to add the lag info in TMR. To be consistent, we probably also want to remove ISR from TMR since only the leader knows it. We can punt on adding any new request for getting ISR. ISR is mostly useful for monitoring. Currently, one can determine if a replica is in ISR from the lag metrics (a replica is in ISR if its lag is <= 0). Thanks, Jun On Sun, Apr 26, 2015 at 4:31 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Jun, I like your approach to AlterTopicRequest semantics! Sounds like we linearize all request fields to ReplicaAssignment - I will definitely try this out to ensure there are no other pitfalls. With regards to multiple instructions in one batch per topic. For me this sounds reasonable too. We discussed last time that it's pretty strange we give users a schema that supports batching and at the same time introduce restrictions on the way batching can be used (in this case - only one instruction per topic). 
But now, when we give users everything they need to avoid such misleading use cases (if we implement the previous item - the user will be able to specify/change all fields in one instruction) - it might be a good justification to prohibit serving such requests. Any objections? Thanks, Andrii Biletskyi On Sun, Apr 26, 2015 at 11:00 PM, Jun Rao j...@confluent.io wrote: Andrii, Thanks for the update. For your second point, I agree that if a single AlterTopicRequest can make multiple changes, there is no need to support the same topic included more than once in the request. Now about the semantics in your first question. I was thinking that we can do the following. a. If ReplicaAssignment is specified, we expect that this will specify the replica assignment for all partitions in the topic. For now, we can have the constraint that there could be more partitions than existing ones, but can't be less. In this case, both partitions and replicas are ignored. Then for each partition, we do one of the following. a1. If the partition doesn't exist, add the partition with the replica assignment directly to the topic path in ZK. a2. If the partition exists and the new replica assignment is not the same as the existing one, include it in the reassign partition json. If the json is not empty, write it to the reassignment path in ZK to trigger partition reassignment. b. Otherwise, if replicas is specified, generate a new ReplicaAssignment for existing partitions. If partitions is specified (assuming it's larger), generate ReplicaAssignment for the new partitions as well. Then go back to step a to make a decision. c. Otherwise, if only partitions is specified, add assignments of existing partitions to ReplicaAssignment. Generate assignments for the new partitions and add them to ReplicaAssignment. Then go back to step a to make a decision. 
Thanks, Jun On Sat, Apr 25, 2015 at 7:21 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: Guys, Can we come to some agreement in terms of the second item from the email above? This blocks me from updating and uploading the patch. Also the new schedule for the weekly calls doesn't work very well for me - it's 1 am in my timezone :) - so I'd rather we confirm everything that is possible by email. Thanks, Andrii Biletskyi On Wed, Apr 22, 2015 at 5:50 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote: As said above, I spent some time thinking about AlterTopicRequest semantics and batching. Firstly, about AlterTopicRequest. Our goal here is to see whether we can suggest some simple semantics and at the same time let users change different things in one instruction (hereinafter, an instruction is one of the entries in a batch request). We can resolve arguments according to this schema: 1) If ReplicaAssignment is specified: it's a reassign partitions
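Jun's step-a decision in the thread above (applied after partitions/replicas have been linearized into a full ReplicaAssignment by steps b and c) might look roughly like the sketch below. The names and structure are assumptions for illustration, not the actual KIP-4 implementation: each requested partition either becomes a direct write to the topic path in ZK (a1, new partition) or an entry in the reassignment JSON (a2, changed assignment):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of resolving a full ReplicaAssignment against the
// existing topic assignment, following steps a/a1/a2 discussed above.
public class AlterTopicResolutionSketch {
    public static void resolve(
            Map<Integer, List<Integer>> existing,    // current partition -> replicas
            Map<Integer, List<Integer>> requested,   // requested partition -> replicas
            Map<Integer, List<Integer>> toCreate,    // out, a1: write to topic path in ZK
            Map<Integer, List<Integer>> toReassign)  // out, a2: add to reassignment JSON
    {
        // Constraint from step a: more partitions than existing is allowed,
        // fewer is not.
        if (requested.size() < existing.size())
            throw new IllegalArgumentException("cannot shrink partition count");
        for (Map.Entry<Integer, List<Integer>> e : requested.entrySet()) {
            List<Integer> current = existing.get(e.getKey());
            if (current == null) {
                toCreate.put(e.getKey(), e.getValue());      // a1: new partition
            } else if (!current.equals(e.getValue())) {
                toReassign.put(e.getKey(), e.getValue());    // a2: changed assignment
            }                                                // unchanged: no-op
        }
    }
}
```

If the resulting reassignment map is non-empty, it would be written to the reassignment path in ZK to trigger partition reassignment, per step a2.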
[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper
[ https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14514763#comment-14514763 ] Andrii Biletskyi commented on KAFKA-1367: - [~jjkoshy] Yes, it appears that topic commands don't require ISR information, so it was proposed to remove it entirely from the TopicMetadataRequest. There was an idea to create some sort of BrokerMetadataRequest which will include correct topic metadata, but since it's not related to KIP-4 directly it won't be a part of it. So everyone is welcome to create a separate KIP for it. At least, this is the conclusion we came to last time. Broker topic metadata not kept in sync with ZooKeeper - Key: KAFKA-1367 URL: https://issues.apache.org/jira/browse/KAFKA-1367 Project: Kafka Issue Type: Bug Affects Versions: 0.8.0, 0.8.1 Reporter: Ryan Berdeen Assignee: Ashish K Singh Labels: newbie++ Fix For: 0.8.3 Attachments: KAFKA-1367.txt When a broker is restarted, the topic metadata responses from the brokers will be incorrect (different from ZooKeeper) until a preferred replica leader election. In the metadata, it looks like leaders are correctly removed from the ISR when a broker disappears, but followers are not. Then, when a broker reappears, the ISR is never updated. I used a variation of the Vagrant setup created by Joe Stein to reproduce this with latest from the 0.8.1 branch: https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper
[ https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-1367: Comment: was deleted (was: [~jjkoshy] Yes, it appears that topic commands don't require ISR information, so it was proposed to remove it entirely from the TopicMetadataRequest. There was an idea to create some sort of BrokerMetadataRequest which will include correct topic metadata, but since it's not related to KIP-4 directly it won't be a part of it. So everyone is welcome to create a separate KIP for it. At least, this is the conclusion we came to last time.) Broker topic metadata not kept in sync with ZooKeeper - Key: KAFKA-1367 URL: https://issues.apache.org/jira/browse/KAFKA-1367 Project: Kafka Issue Type: Bug Affects Versions: 0.8.0, 0.8.1 Reporter: Ryan Berdeen Assignee: Ashish K Singh Labels: newbie++ Fix For: 0.8.3 Attachments: KAFKA-1367.txt When a broker is restarted, the topic metadata responses from the brokers will be incorrect (different from ZooKeeper) until a preferred replica leader election. In the metadata, it looks like leaders are correctly removed from the ISR when a broker disappears, but followers are not. Then, when a broker reappears, the ISR is never updated. I used a variation of the Vagrant setup created by Joe Stein to reproduce this with latest from the 0.8.1 branch: https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper
[ https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Biletskyi updated KAFKA-1367: Comment: was deleted (was: [~jjkoshy] Yes, it appears that topic commands don't require ISR information so it was proposed to remove it at all from the TopicMetadataRequest. There was an idea to create some sort of BrokerMetadataRequest which will include correct topic metadata but since it's not related to KIP-4 directly it won't be a part of it. So everyone is welcome to create a separate KIP for it. Atleast to this conclusion we came last time.) Broker topic metadata not kept in sync with ZooKeeper - Key: KAFKA-1367 URL: https://issues.apache.org/jira/browse/KAFKA-1367 Project: Kafka Issue Type: Bug Affects Versions: 0.8.0, 0.8.1 Reporter: Ryan Berdeen Assignee: Ashish K Singh Labels: newbie++ Fix For: 0.8.3 Attachments: KAFKA-1367.txt When a broker is restarted, the topic metadata responses from the brokers will be incorrect (different from ZooKeeper) until a preferred replica leader election. In the metadata, it looks like leaders are correctly removed from the ISR when a broker disappears, but followers are not. Then, when a broker reappears, the ISR is never updated. I used a variation of the Vagrant setup created by Joe Stein to reproduce this with latest from the 0.8.1 branch: https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Jun, I like your approach to AlterTopicRequest semantics! It sounds like we linearize all request fields to ReplicaAssignment - I will definitely try this out to ensure there are no other pitfalls.

With regards to multiple instructions per topic in one batch, this sounds reasonable to me too. We discussed last time that it's pretty strange to give users a schema that supports batching and at the same time restrict the way batching can be used (in this case - to only one instruction per topic). But now that we give users everything they need to avoid such misleading use cases (if we implement the previous item, the user will be able to specify/change all fields in one instruction), that might be a good justification to refuse to serve such requests. Any objections?

Thanks, Andrii Biletskyi

On Sun, Apr 26, 2015 at 11:00 PM, Jun Rao j...@confluent.io wrote:

Andrii, Thanks for the update. For your second point, I agree that if a single AlterTopicRequest can make multiple changes, there is no need to support the same topic being included more than once in the request.

Now, about the semantics in your first question. I was thinking that we can do the following:
a. If ReplicaAssignment is specified, we expect it to specify the replica assignment for all partitions in the topic. For now, we can have the constraint that there can be more partitions than existing ones, but not fewer. In this case, both partitions and replicas are ignored. Then for each partition, we do one of the following:
a1. If the partition doesn't exist, add the partition with the replica assignment directly to the topic path in ZK.
a2. If the partition exists and the new replica assignment is not the same as the existing one, include it in the reassign-partitions json. If the json is not empty, write it to the reassignment path in ZK to trigger partition reassignment.
b. Otherwise, if replicas is specified, generate a new ReplicaAssignment for existing partitions.
If partitions is also specified (assuming it's larger), generate ReplicaAssignment for the new partitions as well. Then go back to step a to make a decision.
c. Otherwise, if only partitions is specified, add the assignments of existing partitions to ReplicaAssignment, generate assignments for the new partitions and add them to ReplicaAssignment, then go back to step a to make a decision.

Thanks, Jun

On Sat, Apr 25, 2015 at 7:21 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

Guys, Can we come to some agreement in terms of the second item from the email above? This blocks me from updating and uploading the patch. Also, the new schedule for the weekly calls doesn't work very well for me - it's 1 am in my timezone :) - so I'd rather we confirm everything that is possible by email.

Thanks, Andrii Biletskyi

On Wed, Apr 22, 2015 at 5:50 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

As said above, I spent some time thinking about AlterTopicRequest semantics and batching. Firstly, about AlterTopicRequest. Our goal here is to see whether we can suggest some simple semantics and at the same time let users change different things in one instruction (hereinafter "instruction" means one of the entries in a batch request). We can resolve the arguments according to this schema:
1) If ReplicaAssignment is specified: it's a reassign-partitions request
2) If either Partitions or ReplicationFactor is specified:
a) If Partitions is specified - this is the increase-partitions case
b) If ReplicationFactor is specified - this means we need to automatically regenerate the replica assignment and treat it as a reassign-partitions request
Note: this algorithm is a bit inconsistent with the CreateTopicRequest - there, with ReplicaAssignment specified, the user can implicitly define Partitions and ReplicationFactor; in AlterTopicRequest those are completely different things, i.e.
you can't include new partitions in the ReplicaAssignment to implicitly ask the controller to increase partitions - the controller will simply return InvalidReplicaAssignment, because you included unknown partitions.

Secondly, multiple instructions for one topic in a batch request. I have a feeling it becomes a really big mess now, so suggestions are highly appreciated here! Our goal is to consider whether we can let users add multiple instructions for one topic in one batch, but at the same time make it transparent enough that we can support blocking on request completion; for that we need to be able to tell from the request what the final expected state of the topic is. And the latter seems to me a tough issue. Consider the following AlterTopicRequest: [1) topic1: change ReplicationFactor from 2 to 3, 2) topic1: change ReplicaAssignment (taking into account RF is 3 now), 3) topic2: change ReplicaAssignment (just to include multiple
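Jun's steps a/b/c above can be sketched as one resolution function: linearize replicas/partitions into a full ReplicaAssignment, then split the result into partitions added directly to the topic path (a1) and changes written to the reassignment path (a2). This is a simplified illustration under assumed data shapes - the names and the round-robin placement are not Kafka's actual implementation:

```python
def resolve_alter_topic(existing, brokers, replica_assignment=None,
                        replicas=None, partitions=None):
    """existing: {partition_id: [broker ids]}; brokers: list of broker ids.
    Returns (new_partitions, reassignment): assignments to write directly to
    the topic path (step a1) and changes for the reassignment path (a2)."""

    def place(pid, rf):
        # Naive round-robin placement, purely for illustration.
        return [brokers[(pid + i) % len(brokers)] for i in range(rf)]

    if replica_assignment is None:
        # Steps b/c: derive a full assignment from replicas and/or partitions.
        count = partitions if partitions is not None else len(existing)
        rf = replicas if replicas is not None else len(next(iter(existing.values())))
        replica_assignment = {}
        for pid in range(count):
            if pid in existing and replicas is None:
                replica_assignment[pid] = existing[pid]   # step c: keep as-is
            else:
                replica_assignment[pid] = place(pid, rf)  # step b / new partitions

    # Step a: the assignment must cover at least all existing partitions.
    if len(replica_assignment) < len(existing):
        raise ValueError("fewer partitions than existing ones")
    new_partitions = {p: r for p, r in replica_assignment.items()
                      if p not in existing}                         # a1
    reassignment = {p: r for p, r in replica_assignment.items()
                    if p in existing and r != existing[p]}          # a2
    return new_partitions, reassignment
```

With this shape, "partitions and replicas are ignored when ReplicaAssignment is given" falls out naturally, and a non-empty `reassignment` is what would be written to the reassignment path in ZK.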
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Guys, Can we come to some agreement in terms of the second item from the email above? This blocks me from updating and uploading the patch. Also, the new schedule for the weekly calls doesn't work very well for me - it's 1 am in my timezone :) - so I'd rather we confirm everything that is possible by email.

Thanks, Andrii Biletskyi

On Wed, Apr 22, 2015 at 5:50 PM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

As said above, I spent some time thinking about AlterTopicRequest semantics and batching. Firstly, about AlterTopicRequest. Our goal here is to see whether we can suggest some simple semantics and at the same time let users change different things in one instruction (hereinafter "instruction" means one of the entries in a batch request). We can resolve the arguments according to this schema:
1) If ReplicaAssignment is specified: it's a reassign-partitions request
2) If either Partitions or ReplicationFactor is specified:
a) If Partitions is specified - this is the increase-partitions case
b) If ReplicationFactor is specified - this means we need to automatically regenerate the replica assignment and treat it as a reassign-partitions request
Note: this algorithm is a bit inconsistent with the CreateTopicRequest - there, with ReplicaAssignment specified, the user can implicitly define Partitions and ReplicationFactor; in AlterTopicRequest those are completely different things, i.e. you can't include new partitions in the ReplicaAssignment to implicitly ask the controller to increase partitions - the controller will simply return InvalidReplicaAssignment, because you included unknown partitions.

Secondly, multiple instructions for one topic in a batch request. I have a feeling it becomes a really big mess now, so suggestions are highly appreciated here!
Our goal is to consider whether we can let users add multiple instructions for one topic in one batch, but at the same time make it transparent enough that we can support blocking on request completion; for that we need to be able to tell from the request what the final expected state of the topic is. And the latter seems to me a tough issue. Consider the following AlterTopicRequest:
[1) topic1: change ReplicationFactor from 2 to 3,
2) topic1: change ReplicaAssignment (taking into account RF is 3 now),
3) topic2: change ReplicaAssignment (just to include multiple topics),
4) topic1: change ReplicationFactor from 3 to 1,
5) topic1: change ReplicaAssignment again (taking into account RF is 1 now)]
As we discussed earlier, the controller will handle it as an alter-topic command plus reassign-partitions. First of all, it will scan all ReplicaAssignments and assemble them into one json to create the admin path /reassign_partitions once needed. Now, the user would expect us to execute the instructions sequentially, but we can't do that because only one reassign-partitions procedure can be in progress at a time - when should we trigger reassign-partitions, after 1) or after 4)? And what about topic2 - we will break the order, but it was assumed we execute instructions sequentially. Overall, the logic seems to be very sophisticated, which is a bad sign. Conceptually, I think the root problem is that we imply there is an order in sequential instructions, but the instructions themselves are asynchronous, so really you can't guarantee any order. I'm thinking about such solutions now:
1) Prohibit multiple instructions per topic (this seems reasonable if we let users change multiple things in the scope of one instruction - see the first item)
2) Break apart again AlterTopic and ReassignPartitions - if the reassignment case is the only problem here, which I'm not sure about.
Thoughts?

Thanks, Andrii Biletskyi

On Wed, Apr 22, 2015 at 2:59 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

Guys, Thank you for your time.
A short summary of our discussion, answering the previous items:
1., 2. I will double-check the existing error codes to align the list of errors that need to be added.
3. We agreed to think again about the batch request semantics. The main concern is that users would expect us to allow executing multiple instructions for one topic in one batch. I will start the implementation and check whether there are any impediments to handling it this way. The same for AlterTopicRequest - I will try to make the request semantics as easy as possible and allow users to change different things at one time - e.g. change the number of partitions and replicas in one instruction.
4. We agreed not to add lag information to TMR.
5. We discussed the preferred replica command, and it was pointed out that generally users shouldn't call this command manually now since it is handled automatically by the cluster. If there are no objections (especially from devops people) I will remove the respective request.
6. As discussed, the AdminClient API is phase 2 and will go after the Wire Protocol extensions. It will be finalized as java-doc after I complete the patch for phase 1 - Wire
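The "prohibit multiple instructions per topic" option discussed above is easy to enforce at request-validation time: if a topic appears in more than one instruction of a batch, reject every instruction for that topic with a topic-level error rather than guessing an execution order. A minimal sketch; the error name is illustrative, not a finalized protocol code:

```python
from collections import Counter

MULTIPLE_INSTRUCTIONS_ERROR = "MultipleInstructionsForOneTopic"  # hypothetical name

def validate_batch(instructions):
    """instructions: list of (topic, change) pairs.
    Returns a per-instruction error (or None) in input order."""
    seen = Counter(topic for topic, _ in instructions)
    return [MULTIPLE_INSTRUCTIONS_ERROR if seen[topic] > 1 else None
            for topic, _ in instructions]
```

Returning the error for all duplicated instructions (not just the second and later ones) matches the "more predictable for clients" option discussed later in the thread.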
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
1. Yes, this will be much easier. Okay, let's add it.
2. Okay. This will differ a little bit from the way kafka-topics.sh currently handles the alter-topic command, but I think it's a reasonable restriction. I'll update the KIP according to our weekly call.

Thanks, Andrii Biletskyi

On Mon, Apr 20, 2015 at 10:56 PM, Jun Rao j...@confluent.io wrote:

1. Yes, lag is probably only going to be useful for the admin client. However, so is isr. It seems to me that we should get lag and isr from the same request. I was thinking that we can just extend TMR by changing replicas from an array of int to an array of (int, lag) pairs. Is that too complicated?
3. I was thinking that we just don't allow the cli to change more than one thing at a time. So, you will get an error if you want to change both partitions and configs.

Thanks, Jun

On Sun, Apr 19, 2015 at 8:22 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

Jun,
1. Yes, it seems we can add lag info to the TMR. But before that, I wonder whether there are other reasons we need this info besides the reassign-partitions command? As we discussed earlier, the problem of poor monitoring capabilities for reassign-partitions (currently we only inform users Completed/In Progress per partition) may require a separate solution. We were thinking about a separate Wire protocol request, and I actually like your idea of adding some sort of BrokerMetadataRequest for these purposes. I actually think we can cover some other items there (like rack-awareness), but for me it deserves a separate KIP really. Also, adding a Replica-Lag map per partition will make TopicMetadataResponse very sophisticated: Map[TopicName, Map[PartitionId, Map[ReplicaId, Lag]]]. Maybe we need to leave it for a moment and propose a new request rather than taking another step towards one monster request.
2. Yes, error per topic. The only question is whether we should execute at least the very first alter-topic command from the duplicated topic set or return an error for all ...
I think the more predictable and reasonable option for clients would be returning errors for all duplicated topics.
3. Hm, yes. Actually, we also have change-topic-config there, but it is not related to such replication commands as increase replicas or change replica assignment. This will make the CLI implementation a bit strange: if the user specifies increase-partitions and change-topic-config in one line, then taking 2. into account we will have to create two separate alter-topic requests, which were designed as batch requests :), but probably we can live with it. Okay, I will think about a separate error code to cover such cases.
4. We will need InvalidArgumentTopic (e.g. contains prohibited chars), IAPartitions, IAReplicas, IAReplicaAssignment, IATopicConfiguration. A server-side implementation will be a little bit messy (dozens of "if this, then this error code" branches), but maybe we should think about clients in the first place here.

Thanks, Andrii Biletskyi

On Fri, Apr 17, 2015 at 1:46 AM, Jun Rao j...@confluent.io wrote:

1. For the lags, we can add a new field, lags, per partition. It will return, for each replica that's not in isr, the replica id and the lag in messages. Also, if TMR is sent to a non-leader, the response can just include an empty array for isr and lags.
2. So, we will just return a topic-level error for the duplicated topics, right?
3. Yes, it's true that today one can specify both partitions and replicaAssignment in the TopicCommand. However, partitions is actually ignored. So, it will be clearer if we don't allow users to do this.
4. How many specific error codes like InvalidPartitions and InvalidReplicas are needed? If it's not that many, giving out more specific errors will be useful for non-java clients.

Thanks, Jun

On Wed, Apr 15, 2015 at 10:23 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

Guys, Thanks for the discussion! Summary:
1. Q: How can KAFKA-1367 (isr is inconsistent in brokers' metadata cache) affect the implementation?
A: We can fix this issue for the leading broker - ReplicaManager (or Partition) component should have accurate isr list, then with leading broker having correct info, to do a describe-topic we will need to define leading brokers for partitions and ask those for a correct isr list. Also, we should consider adding lag information to TMR for each follower for partition reassignment, as Jun suggested above. 2. Q: What if user adds different alter commands for the same topic in scope of one batch request? A: Because of the async nature of AlterTopicRequest it will be very hard then to assemble the expected (in terms of checking
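Two shapes for exposing lag are discussed in this thread: Jun's proposal to turn the TMR replicas field into (replica_id, lag) pairs, and the nested map Andrii writes as Map[TopicName, Map[PartitionId, Map[ReplicaId, Lag]]]. They carry the same information; a small conversion sketch (all data shapes here are illustrative, not actual wire-protocol structures):

```python
def to_nested_lag_map(tmr):
    """tmr: {topic: {partition: [(replica_id, lag), ...]}}
    Returns the nested Map[TopicName, Map[PartitionId, Map[ReplicaId, Lag]]] view."""
    return {topic: {partition: dict(pairs)
                    for partition, pairs in partitions.items()}
            for topic, partitions in tmr.items()}
```

The equivalence is part of why the thread treats this as one design question: however it is encoded, lag is per-(partition, replica) data that only the partition leader can answer accurately.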
Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)
Guys, Thank you for your time. A short summary of our discussion, answering the previous items:
1., 2. I will double-check the existing error codes to align the list of errors that need to be added.
3. We agreed to think again about the batch request semantics. The main concern is that users would expect us to allow executing multiple instructions for one topic in one batch. I will start the implementation and check whether there are any impediments to handling it this way. The same for AlterTopicRequest - I will try to make the request semantics as easy as possible and allow users to change different things at one time - e.g. change the number of partitions and replicas in one instruction.
4. We agreed not to add lag information to TMR.
5. We discussed the preferred replica command, and it was pointed out that generally users shouldn't call this command manually now since it is handled automatically by the cluster. If there are no objections (especially from devops people) I will remove the respective request.
6. As discussed, the AdminClient API is phase 2 and will go after the Wire Protocol extensions. It will be finalized as java-doc after I complete the patch for phase 1 - Wire Protocol + server-side code handling the requests.

Thanks, Andrii Biletskyi

On Wed, Apr 22, 2015 at 12:36 AM, Jay Kreps jay.kr...@gmail.com wrote:

Hey Andrii, thanks for all the hard work on this, it has come a long way. A couple questions and comments on this. For the errors, can we do the following:
1. Remove IllegalArgument from the name; we haven't used that convention for other errors.
2. Normalize this list with the existing errors. For example, elsewhere when you give an invalid topic name we give back an InvalidTopicException, but this is proposing a new error for that. It would be good if these kinds of errors were handled the same way across all requests in the protocol.
Other comments:
3. I don't understand MultipleInstructionsForOneTopic and MultipleTopicInstructionsInOneBatch, and the description is quite vague.
There is some implicit assumption in this proposal about how batching will be done that doesn't seem to be explained.
4. I think adding replica lag to the metadata request is out of place and should not be in the metadata request, for two reasons: a. This is something that can only be answered by the leader for that partition, so querying N partitions fundamentally means querying (roughly) N brokers. This is different from the other properties, which are shared knowledge. b. This is a monitoring property, not a configuration/metadata property. I recommend we remove this here and in the future add an API that gets all the monitoring stats from the server, including lag. Adding all of these to the metadata request won't make sense, right?
5. This includes a special request for preferred replica leader election. I feel that we should not expose an API for this because the user should not be in the business of managing leaders. We have gotten this feature to the point where preferred leadership election is enabled automatically. I think we should go further in that direction and do whatever work is required to make this the only option, rather than trying to institute public APIs for manually controlling it.
6. The API changes we discussed for the java api still aren't reflected in the proposal.
-Jay

On Tue, Apr 21, 2015 at 7:47 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

Hi all, I've updated the KIP-4 page to include all previously discussed items such as: new error codes, merged alter-topic and reassign-partitions requests, added TMR_V1. It'd be great if we could concentrate on the Errors + Wire Protocol schema and discuss any remaining issues today, since the first patch will include only the server-side implementation.

Thanks, Andrii Biletskyi

On Tue, Apr 21, 2015 at 9:46 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

1. Yes, this will be much easier. Okay, let's add it.
2. Okay.
This will differ a little bit from the way kafka-topics.sh currently handles the alter-topic command, but I think it's a reasonable restriction. I'll update the KIP according to our weekly call.

Thanks, Andrii Biletskyi

On Mon, Apr 20, 2015 at 10:56 PM, Jun Rao j...@confluent.io wrote:

1. Yes, lag is probably only going to be useful for the admin client. However, so is isr. It seems to me that we should get lag and isr from the same request. I was thinking that we can just extend TMR by changing replicas from an array of int to an array of (int, lag) pairs. Is that too complicated?
3. I was thinking that we just don't allow the cli to change more than one thing at a time. So, you will get an error if you want to change both partitions and configs.

Thanks, Jun

On Sun, Apr 19, 2015 at 8:22 AM, Andrii Biletskyi andrii.bilets...@stealth.ly wrote:

Jun, 1. Yes, it seems we can add lag info to the TMR. But before
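Jay's error-naming points (drop the "IllegalArgument" prefix; reuse an existing protocol error where one already fits, e.g. the existing InvalidTopicException for a bad topic name) amount to a renaming/normalization pass over the proposed error list. A hedged sketch of the idea - the mapping targets are illustrative, not the finalized error list:

```python
RENAMED = {
    # proposed name                     -> normalized / reused name (assumed)
    "InvalidArgumentTopic":             "InvalidTopic",  # existing error reused
    "InvalidArgumentPartitions":        "InvalidPartitions",
    "InvalidArgumentReplicas":          "InvalidReplicas",
    "InvalidArgumentReplicaAssignment": "InvalidReplicaAssignment",
}

def normalize_error(name):
    """Map a proposed error name to its normalized form (identity if unknown)."""
    return RENAMED.get(name, name)
```

Keeping this mapping explicit makes it easy to audit point 2: any entry whose target already exists in the protocol's error table should reuse that code rather than mint a new one.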