> On Jan. 7, 2015, 3:01 a.m., Jun Rao wrote:
> > Thanks for the patch. It's a lot of work! A few general comments below, in 
> > addition to the more detailed comments.
> > 
> > 1. Formatting: add a space after the comma in function signatures and 
> > function calls. So instead of 
> >     def foo(a: A,b: B)
> >   use
> >     def foo(a: A, b: B)
> >     
> > 2. In some of the files, imports can be optimized.
> > 3. Some new files are missing the license header.
> > 4. It seems that the client needs to know the security protocol in addition 
> > to the port. Otherwise, it doesn't know which protocol to use to establish 
> > the socket connection. So, perhaps the broker.list should now be 
> > protocol://host:port?
> 
> Gwen Shapira wrote:
>     For #4 - we currently have a security.protocol parameter for the 
> producer and consumer, the reasons being that:
>     1. broker.list may specify multiple brokers, but I think we want to keep 
> communication to one protocol per client. We can validate a single protocol 
> in the config, but it is still a bit weird - asking users to type the 
> protocol multiple times when it has to be the same.
>     2. The ZooKeeper consumer currently doesn't have broker.list; it has a 
> ZooKeeper list, but it still needs to know the protocol.

That sounds good. Is that change not included in this patch?
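
Just to make sure we are describing the same approach, here is a rough sketch 
of that client-side configuration - a single security.protocol per client, 
applied to every broker in the list. The property names and values below are 
illustrative only, not the final config keys:

    // Illustrative sketch only: one security.protocol per client, applied to
    // every broker in broker.list, instead of repeating protocol://host:port.
    import java.util.Properties

    val props = new Properties()
    props.setProperty("broker.list", "broker1:9092,broker2:9092,broker3:9092")
    props.setProperty("security.protocol", "PLAINTEXT")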


> On Jan. 7, 2015, 3:01 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/cluster/Broker.scala, lines 36-52
> > <https://reviews.apache.org/r/28769/diff/11/?file=807930#file807930line36>
> >
> >     It would be good to document the format for both v1 and v2.
> >     
> >     Also, would it be cleaner if, in v2, we removed the host/port fields 
> > and only included the endpoints field? Currently, v2 writes host/port but 
> > doesn't use them itself, so they're really just there for backward 
> > compatibility. Since we have the "use.new.wire.protocol" config, we can let 
> > each broker register in ZK using the v2 format only after that config is 
> > set to true (i.e., when all brokers have been upgraded to the new binary).
> 
> Gwen Shapira wrote:
>     I think it makes sense to leave V1 as "without endpoints" and make V2 
> "only endpoints, no host/port fields". 
>     
>     We can use use.new.wire.protocol to choose which version we serialize. 
> This means that until we set use.new.wire.protocol to true, non-plaintext 
> endpoints will exist in config files (and therefore brokers will listen 
> there), but will not exist in ZK. 
>     
>     This is in line with our current understanding that until the upgrade is 
> finished (and use.new.wire.protocol is set to "true"), non-plaintext 
> endpoints will not be supported or used.
>     
>     A less expected side-effect:
>     At the time we switch from V1 to V2, 
> createEphemeralPathExpectConflictHandleZKBug will not recognize the existing 
> broker (if the ephemeral node is still around) as identical to the new 
> registration (since the new broker will potentially have more endpoints). 
> This means that if the ephemeral node is still around when we switch, we'll 
> get a "broker already registered" exception instead of looping around until 
> the ephemeral node goes away. 
>     
>     I think we are fine with the behavior here, but I wanted to make it 
> explicit.

Yes, that typically only happens when the ZK session expires, which is rare.
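
For illustration only, here is roughly what the two registration payloads 
under discussion could look like. The field names and layout below are my 
assumptions based on this thread, not the final ZK format:

    // Rough sketch of the ZK registration payloads being discussed; field
    // names and layout are assumptions, not the final format.
    // v1: top-level host/port, no endpoints field.
    val v1Registration =
      """{"version":1,"host":"broker1","port":9092,"jmx_port":-1}"""

    // v2 (as proposed): endpoints only, no top-level host/port; written to ZK
    // only once use.new.wire.protocol is set to true on all brokers.
    val v2Registration =
      """{"version":2,"endpoints":["PLAINTEXT://broker1:9092","SSL://broker1:9093"],"jmx_port":-1}"""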


> On Jan. 7, 2015, 3:01 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/cluster/EndPoint.scala, line 36
> > <https://reviews.apache.org/r/28769/diff/11/?file=807932#file807932line36>
> >
> >     Do we need the dot after -?
> 
> Gwen Shapira wrote:
>     Yes. Because of the brackets it's an actual dot (it doesn't match "any 
> character"), and it's needed because there's an IP in there; we need to 
> match 1.1.1.1.

Makes sense.
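
For anyone reading along, a small self-contained example of the point. The 
pattern below is illustrative, not the exact expression in EndPoint.scala; 
inside the character class, '.' is a literal dot, which is what lets an IPv4 
host like 1.1.1.1 match:

    // Illustrative pattern only -- not the exact regex in EndPoint.scala.
    val endpointPattern = """^(.*)://([0-9a-zA-Z\-.:]*):([0-9]+)$""".r

    "PLAINTEXT://1.1.1.1:9092" match {
      case endpointPattern(protocol, host, port) =>
        println(s"protocol=$protocol, host=$host, port=${port.toInt}")
      case _ =>
        println("no match")
    }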


> On Jan. 7, 2015, 3:01 a.m., Jun Rao wrote:
> > core/src/main/scala/kafka/network/SocketServer.scala, line 105
> > <https://reviews.apache.org/r/28769/diff/11/?file=807946#file807946line105>
> >
> >     I think each acceptor needs its own set of processors. We probably need 
> > to change the doc for num.network.threads to indicate that it's per 
> > endpoint.
> 
> Gwen Shapira wrote:
>     I'm not sure I understand why. Is it because you expect different 
> processor implementations later?

You are right. I was thinking that we would implement different types of socket 
support at the processor level, but that actually happens at the socket channel 
level.
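
To spell out how I now understand the patch, here is a purely structural 
sketch. The names below are placeholders, not the actual SocketServer classes: 
one acceptor per configured endpoint, all sharing a single pool of processors, 
with protocol-specific handling done at the socket channel level:

    // Placeholder names -- a structural sketch, not the actual SocketServer code.
    case class EndPoint(host: String, port: Int, protocolId: Short)

    class Processor(val id: Int)                     // stands in for a network I/O thread
    class Acceptor(val endPoint: EndPoint,
                   val processors: Array[Processor]) // stands in for the per-endpoint acceptor

    val numNetworkThreads = 3                        // one pool, shared across all endpoints
    val processors = Array.tabulate(numNetworkThreads)(i => new Processor(i))
    val endPoints = Seq(EndPoint("0.0.0.0", 9092, 0), EndPoint("0.0.0.0", 9093, 1))
    val acceptors = endPoints.map(ep => new Acceptor(ep, processors))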


- Jun


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review66944
-----------------------------------------------------------


On Jan. 6, 2015, 7:46 p.m., Gwen Shapira wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> -----------------------------------------------------------
> 
> (Updated Jan. 6, 2015, 7:46 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
>     https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> -------
> 
> first commit of refactoring.
> 
> 
> changed topicmetadata to include brokerendpoints and fixed a few unit tests
> 
> 
> fixing systest and support for binding to default address
> 
> 
> fixed system tests
> 
> 
> fix default address binding and ipv6 support
> 
> 
> fix some issues regarding endpoint parsing. Also, larger segments for systest 
> make the validation much faster
> 
> 
> added link to security wiki in doc
> 
> 
> fixing unit test after rename of ProtocolType to SecurityProtocol
> 
> 
> Following Joe's advice, added security protocol enum on client side, and 
> modified protocol to use ID instead of string.
> 
> 
> validate producer config against enum
> 
> 
> add a second protocol for testing and modify SocketServerTests to check on 
> multi-ports
> 
> 
> Reverted the metadata request changes and removed the explicit security 
> protocol argument. Instead the socketserver will determine the protocol based 
> on the port and add this to the request
> 
> 
> bump version for UpdateMetadataRequest and added support for rolling upgrades 
> with new config
> 
> 
> following tests - fixed LeaderAndISR protocol and ZK registration for 
> backward compatibility
> 
> 
> cleaned up some changes that were actually not necessary. hopefully making 
> this patch slightly easier to review
> 
> 
> undoing some changes that don't belong here
> 
> 
> bring back config lost in cleanup
> 
> 
> fixes necessary for an all non-plaintext cluster to work
> 
> 
> Diffs
> -----
> 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> 525b95e98010cd2053eacd8c321d079bcac2f910 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> 527dd0f9c47fce7310b7a37a9b95bf87f1b9c292 
>   clients/src/test/java/org/apache/kafka/common/utils/ClientUtilsTest.java 
> 6e37ea553f73d9c584641c48c56dbf6e62ba5f88 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> a39fab532f73148316a56c0f8e9197f38ea66f79 
>   config/server.properties b0e4496a8ca736b6abe965a430e8ce87b0e8287f 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> 28b12c7b89a56c113b665fbde1b95f873f8624a3 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> 285c0333ff43543d3e46444c1cd9374bb883bb59 
>   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
> 84f60178f6ebae735c8aa3e79ed93fe21ac4aea7 
>   core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
> 4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
>   core/src/main/scala/kafka/api/TopicMetadata.scala 
> 0190076df0adf906ecd332284f222ff974b315fc 
>   core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
> 92ac4e687be22e4800199c0666bfac5e0059e5bb 
>   core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
> 530982e36b17934b8cc5fb668075a5342e142c59 
>   core/src/main/scala/kafka/client/ClientUtils.scala 
> ebba87f0566684c796c26cb76c64b4640a5ccfde 
>   core/src/main/scala/kafka/cluster/Broker.scala 
> 0060add008bb3bc4b0092f2173c469fce0120be6 
>   core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
>   core/src/main/scala/kafka/cluster/SecurityProtocol.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
> 9ebbee6c16dc83767297c729d2d74ebbd063a993 
>   core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
> b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
>   core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
> ee6139c901082358382c5ef892281386bf6fc91b 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 191a8677444e53b043e9ad6e94c5a9191c32599e 
>   core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
> eb492f00449744bc8d63f55b393e2a1659d38454 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
>   core/src/main/scala/kafka/javaapi/ConsumerMetadataResponse.scala 
> 1b28861cdf7dfb30fc696b54f8f8f05f730f4ece 
>   core/src/main/scala/kafka/javaapi/TopicMetadata.scala 
> f384e04678df10a5b46a439f475c63371bf8e32b 
>   core/src/main/scala/kafka/javaapi/TopicMetadataRequest.scala 
> b0b7be14d494ae8c87f4443b52db69d273c20316 
>   core/src/main/scala/kafka/network/BlockingChannel.scala 
> 6e2a38eee8e568f9032f95c75fa5899e9715b433 
>   core/src/main/scala/kafka/network/RequestChannel.scala 
> 7b1db3dbbb2c0676f166890f566c14aa248467ab 
>   core/src/main/scala/kafka/network/SocketServer.scala 
> e451592fe358158548117f47a80e807007dd8b98 
>   core/src/main/scala/kafka/producer/ProducerPool.scala 
> 43df70bb461dd3e385e6b20396adef3c4016a3fc 
>   core/src/main/scala/kafka/server/AbstractFetcherManager.scala 
> 20c00cb8cc2351950edbc8cb1752905a0c26e79f 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> 8c281d4668f92eff95a4a5df3c03c4b5b20e7095 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 2a1c0326b6e6966d8b8254bd6a1cb83ad98a3b80 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 6e26c5436feb4629d17f199011f3ebb674aa767f 
>   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
> 4acdd70fe9c1ee78d6510741006c2ece65450671 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 1bf7d10cef23a77e716666eb16bf6d0e68bc4ebe 
>   core/src/main/scala/kafka/server/MetadataCache.scala 
> bf81a1ab88c14be8697b441eedbeb28fa0112643 
>   core/src/main/scala/kafka/server/ReplicaFetcherManager.scala 
> 351dbbad3bdb709937943b336a5b0a9e0162a5e2 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> 6879e730282185bda3d6bc3659cb15af0672cecf 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> e58fbb922e93b0c31dff04f187fcadb4ece986d7 
>   core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
> d1e7c434e77859d746b8dc68dd5d5a3740425e79 
>   core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala 
> ba6ddd7a909df79a0f7d45e8b4a2af94ea0fceb6 
>   core/src/main/scala/kafka/tools/SimpleConsumerShell.scala 
> b4f903b6c7c3bb725cac7c05eb1f885906413c4d 
>   core/src/main/scala/kafka/tools/UpdateOffsetsInZK.scala 
> 111c9a8b94ce45d95551482e9fd3f8c1cccbf548 
>   core/src/main/scala/kafka/utils/Utils.scala 
> 738c1af9ef5de16fdf5130daab69757a14c48b5c 
>   core/src/main/scala/kafka/utils/ZkUtils.scala 
> 56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 
>   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
> 5ec613cdb50b93bfe4477e89554dfc3768759b18 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> 6196060edf9f1650720ec916f88933953a1daa2c 
>   core/src/test/scala/other/kafka/TestOffsetManager.scala 
> 41f334d48897b3027ed54c58bbf4811487d3b191 
>   core/src/test/scala/unit/kafka/KafkaConfigTest.scala 
> 4d36b8b1173f60d43463c13c9d8c1275a84c8c28 
>   core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
> 1bf2667f47853585bc33ffb3e28256ec5f24ae84 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> cd16ced5465d098be7a60498326b2a98c248f343 
>   core/src/test/scala/unit/kafka/cluster/BrokerTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
> c0355cc0135c6af2e346b4715659353a31723b86 
>   core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
> 25845abbcad2e79f56f729e59239b738d3ddbc9d 
>   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
> 35dc071b1056e775326981573c9618d8046e601d 
>   core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
> 5f4d85254c384dcc27a5a84f0836ea225d3a901a 
>   core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
> 1db6ac329f7b54e600802c8a623f80d159d4e69b 
>   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
> d60d8e0f49443f4dc8bc2cad6e2f951eda28f5cb 
>   core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala 
> f0c4a56b61b4f081cf4bee799c6e9c523ff45e19 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 2377abe4933e065d037828a214c3a87e1773a8ef 
>   core/src/test/scala/unit/kafka/server/LeaderElectionTest.scala 
> c2ba07c5fdbaf0e65ca033b2e4d88f45a8a15b2e 
>   core/src/test/scala/unit/kafka/server/LogOffsetTest.scala 
> c06ee756bf0fe07e5d3c92823a476c960b37afd6 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 94d0028d8c4907e747aa8a74a13d719b974c97bf 
>   system_test/replication_testsuite/testcase_1/testcase_1_properties.json 
> 0c6d7a316cc6b51ac0755ca03558507db0706c31 
>   system_test/utils/kafka_system_test_utils.py 
> 41d511cbc310fa87e0f2cd2f772e479e8e3ae4e2 
> 
> Diff: https://reviews.apache.org/r/28769/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Gwen Shapira
> 
>
