Re: [ANNOUNCE] New committer: Grant Henke

2017-01-13 Thread Jeff Holoman
Well done Grant!  Congrats!

On Thu, Jan 12, 2017 at 1:13 PM, Joel Koshy  wrote:

> Hey Grant - congrats!
>
> On Thu, Jan 12, 2017 at 10:00 AM, Neha Narkhede  wrote:
>
> > Congratulations, Grant. Well deserved!
> >
> > On Thu, Jan 12, 2017 at 7:51 AM Grant Henke  wrote:
> >
> > > Thanks everyone!
> > >
> > > On Thu, Jan 12, 2017 at 2:58 AM, Damian Guy 
> > wrote:
> > >
> > > > Congratulations!
> > > >
> > > > On Thu, 12 Jan 2017 at 03:35 Jun Rao  wrote:
> > > >
> > > > > Grant,
> > > > >
> > > > > Thanks for all your contribution! Congratulations!
> > > > >
> > > > > Jun
> > > > >
> > > > > On Wed, Jan 11, 2017 at 2:51 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > The PMC for Apache Kafka has invited Grant Henke to join as a
> > > > > > committer and we are pleased to announce that he has accepted!
> > > > > >
> > > > > > Grant contributed 88 patches, 90 code reviews, countless great
> > > > > > comments on discussions, a much-needed cleanup to our protocol
> and
> > > the
> > > > > > on-going and critical work on the Admin protocol. Throughout
> this,
> > he
> > > > > > displayed great technical judgment, high-quality work and
> > willingness
> > > > > > to contribute where needed to make Apache Kafka awesome.
> > > > > >
> > > > > > Thank you for your contributions, Grant :)
> > > > > >
> > > > > > --
> > > > > > Gwen Shapira
> > > > > > Product Manager | Confluent
> > > > > > 650.450.2760 | @gwenshap
> > > > > > Follow us: Twitter | blog
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Grant Henke
> > > Software Engineer | Cloudera
> > > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> > >
> > --
> > Thanks,
> > Neha
> >
>


[jira] [Updated] (KAFKA-2757) Consolidate BrokerEndPoint and EndPoint

2016-01-06 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-2757:

Assignee: (was: Jeff Holoman)

> Consolidate BrokerEndPoint and EndPoint
> ---
>
> Key: KAFKA-2757
> URL: https://issues.apache.org/jira/browse/KAFKA-2757
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
> Fix For: 0.9.0.1
>
>
> For code simplicity, it's better to consolidate these two classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] New Kafka Committer Ewen Cheslack-Postava

2015-12-08 Thread Jeff Holoman
Well done Ewen. Congrats.

On Tue, Dec 8, 2015 at 3:18 PM, Harsha <ka...@harsha.io> wrote:

> Congrats Ewen.
> -Harsha
>
> On Tue, Dec 8, 2015, at 12:08 PM, Ashish Singh wrote:
> > Congrats Ewen!
> >
> > On Tuesday, December 8, 2015, Joe Stein <joe.st...@stealth.ly> wrote:
> >
> > > Ewen,
> > >
> > > Congrats!
> > >
> > > ~ Joestein
> > >
> > > > On Tue, Dec 8, 2015 at 2:51 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> > >
> > > > Congrats Ewen! Welcome onboard.
> > > >
> > > > Guozhang
> > > >
> > > > On Tue, Dec 8, 2015 at 11:42 AM, Liquan Pei <liquan...@gmail.com> wrote:
> > > >
> > > > > Congrats, Ewen!
> > > > >
> > > > > > On Tue, Dec 8, 2015 at 11:37 AM, Neha Narkhede <n...@confluent.io> wrote:
> > > > >
> > > > > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > > > > invite Ewen Cheslack-Postava as a committer and Ewen has
> accepted.
> > > > > >
> > > > > > Ewen is an active member of the community and has contributed and
> > > > > reviewed
> > > > > > numerous patches to Kafka. His most significant contribution is
> Kafka
> > > > > > Connect, which was released a few days ago as part of 0.9.
> > > > > >
> > > > > > Please join me on welcoming and congratulating Ewen.
> > > > > >
> > > > > > Ewen, we look forward to your continued contributions to the
> Kafka
> > > > > > community!
> > > > > >
> > > > > > --
> > > > > > Thanks,
> > > > > > Neha
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Liquan Pei
> > > > > Department of Physics
> > > > > University of Massachusetts Amherst
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > Ashish h
>



-- 
Jeff Holoman
Systems Engineer


Re: New and updated producers and consumers

2015-11-05 Thread Jeff Holoman
Prabhjot,

The answer changes slightly for the Producer and Consumer and depends on
your timeline and comfort with using new APIs.

Today and in the future, for the Producer, you should be using the "new"
producer, which isn't all that new anymore:
org.apache.kafka.clients.producer.KafkaProducer;


Today, with 0.9 yet to be released, you'd likely want to use the High-Level
Consumer. This is covered in the official docs here:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example and
in this blog post
http://ingest.tips/2014/10/12/kafka-high-level-consumer-frequently-missing-pieces/
along
with most of the other examples that you'll find.

After 0.9 is released, I'd encourage you to take a look at the new Consumer
API. This has a lot of advantages in terms of offset management and will be
the only consumer client that fully supports security features like SSL
that are slated to be released into the platform.

Your choice of development language is entirely up to you. Note that the
only version of clients that will be maintained in the project going
forward are being implemented in Java, so Scala or Java shouldn't matter
too much for you.

Hope this helps

Jeff
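To make the advice above concrete, here is a minimal sketch of the configuration the "new" producer takes. Only the Properties construction is executed; the broker address and serializer choices are illustrative placeholders, and the actual KafkaProducer calls are shown only in comments so the snippet runs without a broker or the kafka-clients jar:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Builds the minimal config the new producer expects. The broker
    // address and serializer choices below are placeholders.
    static Properties producerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerConfig();
        // With kafka-clients on the classpath and a broker running, the real
        // calls would look roughly like this (not executed here):
        //   Producer<String, String> producer = new KafkaProducer<>(props);
        //   producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        //   producer.close();
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```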


On Thu, Nov 5, 2015 at 12:14 PM, Prabhjot Bharaj <prabhbha...@gmail.com>
wrote:

> Hello Folks,
>
> Requesting your expertise on this.
> I see that under core/src/main/scala/kafka/producer/, there are many
> implementations - Producer.scala and SyncProducer.scala
>
> Also, going by ProducerPerformance.scala, there are 2 implementations
> - NewShinyProducer (which points to KafkaProducer.java) and the OldProducer
>
> Similar might be the case with Consumers, but I have not seen that yet.
>
> Please let me know which producer and consumer is supposed to be used and
> which ones will be phased out in future releases, so I can focus on only 1
> type of producer and consumer (high level as well as simple)
>
> Thanks,
> Prabhjot
>
> Thanks,
> Prabhjot
>
> On Thu, Nov 5, 2015 at 3:55 PM, Prabhjot Bharaj <prabhbha...@gmail.com>
> wrote:
>
> > Adding users as well
> >
> > On Thu, Nov 5, 2015 at 3:37 PM, Prabhjot Bharaj <prabhbha...@gmail.com>
> > wrote:
> >
> >> Hi,
> >>
> >> I'm using the latest update: 0.8.2.2
> >> I would like to use the latest producer and consumer apis
> >> over the past few weeks, I have tried to do some performance
> benchmarking
> >> using the producer and consumer scripts provided in the bin directory.
> It
> >> was a fun activity and I have learnt a lot about kafka.
> >>
> >> But, I have also experienced that sometimes the implementation of the
> >> performance scripts was not up-to-date or some items were different than
> >> the documentation
> >>
> >> Now, I would like to develop my application with kafka. I'm comfortable
> >> using scala/java
> >>
> >> Please let me know which producer and consumer (both high level and
> >> simple) class/object should I be using
> >>
> >> Thanks a lot,
> >> Prabhjot
> >>
> >
> >
> >
> > --
> > -----
> > "There are only 10 types of people in the world: Those who understand
> > binary, and those who don't"
> >
>
>
>
> --
> -
> "There are only 10 types of people in the world: Those who understand
> binary, and those who don't"
>



-- 
Jeff Holoman
Systems Engineer


Re: New and updated producers and consumers

2015-11-05 Thread Jeff Holoman
The best thing that I know is the latest javadoc that's committed to trunk:

https://github.com/apache/kafka/blob/ef5d168cc8f10ad4f0efe9df4cbe849a4b35496e/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java

Thanks

Jeff
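For a rough feel of the new consumer the javadoc above describes, here is a sketch of its configuration; addresses and the group name are placeholders, and the poll loop appears only in comments so the snippet runs stand-alone without a broker or the kafka-clients jar:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // Minimal config for the new KafkaConsumer; all values are placeholders.
    static Properties consumerConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "my-group");              // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerConfig();
        // With kafka-clients available, the poll loop from the javadoc looks
        // roughly like this (not executed here):
        //   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        //   consumer.subscribe(java.util.List.of("my-topic"));
        //   while (true)
        //       for (ConsumerRecord<String, String> r : consumer.poll(100))
        //           System.out.println(r.offset() + ": " + r.value());
        System.out.println(props.getProperty("group.id"));
    }
}
```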



On Thu, Nov 5, 2015 at 12:51 PM, Cliff Rhyne <crh...@signal.co> wrote:

> Hi Jeff,
>
> Is there a writeup of how to use the new consumer API (either in general or
> for Java)?  I've seen various proposals but I don't see a recent one on the
> actual implementation.  My team wants to start the development work to
> migrate to 0.9.
>
> Thanks,
> Cliff
>
> On Thu, Nov 5, 2015 at 11:18 AM, Jeff Holoman <jholo...@cloudera.com>
> wrote:
>
> > Prabhjot,
> >
> > The answer changes slightly for the Producer and Consumer and depends on
> > your timeline and comfort with using new APIs
> >
> > Today and in the future, for the Producer, you should be using the "new"
> > producer, which isn't all that new anymore:
> > org.apache.kafka.clients.producer.KafkaProducer;
> >
> >
> > Today with 0.9 yet to be released you'd likely want to use the High-Level
> > Consumer. This is covered in the official docs here:
> > https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
> > and
> > in this blog post
> >
> >
> http://ingest.tips/2014/10/12/kafka-high-level-consumer-frequently-missing-pieces/
> > along
> > with most of the other examples that you'll find.
> >
> > After 0.9 is released, I'd encourage you to take a look at the new
> Consumer
> > API. This has a lot of advantages in terms of offset management and will
> be
> > the only consumer client that fully supports security features like SSL
> > that are slated to be released into the platform.
> >
> > Your choice of development language is entirely up to you. Note that the
> > only version of clients that will be maintained in the project going
> > forward are being implemented in Java, so Scala or Java shouldn't matter
> > too much for you.
> >
> > Hope this helps
> >
> > Jeff
> >
> >
> > On Thu, Nov 5, 2015 at 12:14 PM, Prabhjot Bharaj <prabhbha...@gmail.com>
> > wrote:
> >
> > > Hello Folks,
> > >
> > > Requesting your expertise on this.
> > > I see that under core/src/main/scala/kafka/producer/, there are many
> > > implementations - Producer.scala and SyncProducer.scala
> > >
> > > Also, going by ProducerPerformance.scala, there are 2
> > implementations
> > > - NewShinyProducer (which points to KafkaProducer.java) and the
> > OldProducer
> > >
> > > Similar might be the case with Consumers, but I have not seen that yet.
> > >
> > > Please let me know which producer and consumer is supposed to be used
> and
> > > which ones will be phased out in future releases, so I can focus on
> only
> > 1
> > > type of producer and consumer (high level as well as simple)
> > >
> > > Thanks,
> > > Prabhjot
> > >
> > > Thanks,
> > > Prabhjot
> > >
> > > On Thu, Nov 5, 2015 at 3:55 PM, Prabhjot Bharaj <prabhbha...@gmail.com
> >
> > > wrote:
> > >
> > > > Adding users as well
> > > >
> > > > On Thu, Nov 5, 2015 at 3:37 PM, Prabhjot Bharaj <
> prabhbha...@gmail.com
> > >
> > > > wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> I'm using the latest update: 0.8.2.2
> > > >> I would like to use the latest producer and consumer apis
> > > >> over the past few weeks, I have tried to do some performance
> > > benchmarking
> > > >> using the producer and consumer scripts provided in the bin
> directory.
> > > It
> > > >> was a fun activity and I have learnt a lot about kafka.
> > > >>
> > > >> But, I have also experienced that sometimes the implementation of
> the
> > > >> performance scripts was not up-to-date or some items were different
> > than
> > > >> the documentation
> > > >>
> > > >> Now, I would like to develop my application with kafka. I'm
> > comfortable
> > > >> using scala/java
> > > >>
> > > >> Please let me know which producer and consumer (both high level and
> > > >> simple) class/object should I be using
> > > >>
> > > >> Thanks a lot,
> > > >> Prabhjot
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > -
> > > > "There are only 10 types of people in the world: Those who understand
> > > > binary, and those who don't"
> > > >
> > >
> > >
> > >
> > > --
> > > -
> > > "There are only 10 types of people in the world: Those who understand
> > > binary, and those who don't"
> > >
> >
> >
> >
> > --
> > Jeff Holoman
> > Systems Engineer
> >
>
>
>
> --
> Cliff Rhyne
> Software Engineering Lead
> e: crh...@signal.co
> signal.co
> 
>
> Cut Through the Noise
>
> This e-mail and any files transmitted with it are for the sole use of the
> intended recipient(s) and may contain confidential and privileged
> information. Any unauthorized use of this email is strictly prohibited.
> ©2015 Signal. All rights reserved.
>



-- 
Jeff Holoman
Systems Engineer


[jira] [Created] (KAFKA-2735) BrokerEndPoint should support non-lowercase hostnames

2015-11-03 Thread Jeff Holoman (JIRA)
Jeff Holoman created KAFKA-2735:
---

 Summary: BrokerEndPoint should support non-lowercase hostnames
 Key: KAFKA-2735
 URL: https://issues.apache.org/jira/browse/KAFKA-2735
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Jeff Holoman
Assignee: Jeff Holoman


BrokerEndPoint uses a regex to parse the host:port and fails if the hostname 
contains uppercase characters.
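For illustration, here is a lowercase-only pattern in the spirit of the report, alongside a case-tolerant fix. These are illustrative regexes, not the exact one from BrokerEndPoint:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostPortParse {
    // Lowercase-only hostname pattern, illustrating the reported failure.
    static final Pattern LOWER_ONLY = Pattern.compile("([0-9a-z\\-.]*):([0-9]+)");
    // Fixed variant: also accept uppercase letters in the hostname.
    static final Pattern CASE_OK = Pattern.compile("([0-9a-zA-Z\\-.]*):([0-9]+)");

    static boolean matches(Pattern p, String hostPort) {
        Matcher m = p.matcher(hostPort);
        return m.matches();
    }

    public static void main(String[] args) {
        // The lowercase-only pattern rejects a mixed-case hostname...
        System.out.println(matches(LOWER_ONLY, "MyBroker.example.com:9092")); // false
        // ...while the case-tolerant one accepts it.
        System.out.println(matches(CASE_OK, "MyBroker.example.com:9092"));    // true
    }
}
```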





[jira] [Updated] (KAFKA-2735) BrokerEndPoint should support uppercase hostnames

2015-11-03 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-2735:

Summary: BrokerEndPoint should support uppercase hostnames  (was: 
BrokerEndPoint should support non-lowercase hostnames)

> BrokerEndPoint should support uppercase hostnames
> -
>
> Key: KAFKA-2735
> URL: https://issues.apache.org/jira/browse/KAFKA-2735
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>    Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>
> BrokerEndPoint uses a regex to parse the host:port and fails if the hostname 
> contains uppercase characters.





[jira] [Commented] (KAFKA-2549) Checkstyle reporting failure in trunk due to unused imports in Selector.java

2015-09-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744425#comment-14744425
 ] 

Jeff Holoman commented on KAFKA-2549:
-

Yeah sorry about that, I inadvertently cmd-z'd the import deletion. Thanks for 
fixing. 

> Checkstyle reporting failure in trunk due to unused imports in Selector.java
> 
>
> Key: KAFKA-2549
> URL: https://issues.apache.org/jira/browse/KAFKA-2549
> Project: Kafka
>  Issue Type: Bug
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>Priority: Blocker
>
> Introduced in 
> https://github.com/apache/kafka/commit/d02ca36ca1cccdb6962191b97f54ce96b9d75abc#diff-db8f8be6ef2f1c81515d1ed83b3ab107
>  in which the Selector.java was modified with some unused imports so the 
> trunk can not execute test targets as it fails in client section during 
> checkstyle stage.





[jira] [Updated] (KAFKA-2504) Stop logging WARN when client disconnects

2015-09-13 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-2504:

Status: Patch Available  (was: Open)

> Stop logging WARN when client disconnects
> -
>
> Key: KAFKA-2504
> URL: https://issues.apache.org/jira/browse/KAFKA-2504
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>    Assignee: Jeff Holoman
> Fix For: 0.9.0.0
>
>
> I thought we fixed this one, but it came back. This can fill logs and is
> fairly useless. Should be logged at DEBUG level:
> {code}
> [2015-09-02 12:05:59,743] WARN Error in I/O with connection to /10.191.0.36 
> (org.apache.kafka.common.network.Selector)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:111)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
>   at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:296)
>   at kafka.network.Processor.run(SocketServer.scala:405)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
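Until the level is changed in code, an operator could quiet this particular logger via logging configuration. A hypothetical log4j.properties fragment (the logger name comes from the class shown in the stack trace above):

```properties
# Hypothetical workaround: demote the noisy Selector logger
log4j.logger.org.apache.kafka.common.network.Selector=ERROR
```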





[jira] [Assigned] (KAFKA-2504) Stop logging WARN when client disconnects

2015-09-12 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman reassigned KAFKA-2504:
---

Assignee: Jeff Holoman

> Stop logging WARN when client disconnects
> -
>
> Key: KAFKA-2504
> URL: https://issues.apache.org/jira/browse/KAFKA-2504
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>    Assignee: Jeff Holoman
> Fix For: 0.9.0.0
>
>
> I thought we fixed this one, but it came back. This can fill logs and is
> fairly useless. Should be logged at DEBUG level:
> {code}
> [2015-09-02 12:05:59,743] WARN Error in I/O with connection to /10.191.0.36 
> (org.apache.kafka.common.network.Selector)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>   at 
> org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:111)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
>   at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:296)
>   at kafka.network.Processor.run(SocketServer.scala:405)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-09-04 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730833#comment-14730833
 ] 

Jeff Holoman commented on KAFKA-1929:
-

Jason, I'm not working on this at this time. It was previously picked up by 
[~granthenke] but I don't think he's working on it currently. 




> Convert core kafka module to use the errors in org.apache.kafka.common.errors
> -
>
> Key: KAFKA-1929
> URL: https://issues.apache.org/jira/browse/KAFKA-1929
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Grant Henke
> Attachments: KAFKA-1929.patch
>
>
> With the introduction of the common package there are now a lot of errors 
> duplicated in both the common package and in the server. We should refactor 
> the server code (but not the scala clients) to switch over to the exceptions 
> in common.





Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-16 Thread Jeff Holoman
+1 for the release and also including

https://issues.apache.org/jira/browse/KAFKA-2114

Thanks

Jeff

On Sun, Aug 16, 2015 at 2:51 PM, Stevo Slavić ssla...@gmail.com wrote:

 +1 (non-binding) for 0.8.2.2 release

 Would be nice to include in that release new producer resiliency bug fixes
 https://issues.apache.org/jira/browse/KAFKA-1788 and
 https://issues.apache.org/jira/browse/KAFKA-2120

 On Fri, Aug 14, 2015 at 4:03 PM, Gwen Shapira g...@confluent.io wrote:

  Will be nice to include Kafka-2308 and fix two critical snappy issues in
  the maintenance release.
 
  Gwen
  On Aug 14, 2015 6:16 AM, Grant Henke ghe...@cloudera.com wrote:
 
   Just to clarify. Will KAFKA-2189 be the only patch in the release?
  
   On Fri, Aug 14, 2015 at 7:35 AM, Manikumar Reddy ku...@nmsworks.co.in
 
   wrote:
  
+1  for 0.8.2.2 release
   
On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma ism...@juma.me.uk
  wrote:
   
 I think this is a good idea as the change is minimal on our side
 and
  it
has
 been tested in production for some time by the reporter.

 Best,
 Ismael

 On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao j...@confluent.io wrote:

  Hi, Everyone,
 
  Since the release of Kafka 0.8.2.1, a number of people have
  reported
   an
  issue with snappy compression (
  https://issues.apache.org/jira/browse/KAFKA-2189). Basically, if
   they
 use
  snappy in 0.8.2.1, they will experience a 2-3X space increase.
 The
issue
  has since been fixed in trunk (just a snappy jar upgrade). Since
   0.8.3
is
  still a few months away, it may make sense to do an 0.8.2.2
 release
just
 to
  fix this issue. Any objections?
 
  Thanks,
 
  Jun
 

   
  
  
  
   --
   Grant Henke
   Software Engineer | Cloudera
   gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
  
 




-- 
Jeff Holoman
Systems Engineer


[jira] [Updated] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-08-10 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1929:

Assignee: Grant Henke  (was: Jeff Holoman)

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Grant Henke
 Attachments: KAFKA-1929.patch


 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.





[jira] [Updated] (KAFKA-2064) Replace ConsumerMetadataRequest and Response with org.apache.kafka.common.requests objects

2015-07-04 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-2064:

Assignee: (was: Jeff Holoman)

 Replace ConsumerMetadataRequest and Response with  
 org.apache.kafka.common.requests objects
 ---

 Key: KAFKA-2064
 URL: https://issues.apache.org/jira/browse/KAFKA-2064
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
 Fix For: 0.8.3


 Replace ConsumerMetadataRequest and response with  
 org.apache.kafka.common.requests objects





Re: [DISCUSS] KIP-11- Authorization design for kafka security

2015-04-22 Thread Jeff Holoman
 PM, Jay Kreps
 jay.kr...@gmail.com wrote:
  Following up on the KIP discussion. Two options for authorizing
 consumers
  to read topic t as part of group g:
  1. READ permission on resource /topic/t
  2. READ permission on resource /topic/t AND WRITE permission on
 /group/g
 
  The advantage of (1) is that it is simpler. The disadvantage is that
 any
  member of any group that reads from t can commit offsets as any other
  member of a different group. This doesn't effect data security (who
 can
  access what) but it is a bit of a management issue--a malicious person
 can
  cause data loss or duplicates for another consumer by committing
 offset.
 
  I think I favor (2) but it's worth it to think it through.
 
  -Jay
 
  On Tue, Apr 21, 2015 at 2:43 PM, Parth Brahmbhatt 
   pbrahmbh...@hortonworks.com
 wrote:
 
  Hey Jun,
 
  Yes and we support wild cards for all acl entities principal, hosts
 and
  operation.
 
  Thanks
  Parth
 
  On 4/21/15, 9:06 AM, Jun Rao
  j...@confluent.io wrote:
 
  Harsha, Parth,
  
  Thanks for the clarification. This makes sense. Perhaps we can
 clarify the
  meaning of those rules in the wiki.
  
  Related to this, it seems that we need to support wildcard in
 cli/request
  protocol for topics?
  
  Jun
  
  On Mon, Apr 20, 2015 at 9:07 PM, Parth Brahmbhatt 
   pbrahmbh...@hortonworks.com
 wrote:
  
   The iptables on unix supports the DENY operator, not that it
 should
    matter. The deny operator can also be used to specify "allow user1
 to
  READ
    from topic1 from all hosts but host1,host2". Again we could add a
 host
   group semantic and extra complexity around that, not sure if its
 worth
  it.
   In addition with DENY operator you are now not forced to create a
  special
   group just to support the authorization use case. I am not
 convinced
  that
   the operator it self is really all that confusing. There are 3
 practical
   use cases:
   - Resource with no acl what so ever - allow access to everyone (
 just
  for
   backward compatibility, I would much rather fail close and force
 users
  to
   explicitly grant acls that allows access to all users.)
   - Resource with some acl attached - only users that have a
 matching
  allow
    acl are allowed (i.e. "allow READ access to topic1 to user1 from
 all
    hosts", only user1 has READ access and no other user has access of
 any
   kind)
   - Resource with some allow and some deny acl attached - users are
  allowed
   to perform operation only when they satisfy allow acl and do not
 have
   conflicting deny acl. Users that have no acl(allow or deny) will
 still
  not
    have any access. (i.e. "allow READ access to topic1 to user1 from
 all
    hosts except host1 and host2", only user1 has access but not from
 host1
   and
   host2)
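The three practical cases above can be sketched in a few lines. This is an illustrative model only (the Acl and authorize names are hypothetical, not the KIP-11 API), under the stated semantics: an empty ACL set means open access for backward compatibility, a matching DENY overrides any matching ALLOW, and principals with no matching ACL get no access:

```java
import java.util.List;

public class AclSketch {
    enum PermissionType { ALLOW, DENY }

    record Acl(String principal, String host, PermissionType type) {
        // "*" acts as the wildcard for both principal and host.
        boolean covers(String p, String h) {
            return (principal.equals("*") || principal.equals(p))
                && (host.equals("*") || host.equals(h));
        }
    }

    // No ACLs -> open (backward compatibility); a matching DENY overrides
    // any matching ALLOW; no matching ACL at all -> no access.
    static boolean authorize(List<Acl> acls, String principal, String host) {
        if (acls.isEmpty()) return true;
        boolean allowed = false;
        for (Acl acl : acls) {
            if (!acl.covers(principal, host)) continue;
            if (acl.type == PermissionType.DENY) return false;
            allowed = true;
        }
        return allowed;
    }

    public static void main(String[] args) {
        // "allow user1 to READ from topic1 from all hosts except host1"
        List<Acl> acls = List.of(
            new Acl("user1", "*", PermissionType.ALLOW),
            new Acl("user1", "host1", PermissionType.DENY));
        System.out.println(authorize(acls, "user1", "host9")); // true
        System.out.println(authorize(acls, "user1", "host1")); // false
        System.out.println(authorize(acls, "user2", "host9")); // false
    }
}
```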
  
   I think we need to make a decision on deny primarily because with
   introduction of acl management API, Acl is now a public class that
 will
  be
   used by Ranger/Santry and other authroization providers. In
 Current
  design
   the acl has a permissionType enum field with possible values of
 Allow
  and
   Deny. If we chose to remove deny we can assume all acls to be of
 allow
   type and remove the permissionType field completely.
  
   Thanks
   Parth
  
   On 4/20/15, 6:12 PM, Gwen Shapira
  gshap...@cloudera.com wrote:
  
    I think that's how it's done in pretty much any system I can think
 of.
   
  
  
 
 
 
 
 
 
 
 
 




-- 
Jeff Holoman
Systems Engineer


[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-04-21 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1810:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network, security
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch, 
 KAFKA-1810_2015-03-15_01:13:12.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 31590: Patch for KAFKA-1990

2015-04-19 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31590/
---

(Updated April 19, 2015, 5:41 p.m.)


Review request for kafka and Gwen Shapira.


Bugs: KAFKA-1990
https://issues.apache.org/jira/browse/KAFKA-1990


Repository: kafka


Description
---

KAFKA-1990


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogConfig.scala 
558c703f26da22b1a938bbbf8a6c4409a8e107fb 
  core/src/main/scala/kafka/log/LogManager.scala 
a7a9b85ba1b80c2b32afe6edbbc175f08238809c 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
69b772c1941865fbe15b34bb2784c511f8ce519a 
  core/src/test/scala/kafka/log/LogConfigTest.scala 
9690f141be75202973085025444b52208ebd5ab2 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
cfe38df577e3f179ebecad3f45429a15aa69e7b4 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
40c265aabae8e09b1d0958802ef7e2fc97689d11 

Diff: https://reviews.apache.org/r/31590/diff/


Testing
---

Updated for ConfigDef. 

Open question. Is there any reason that LogConfigTest is located in kafka/log 
rather than unit/kafka/log ?


Thanks,

Jeff Holoman



[jira] [Updated] (KAFKA-1990) Add unlimited time-based log retention

2015-04-19 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1990:

Attachment: KAFKA-1990_2015-04-19_13:41:23.patch

 Add unlimited time-based log retention
 --

 Key: KAFKA-1990
 URL: https://issues.apache.org/jira/browse/KAFKA-1990
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
  Labels: newbie
 Attachments: KAFKA-1990_2015-03-10_00:55:11.patch, 
 KAFKA-1990_2015-04-19_13:41:23.patch


 Currently you can set
   log.retention.bytes = -1
 to disable size based retention (in fact that is the default). However, there 
 is no equivalent for time based retention. You can set time based retention 
 to something really big like 2147483647 hours, which in practical terms is 
 forever, but it is kind of silly to require this.
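A sketch of what the symmetric configuration could look like once the improvement lands (a hypothetical server.properties fragment; the -1 sentinel for the time-based setting, whether via the ms, minutes, or hours variant, is the behavior this ticket proposes):

```properties
# Size-based retention disabled (existing behavior; also the default)
log.retention.bytes=-1
# Time-based retention disabled (what this ticket would add)
log.retention.ms=-1
```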





[jira] [Commented] (KAFKA-1990) Add unlimited time-based log retention

2015-04-19 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502017#comment-14502017
 ] 

Jeff Holoman commented on KAFKA-1990:
-

Updated reviewboard https://reviews.apache.org/r/31590/diff/
 against branch origin/trunk

 Add unlimited time-based log retention
 --

 Key: KAFKA-1990
 URL: https://issues.apache.org/jira/browse/KAFKA-1990
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
  Labels: newbie
 Attachments: KAFKA-1990_2015-03-10_00:55:11.patch, 
 KAFKA-1990_2015-04-19_13:41:23.patch


 Currently you can set
   log.retention.bytes = -1
 to disable size based retention (in fact that is the default). However, there 
 is no equivalent for time based retention. You can set time based retention 
 to something really big like 2147483647 hours, which in practical terms is 
 forever, but it is kind of silly to require this.





[jira] [Updated] (KAFKA-1990) Add unlimited time-based log retention

2015-04-19 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1990:

Status: Patch Available  (was: In Progress)

 Add unlimited time-based log retention
 --

 Key: KAFKA-1990
 URL: https://issues.apache.org/jira/browse/KAFKA-1990
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
  Labels: newbie
 Attachments: KAFKA-1990_2015-03-10_00:55:11.patch, 
 KAFKA-1990_2015-04-19_13:41:23.patch


 Currently you can set
   log.retention.bytes = -1
 to disable size based retention (in fact that is the default). However, there 
 is no equivalent for time based retention. You can set time based retention 
 to something really big like 2147483647 hours, which in practical terms is 
 forever, but it is kind of silly to require this.





[jira] [Assigned] (KAFKA-2064) Replace ConsumerMetadataRequest and Response with org.apache.kafka.common.requests objects

2015-04-01 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman reassigned KAFKA-2064:
---

Assignee: Jeff Holoman

 Replace ConsumerMetadataRequest and Response with  
 org.apache.kafka.common.requests objects
 ---

 Key: KAFKA-2064
 URL: https://issues.apache.org/jira/browse/KAFKA-2064
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Jeff Holoman
 Fix For: 0.8.3


 Replace ConsumerMetadataRequest and response with  
 org.apache.kafka.common.requests objects





[jira] [Commented] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-04-01 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391298#comment-14391298
 ] 

Jeff Holoman commented on KAFKA-1929:
-

[~jkreps] Did you have a chance to review this?

Thanks

Jeff

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Attachments: KAFKA-1929.patch


 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.





Re: [VOTE] KIP-7 Security - IP Filtering

2015-03-20 Thread Jeff Holoman
Parth,

I think it's important to understand the timing of both the initial JIRA
and the KIP, it helps put my comments in proper context.

The initial JIRA for this was created back in December, so the timeline for
1688/KIP-11 was pretty unclear. KIP-7 came out when we started doing KIPs,
back in Jan.  The initial comments I think pretty clearly acknowledged the
work on 1688.

My comment re: integration was that perhaps the check logic could be reused
(i.e, the CIDR range checks). That's what I meant when I mentioned the
intent. At that point there was no KIP-11 and no patch, so it was just a
hunch.

In terms of it being a different use case, I do think there are some
aspects which would be beneficial, even given the work on 1688.

1) Simplicity
2) Less config - 2 params
3) Allows for both whitelisting and blacklisting, giving a bit more
flexibility.

Another key driver, at least at the time, was timing. As this discussion
has been extended that becomes less of a motivator, which I understand.

I don't necessarily think that giving users options to limit access will
confuse them, given that the new interface for security will likely take a
bit of understanding to implement correctly. In fact they might be quite
complementary.

Lets take the 2nd example in KIP 11:

user2.hosts=*
user2.operation=read

So if I understand correctly, this would allow read access for a particular
topic from any host for a given user. It could be helpful to block a range
of IPs (like production or QA) but otherwise not specify every potential
host that needs to read from that topic.

As you mentioned, adding additional constructs makes the ACLs a bit more
complex; maybe this provides some relief there.

Jun, if folks feel like there's too much overlap, and this wouldn't be
useful, then that's fair. That's what the votes are for right? :)


Thanks

Jeff


On Fri, Mar 20, 2015 at 6:01 PM, Parth Brahmbhatt 
pbrahmbh...@hortonworks.com wrote:

 I am guessing in your last reply you meant KIP-11. And yes, I think KIP-11
 subsumed KIP-7 so if we can finish KIP-11 we should not need KIP-7 but I
 will let Jeff confirm that.

 Thanks
 Parth


 On 3/20/15, 2:32 PM, Jun Rao j...@confluent.io wrote:

 Right, if this KIP is subsumed by KIP-7, perhaps we just need to wait
 until
 KIP-7 is done? If we add the small change now, we will have to worry about
 migrating existing users and deprecating some configs when KIP-7 is done.
 
 Thanks,
 
 Jun
 
 On Fri, Mar 20, 2015 at 10:36 AM, Parth Brahmbhatt 
 pbrahmbh...@hortonworks.com wrote:
 
  I am not entirely sure what you mean by integrating KIP-7 work with
  KAFKA-1688. Wouldn't the work done as part of KIP-7 become obsolete once
  KAFKA-1688 is done? Multiple ways of controlling these authorization
 just
  seems extra configuration that will confuse admins/users.
 
  If timing is the only issue don't you think it's better to focus our
 energy
  on getting 1688 done faster which seem to be the longer term goal
 anyways?
 
  Thanks
  Parth
 
  On 3/20/15, 10:28 AM, Jeff Holoman jholo...@cloudera.com wrote:
 
  Hey Jun,
  
  The intent was for the same functionality to be utilized when 1688 is
  done,
  as mentioned in the KIP:
  
   The broader security initiative (KAFKA-1682) will add more robust
  controls for these types of environments, and this proposal could be
  integrated with that work at the appropriate time. This is also the
  specific request of a large financial services company.
  
  I don't think including the functionality now (as it's relatively
 simple)
  would preclude integration into 1688. At that point the implementation
 of
  the check might change, but as it's a broker config, there shouldn't be
  concerns about backward compatibility.
  
  Hope that helps
  
  Thanks
  
  Jeff
  
  On Fri, Mar 20, 2015 at 12:26 PM, Jun Rao j...@confluent.io wrote:
  
   Yes, we can discuss the implementation separately.
  
   As for the proposal itself, have you looked at KAFKA-1688? Could this
  just
   be a special case for authorization and be included there?
  
   Thanks,
  
   Jun
  
   On Wed, Mar 18, 2015 at 6:26 PM, Jeff Holoman jholo...@cloudera.com
 
   wrote:
  
One other thought. Does the timing of the implementation (or lack
   thereof)
affect the proposal? It seems like the question you are asking is
 an
implementation detail in terms of when the work would be done. If
  there
isn't really support for the KIP that's ok, just wanting to make
 sure
  we
are segmenting the vote for the KIP from concerns about
 implementation
timing.
   
Thanks!
   
Jeff
   
On Wed, Mar 18, 2015 at 9:22 PM, Jeff Holoman
 jholo...@cloudera.com
wrote:
   
 Hey Jun thanks for the comment.

 Is the plan to re-factor the SocketServer implementation
  significantly?
 The current check is just in the acceptor. Does this change with
 the
 refactor?

 Thanks

 Jeff





 On Wed, Mar 18, 2015 at 7:25 PM

Re: [VOTE] KIP-7 Security - IP Filtering

2015-03-20 Thread Jeff Holoman
Hey Jun,

The intent was for the same functionality to be utilized when 1688 is done,
as mentioned in the KIP:

The broader security initiative (KAFKA-1682) will add more robust
controls for these types of environments, and this proposal could be
integrated with that work at the appropriate time. This is also the
specific request of a large financial services company.

I don't think including the functionality now (as it's relatively simple)
would preclude integration into 1688. At that point the implementation of
the check might change, but as it's a broker config, there shouldn't be
concerns about backward compatibility.

Hope that helps

Thanks

Jeff

On Fri, Mar 20, 2015 at 12:26 PM, Jun Rao j...@confluent.io wrote:

 Yes, we can discuss the implementation separately.

 As for the proposal itself, have you looked at KAFKA-1688? Could this just
 be a special case for authorization and be included there?

 Thanks,

 Jun

 On Wed, Mar 18, 2015 at 6:26 PM, Jeff Holoman jholo...@cloudera.com
 wrote:

  One other thought. Does the timing of the implementation (or lack
 thereof)
  affect the proposal? It seems like the question you are asking is an
  implementation detail in terms of when the work would be done. If there
  isn't really support for the KIP that's ok, just wanting to make sure we
  are segmenting the vote for the KIP from concerns about implementation
  timing.
 
  Thanks!
 
  Jeff
 
  On Wed, Mar 18, 2015 at 9:22 PM, Jeff Holoman jholo...@cloudera.com
  wrote:
 
   Hey Jun thanks for the comment.
  
   Is the plan to re-factor the SocketServer implementation significantly?
   The current check is just in the acceptor. Does this change with the
   refactor?
  
   Thanks
  
   Jeff
  
  
  
  
  
   On Wed, Mar 18, 2015 at 7:25 PM, Jun Rao j...@confluent.io wrote:
  
   The proposal sounds reasonable. Timing wise, since we plan to refactor
  the
   network layer code in the broker, perhaps this can wait until
 KAFKA-1928
   is
   done?
  
   Thanks,
  
   Jun
  
   On Tue, Mar 17, 2015 at 6:56 AM, Jeff Holoman jholo...@cloudera.com
   wrote:
  
bump
   
On Tue, Mar 3, 2015 at 8:12 PM, Jeff Holoman jholo...@cloudera.com
 
wrote:
   
 Guozhang,

 The way the patch is implemented, the check is done in the
 acceptor
thread
 accept() method of the Socket Server, just before
 connectionQuotas.

 Thanks

 Jeff

 On Tue, Mar 3, 2015 at 7:59 PM, Guozhang Wang wangg...@gmail.com
 
wrote:

 Jeff,

 I am wondering if the IP filtering rule can be enforced at the
  socket
 server level instead of the Kafka API level?

 Guozhang

 On Tue, Mar 3, 2015 at 2:24 PM, Jiangjie Qin
   j...@linkedin.com.invalid

 wrote:

  +1 (non-binding)
 
  On 3/3/15, 1:17 PM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
  +1 (non-binding)
  
  On Tue, Mar 3, 2015 at 12:44 PM, Jeff Holoman 
   jholo...@cloudera.com

  wrote:
   Details in the wiki.
  
  
  
  
 

   
  
 
  https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering
  
  
  
   --
   Jeff Holoman
   Systems Engineer
 
 


 --
 -- Guozhang




 --
 Jeff Holoman
 Systems Engineer




   
   
--
Jeff Holoman
Systems Engineer
   
  
  
  
  
   --
   Jeff Holoman
   Systems Engineer
  
  
  
  
 
 
  --
  Jeff Holoman
  Systems Engineer
 




-- 
Jeff Holoman
Systems Engineer


Re: [VOTE] KIP-7 Security - IP Filtering

2015-03-17 Thread Jeff Holoman
bump

On Tue, Mar 3, 2015 at 8:12 PM, Jeff Holoman jholo...@cloudera.com wrote:

 Guozhang,

 The way the patch is implemented, the check is done in the acceptor thread
 accept() method of the Socket Server, just before connectionQuotas.

 Thanks

 Jeff

 On Tue, Mar 3, 2015 at 7:59 PM, Guozhang Wang wangg...@gmail.com wrote:

 Jeff,

 I am wondering if the IP filtering rule can be enforced at the socket
 server level instead of the Kafka API level?

 Guozhang

 On Tue, Mar 3, 2015 at 2:24 PM, Jiangjie Qin j...@linkedin.com.invalid
 wrote:

  +1 (non-binding)
 
  On 3/3/15, 1:17 PM, Gwen Shapira gshap...@cloudera.com wrote:
 
  +1 (non-binding)
  
  On Tue, Mar 3, 2015 at 12:44 PM, Jeff Holoman jholo...@cloudera.com
  wrote:
   Details in the wiki.
  
  
  
  
 
  https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering
  
  
  
   --
   Jeff Holoman
   Systems Engineer
 
 


 --
 -- Guozhang




 --
 Jeff Holoman
 Systems Engineer






-- 
Jeff Holoman
Systems Engineer


[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-03-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362217#comment-14362217
 ] 

Jeff Holoman commented on KAFKA-1810:
-

Updated reviewboard https://reviews.apache.org/r/29714/diff/
 against branch origin/trunk

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network, security
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch, 
 KAFKA-1810_2015-03-15_01:13:12.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 29714: Patch for KAFKA-1810

2015-03-14 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/
---

(Updated March 15, 2015, 5:13 a.m.)


Review request for kafka.


Bugs: KAFKA-1810
https://issues.apache.org/jira/browse/KAFKA-1810


Repository: kafka


Description (updated)
---

KAFKA-1810 ConfigDef Refactor


Diffs (updated)
-

  core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
  core/src/main/scala/kafka/network/SocketServer.scala 
76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
46d21c73f1feb3410751899380b35da0c37c975c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
dddef938fabae157ed8644536eb1a2f329fb42b7 
  core/src/test/scala/unit/kafka/network/IpFilterTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
0af23abf146d99e3d6cf31e5d6b95a9e63318ddb 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
191251d1340b5e5b2d649b37af3c6c1896d07e6e 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
7f47e6f9a74314ed9e9f19d0c97931f3f2e49259 

Diff: https://reviews.apache.org/r/29714/diff/


Testing
---

This code centers around a new class, CIDRRange in IPFilter.scala. The IPFilter 
class is created and holds two fields, the ruleType (allow|deny|none) and a 
list of CIDRRange objects. This is used in the Socket Server acceptor thread. 
The check does an exists on the value in the list if the rule type is allow or 
deny. On object creation, we pre-calculate the lower and upper range values and 
store those as a BigInt. The overhead of the check should be fairly minimal as 
it involves converting the incoming IP Address to a BigInt and then just doing 
a compare to the low/high values. In writing this review up I realized that I 
can optimize this further to convert to bigint first and move that conversion 
out of the range check, which I can address.

Testing covers the CIDRRange and IPFilter classes and validation of IPV6, IPV4, 
and configurations. Additionally the functionality is tested in 
SocketServerTest. Other changes are just to assist in configuration.

I modified the SocketServerTest to use a method for grabbing the Socket server 
to make the code a bit more concise.

One key point is that, if there is an error in configuration, we halt the 
startup of the broker. The thinking there is that if you screw up 
security-related configs, you want to know about it right away rather than 
silently accept connections. (thanks Joe Stein for the input).

There are two new exceptions related to this functionality, one to handle 
configuration errors, and one to handle blocking the request. Currently the 
level is set to INFO. Does it make sense to move this to WARN?
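The check described above can be sketched as follows (Python for illustration; the names mirror the patch's CIDRRange and IPFilter, but this is not the actual Scala implementation):

```python
import ipaddress

class CIDRRange:
    """Pre-computes the low/high integer bounds of one CIDR block, analogous
    to the BigInt pre-calculation described above."""
    def __init__(self, cidr):
        net = ipaddress.ip_network(cidr, strict=False)
        self.low = int(net.network_address)
        self.high = int(net.broadcast_address)
        self.version = net.version

    def contains(self, ip):
        addr = ipaddress.ip_address(ip)
        if addr.version != self.version:
            return False
        # only two integer comparisons per range at check time
        return self.low <= int(addr) <= self.high

class IPFilter:
    """rule_type is one of 'allow', 'deny', or 'none'."""
    def __init__(self, rule_type, cidrs):
        self.rule_type = rule_type
        self.ranges = [CIDRRange(c) for c in cidrs]

    def accept(self, ip):
        if self.rule_type == "none":
            return True
        # the "exists on the value in the list" check from the description
        matched = any(r.contains(ip) for r in self.ranges)
        return matched if self.rule_type == "allow" else not matched
```

In the patch this check runs in the SocketServer acceptor thread, so a rejected connection never reaches the request-handling layer.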


Thanks,

Jeff Holoman



[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-03-14 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1810:

Attachment: KAFKA-1810_2015-03-15_01:13:12.patch

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network, security
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch, 
 KAFKA-1810_2015-03-15_01:13:12.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 29714: Patch for KAFKA-1810

2015-03-14 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/
---

(Updated March 15, 2015, 5:26 a.m.)


Review request for kafka.


Bugs: KAFKA-1810
https://issues.apache.org/jira/browse/KAFKA-1810


Repository: kafka


Description
---

KAFKA-1810 ConfigDef Refactor


Diffs
-

  core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
  core/src/main/scala/kafka/network/SocketServer.scala 
76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
46d21c73f1feb3410751899380b35da0c37c975c 
  core/src/main/scala/kafka/server/KafkaServer.scala 
dddef938fabae157ed8644536eb1a2f329fb42b7 
  core/src/test/scala/unit/kafka/network/IpFilterTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
0af23abf146d99e3d6cf31e5d6b95a9e63318ddb 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
191251d1340b5e5b2d649b37af3c6c1896d07e6e 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
7f47e6f9a74314ed9e9f19d0c97931f3f2e49259 

Diff: https://reviews.apache.org/r/29714/diff/


Testing (updated)
---

This code centers around a new class, CIDRRange in IpFilter.scala. The IpFilter 
class is created and holds two fields, the ruleType (allow|deny|none) and a 
list of CIDRRange objects. This is used in the Socket Server acceptor thread. 
The check does an exists on the value in the list if the rule type is allow or 
deny. On object creation, we pre-calculate the lower and upper range values and 
store those as a BigInt. The overhead of the check should be fairly minimal as 
it involves converting the incoming IP Address to a BigInt and then just doing 
a compare to the low/high values.
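The optimization mentioned in the earlier revision (converting the incoming address to an integer once, outside the per-range comparison) could look like this sketch, with hypothetical helper names:

```python
import ipaddress

def precompute_bounds(cidrs):
    """Pre-calculate (low, high) integer bounds for each CIDR block once,
    at configuration time."""
    bounds = []
    for cidr in cidrs:
        net = ipaddress.ip_network(cidr, strict=False)
        bounds.append((int(net.network_address), int(net.broadcast_address)))
    return bounds

def in_ranges(ip, bounds):
    """Convert the incoming address a single time, then do only integer
    comparisons per range (IPv4/IPv6 version handling omitted for brevity)."""
    n = int(ipaddress.ip_address(ip))
    return any(low <= n <= high for low, high in bounds)
```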

Testing covers the CIDRRange and IPFilter classes and validation of IPV6, IPV4, 
and configurations. Additionally the functionality is tested in 
SocketServerTest. Other changes are just to assist in configuration.

I modified the SocketServerTest to use a method for grabbing the Socket server 
to make the code a bit more concise.

One key point is that, if there is an error in configuration, we halt the 
startup of the broker. The thinking there is that if you screw up 
security-related configs, you want to know about it right away rather than 
silently accept connections. (thanks Joe Stein for the input).

There is one new exception related to this functionality, to handle blocking 
the request. Currently the level is set to WARN.

More details are in KIP-7
https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering


Thanks,

Jeff Holoman



Re: Review Request 29714: Patch for KAFKA-1810

2015-03-14 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/#review76500
---



core/src/main/scala/kafka/server/KafkaConfig.scala
https://reviews.apache.org/r/29714/#comment124033

Need to delete


- Jeff Holoman


On March 15, 2015, 5:13 a.m., Jeff Holoman wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29714/
 ---
 
 (Updated March 15, 2015, 5:13 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1810
 https://issues.apache.org/jira/browse/KAFKA-1810
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1810 ConfigDef Refactor
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
   core/src/main/scala/kafka/network/SocketServer.scala 
 76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 46d21c73f1feb3410751899380b35da0c37c975c 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 dddef938fabae157ed8644536eb1a2f329fb42b7 
   core/src/test/scala/unit/kafka/network/IpFilterTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
 0af23abf146d99e3d6cf31e5d6b95a9e63318ddb 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 191251d1340b5e5b2d649b37af3c6c1896d07e6e 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 7f47e6f9a74314ed9e9f19d0c97931f3f2e49259 
 
 Diff: https://reviews.apache.org/r/29714/diff/
 
 
 Testing
 ---
 
 This code centers around a new class, CIDRRange in IPFilter.scala. The 
 IPFilter class is created and holds two fields, the ruleType 
 (allow|deny|none) and a list of CIDRRange objects. This is used in the Socket 
 Server acceptor thread. The check does an exists on the value in the list if 
 the rule type is allow or deny. On object creation, we pre-calculate the 
 lower and upper range values and store those as a BigInt. The overhead of the 
 check should be fairly minimal as it involves converting the incoming IP 
 Address to a BigInt and then just doing a compare to the low/high values. In 
 writing this review up I realized that I can optimize this further to convert 
 to bigint first and move that conversion out of the range check, which I can 
 address.
 
 Testing covers the CIDRRange and IPFilter classes and validation of IPV6, 
 IPV4, and configurations. Additionally the functionality is tested in 
 SocketServerTest. Other changes are just to assist in configuration.
 
 I modified the SocketServerTest to use a method for grabbing the Socket 
 server to make the code a bit more concise.
 
 One key point is that, if there is an error in configuration, we halt the 
 startup of the broker. The thinking there is that if you screw up 
 security-related configs, you want to know about it right away rather than 
 silently accept connections. (thanks Joe Stein for the input).
 
 There are two new exceptions related to this functionality, one to handle 
 configuration errors, and one to handle blocking the request. Currently the 
 level is set to INFO. Does it make sense to move this to WARN?
 
 
 Thanks,
 
 Jeff Holoman
 




[jira] [Commented] (KAFKA-1990) Add unlimited time-based log retention

2015-03-09 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14354325#comment-14354325
 ] 

Jeff Holoman commented on KAFKA-1990:
-

Updated reviewboard https://reviews.apache.org/r/31590/diff/
 against branch origin/trunk

 Add unlimited time-based log retention
 --

 Key: KAFKA-1990
 URL: https://issues.apache.org/jira/browse/KAFKA-1990
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
  Labels: newbie
 Attachments: KAFKA-1990_2015-03-10_00:55:11.patch


 Currently you can set
   log.retention.bytes = -1
 to disable size based retention (in fact that is the default). However, there 
 is no equivalent for time based retention. You can set time based retention 
 to something really big like 2147483647 hours, which in practical terms is 
 forever, but it is kind of silly to require this.





Re: Review Request 31590: Patch for KAFKA-1990

2015-03-09 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31590/
---

(Updated March 10, 2015, 4:58 a.m.)


Review request for kafka and Gwen Shapira.


Bugs: KAFKA-1990
https://issues.apache.org/jira/browse/KAFKA-1990


Repository: kafka


Description
---

KAFKA-1990


Diffs
-

  core/src/main/scala/kafka/log/LogConfig.scala 
8b67aee3a37765178b30d79e9e7bb882bdc89c29 
  core/src/main/scala/kafka/log/LogManager.scala 
47d250af62c1aa53d11204a332d0684fb4217c8d 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
48e33626695ad8a28b0018362ac225f11df94973 
  core/src/test/scala/kafka/log/LogConfigTest.scala 
9690f141be75202973085025444b52208ebd5ab2 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
ee0b21e6a94ad79c11dd08f6e5adf98c333e2ec9 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
7f47e6f9a74314ed9e9f19d0c97931f3f2e49259 

Diff: https://reviews.apache.org/r/31590/diff/


Testing (updated)
---

Updated for ConfigDef. 

Open question. Is there any reason that LogConfigTest is located in kafka/log 
rather than unit/kafka/log?


Thanks,

Jeff Holoman



Re: Review Request 31590: Patch for KAFKA-1990

2015-03-09 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31590/
---

(Updated March 10, 2015, 4:55 a.m.)


Review request for kafka and Gwen Shapira.


Summary (updated)
-

Patch for KAFKA-1990


Bugs: KAFKA-1990
https://issues.apache.org/jira/browse/KAFKA-1990


Repository: kafka


Description (updated)
---

KAFKA-1990


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogConfig.scala 
8b67aee3a37765178b30d79e9e7bb882bdc89c29 
  core/src/main/scala/kafka/log/LogManager.scala 
47d250af62c1aa53d11204a332d0684fb4217c8d 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
48e33626695ad8a28b0018362ac225f11df94973 
  core/src/test/scala/kafka/log/LogConfigTest.scala 
9690f141be75202973085025444b52208ebd5ab2 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
ee0b21e6a94ad79c11dd08f6e5adf98c333e2ec9 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
7f47e6f9a74314ed9e9f19d0c97931f3f2e49259 

Diff: https://reviews.apache.org/r/31590/diff/


Testing
---


Thanks,

Jeff Holoman



[VOTE] KIP-7 Security - IP Filtering

2015-03-03 Thread Jeff Holoman
Details in the wiki.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering



-- 
Jeff Holoman
Systems Engineer


Re: [VOTE] KIP-2 - Refactor brokers to allow listening on multiple ports and IPs

2015-03-03 Thread Jeff Holoman
+1 non-binding of course

On Tue, Mar 3, 2015 at 3:18 PM, Joe Stein joe.st...@stealth.ly wrote:

 +1

 ~ Joe Stein
 - - - - - - - - - - - - - - - - -

   http://www.stealth.ly
 - - - - - - - - - - - - - - - - -

 On Tue, Mar 3, 2015 at 3:14 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

  Details are in the wiki:
 
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs
 




-- 
Jeff Holoman
Systems Engineer


Re: [VOTE] KIP-7 Security - IP Filtering

2015-03-03 Thread Jeff Holoman
Guozhang,

The way the patch is implemented, the check is done in the acceptor thread
accept() method of the Socket Server, just before connectionQuotas.

Thanks

Jeff

On Tue, Mar 3, 2015 at 7:59 PM, Guozhang Wang wangg...@gmail.com wrote:

 Jeff,

 I am wondering if the IP filtering rule can be enforced at the socket
 server level instead of the Kafka API level?

 Guozhang

 On Tue, Mar 3, 2015 at 2:24 PM, Jiangjie Qin j...@linkedin.com.invalid
 wrote:

  +1 (non-binding)
 
  On 3/3/15, 1:17 PM, Gwen Shapira gshap...@cloudera.com wrote:
 
  +1 (non-binding)
  
  On Tue, Mar 3, 2015 at 12:44 PM, Jeff Holoman jholo...@cloudera.com
  wrote:
   Details in the wiki.
  
  
  
  
 
  https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering
  
  
  
   --
   Jeff Holoman
   Systems Engineer
 
 


 --
 -- Guozhang




-- 
Jeff Holoman
Systems Engineer


Re: [KIP-DISCUSSION] KIP-7 Security - IP Filtering

2015-03-03 Thread Jeff Holoman
Hey Joel, good questions

As a first thought, my experience with customers in large corporate
environments probably has me somewhat jaded :). You know it really
shouldn't take 3 weeks to get ports opened on a load balancer, but, that
really does happen. Coordination across those teams also can and should /
does happen but I've noted that operators appreciate measures they can take
that keep them out of more internal process.

1) Yes probably. After all we're really just checking what's returned from
InetAddress and trusting that. The check is pretty lightweight. I think
what you are getting at is that a security check that doesn't go all the
way can be bad as it can engender a false sense of security and end
up leaving the system more vulnerable to attack than if other, more
standard, approaches are taken. This is a fair point. I'm not deep enough
into network security to comment all that intelligently but I do think that
reducing the exposure to say, IP spoofing on internal traffic vs
free-for-all data consumption is a step in the right direction.

2) Yes they may have access to this, and it could be redundant. On
customers that I interface with, operators typically get their root-level
privileges through something like PowerBroker, so access to IPTables is not
a given, and even if it's available typically does not fall within their
realm of accepted responsibilities. Additionally, when I first got this
request I suggested IPTables and was told that due to the difficulties and
complexities of configuration and management (from their perspective) that
it would not be an acceptable solution (also the "it's not in the corporate
standard" line)

I noted in the KIP that I look at this not only as a potential security
measure by reducing attack vector size but also as a guard against human
error. Hardcoded configs sometimes make their way all the way to production
and this would help to limit that.

You could argue that it might not be Kafka's responsibility to enforce this
type of control, but there is precedent here with HDFS (dfs.hosts and
dfs.hosts.exclude) and Flume (https://issues.apache.org/jira/browse/FLUME-2189).

In short, I don't think that this supplants more robust security
functionality but I do think it gives an additional (lightweight) control
which would be useful. Security is about defense in depth, and this raises
the bar a tad.

Thanks

Jeff

On Tue, Mar 3, 2015 at 8:58 PM, Joel Koshy jjkosh...@gmail.com wrote:

 The proposal itself looks reasonable, but I have a couple of
 questions as you made reference to operators of the system; and
 network team in your wiki.

 - Are spoofing attacks a concern even with this in place? If so, it
   would require some sort of internal ingress filtering which
   presumably need cooperation with network teams right?
 - Also, the operators of the (Kafka) system really should have access
   to iptables on the Kafka brokers so wouldn't this feature be
   effectively redundant?

 Thanks,

 Joel

 On Thu, Jan 22, 2015 at 01:50:41PM -0500, Joe Stein wrote:
  Hey Jeff, thanks for the patch and writing this up.
 
  I think the approach to explicitly deny and then set what is allowed or
  explicitly allow then deny specifics makes sense. Supporting CIDR
 notation
  and ip4 and ip6 both good too.
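
A toy sketch of the two evaluation orders Joe describes (exact-match only for
brevity; the real proposal matches CIDR ranges, and none of these names come
from the patch):

```java
import java.util.List;

public class IpPolicy {
    enum Mode { DENY_THEN_ALLOW, ALLOW_THEN_DENY }

    // DENY_THEN_ALLOW: default-deny, the list names what is allowed.
    // ALLOW_THEN_DENY: default-allow, the list names what is denied.
    static boolean permitted(Mode mode, List<String> list, String ip) {
        boolean listed = list.contains(ip);   // real code would match CIDR ranges
        return mode == Mode.DENY_THEN_ALLOW ? listed : !listed;
    }

    public static void main(String[] args) {
        List<String> rules = List.of("10.0.0.5");
        // On the allow list, so admitted under default-deny:
        System.out.println(permitted(Mode.DENY_THEN_ALLOW, rules, "10.0.0.5"));
        // On the deny list, so rejected under default-allow:
        System.out.println(permitted(Mode.ALLOW_THEN_DENY, rules, "10.0.0.5"));
    }
}
```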
 
  Waiting for KAFKA-1845 to get committed I think makes sense before
  reworking this anymore right now, yes. Andrii posted a patch yesterday
 for
  it so hopefully in the next ~ week(s).
 
  Not sure what other folks think of this approach but whatever that is
 would
  be good to have it in prior to reworking for the config def changes.
 
  /***
   Joe Stein
   Founder, Principal Consultant
   Big Data Open Source Security LLC
   http://www.stealth.ly
   Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
  /
 
  On Wed, Jan 21, 2015 at 8:47 PM, Jeff Holoman jholo...@cloudera.com
 wrote:
 
   Posted a KIP for IP Filtering:
  
  
  
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering
  
   Relevant JIRA:
   https://issues.apache.org/jira/browse/KAFKA-1810
  
   Appreciate any feedback.
  
   Thanks
  
   Jeff
  




-- 
Jeff Holoman
Systems Engineer


[jira] [Assigned] (KAFKA-1990) Add unlimited time-based log retention

2015-03-01 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman reassigned KAFKA-1990:
---

Assignee: Jeff Holoman

 Add unlimited time-based log retention
 --

 Key: KAFKA-1990
 URL: https://issues.apache.org/jira/browse/KAFKA-1990
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
  Labels: newbie

 Currently you can set
   log.retention.bytes = -1
 to disable size based retention (in fact that is the default). However, there 
 is no equivalent for time based retention. You can set time based retention 
 to something really big like 2147483647 hours, which in practical terms is 
 forever, but it is kind of silly to require this.
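
In server.properties terms, the ticket asks for the time-based analogue of the
existing size-based switch; the second key below shows the proposed behavior
and is hypothetical at the time of this ticket:

```
# disable size-based retention (supported today)
log.retention.bytes=-1
# proposed: disable time-based retention the same way (not yet implemented)
log.retention.hours=-1
```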



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31591: Patch for KAFKA-1992

2015-03-01 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31591/#review74699
---


Looks good to me.

- Jeff Holoman


On March 1, 2015, 7:58 a.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31591/
 ---
 
 (Updated March 1, 2015, 7:58 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1992
 https://issues.apache.org/jira/browse/KAFKA-1992
 
 
 Repository: kafka
 
 
 Description
 ---
 
 remove unnecessary requiredAcks parameter and clean up few comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/cluster/Partition.scala 
 c4bf48a801007ebe7497077d2018d6dffe1677d4 
   core/src/main/scala/kafka/server/DelayedProduce.scala 
 4d763bf05efb24a394662721292fc54d32467969 
 
 Diff: https://reviews.apache.org/r/31591/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
 




[jira] [Updated] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-02-25 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1929:

Attachment: KAFKA-1929.patch

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Attachments: KAFKA-1929.patch


 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-02-25 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1929:

Status: Patch Available  (was: Open)

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Attachments: KAFKA-1929.patch


 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 31460: Patch for KAFKA-1929

2015-02-25 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31460/
---

Review request for kafka.


Bugs: KAFKA-1929
https://issues.apache.org/jira/browse/KAFKA-1929


Repository: kafka


Description
---

First Pass


Diffs
-

  
clients/src/main/java/org/apache/kafka/common/errors/LeaderNotAvailableException.java
 9d7ebd47a8439f38104ba62754227d4189418f62 
  
clients/src/main/java/org/apache/kafka/common/errors/OffsetMetadataTooLarge.java
 0be2f500685b09822aac3ccc9404bfacbbb34d17 
  clients/src/main/java/org/apache/kafka/common/errors/RetriableException.java 
6c639a972d7e4700fdbdcb32333832cbe8f991f3 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
a8deac4ce5149129d0a6f44c0526af9d55649a36 
  core/src/main/scala/kafka/cluster/Partition.scala 
e6ad8be5e33b6fb31c078ad78f8de709869ddc04 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
eedc2f5f21dd8755fba891998456351622e17047 
  core/src/main/scala/kafka/common/InvalidTopicException.scala 
59f887490d4172d7e89450a487dcbfabee73cb81 
  core/src/main/scala/kafka/common/NotEnoughReplicasAfterAppendException.scala 
c4f9def6162e9e25b273ca00b0974a4096cca041 
  core/src/main/scala/kafka/common/NotEnoughReplicasException.scala 
bfbe0ee4a5a15df69a94e7d7792bd11707787d92 
  core/src/main/scala/kafka/common/NotLeaderForPartitionException.scala 
b4558f89f0a23036c653aaba7f864c87fe952ae8 
  core/src/main/scala/kafka/common/OffsetMetadataTooLargeException.scala 
50edb273b3a799f8a8077a15329afc4ceda4abfb 
  core/src/main/scala/kafka/common/OffsetOutOfRangeException.scala 
0a2514cc0d99f05edf3b23dd79604708042e66ee 
  core/src/main/scala/kafka/common/Topic.scala 
ad759786d1c22f67c47808c0b8f227eb2b1a9aa8 
  core/src/main/scala/kafka/log/Log.scala 
846023bb98d0fa0603016466360c97071ac935ea 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
dd602ee2e65c2cd4ec363c75fa5d0b3c038b1ed2 
  core/src/main/scala/kafka/server/KafkaApis.scala 
6ee7d8819a9ef923f3a65c865a0a3d8ded8845f0 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
fb948b9ab28c516e81dab14dcbe211dcd99842b6 
  core/src/test/scala/other/kafka/StressTestLog.scala 
e19b8b28383554512a0c1f651d6764650d8db9c4 
  core/src/test/scala/unit/kafka/common/TopicTest.scala 
0fb25880c24adef906cd06359b624e7c8eb94ca6 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
aeb7a19acaefabcc161c2ee6144a56d9a8999a81 
  core/src/test/scala/unit/kafka/log/LogManagerTest.scala 
90cd53033fafaf952ba0b3f1e28b0e1f1ad3ea76 
  core/src/test/scala/unit/kafka/log/LogTest.scala 
c2dd8eb69da8c0982a0dd20231c6f8bd58eb623e 

Diff: https://reviews.apache.org/r/31460/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Commented] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-02-25 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14337828#comment-14337828
 ] 

Jeff Holoman commented on KAFKA-1929:
-

Created reviewboard https://reviews.apache.org/r/31460/diff/
 against branch origin/trunk

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Attachments: KAFKA-1929.patch


 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations

2015-02-24 Thread Jeff Holoman
   JVM client api for these operations. Currently people rely on
   AdminUtils which is totally sketchy. I think we probably need another
   client under clients/ that exposes administrative functionality. We
   will need this just to properly test the new apis, I suspect. We
   should figure out that API.

   11. The other information that would be really useful to get would be
   information about partitions--how much data is in the partition, what
   are the segment offsets, what is the log-end offset (i.e. last
   offset), what is the compaction point, etc. I think that done right
   this would be the successor to the very awkward OffsetRequest we have
   today.

   -Jay

   On Wed, Jan 21, 2015 at 10:27 PM, Joe Stein joe.st...@stealth.ly wrote:

    Hi, created a KIP
    https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations

    JIRA
    https://issues.apache.org/jira/browse/KAFKA-1694

    /***
     Joe Stein
     Founder, Principal Consultant
     Big Data Open Source Security LLC
     http://www.stealth.ly
     Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
    /

   --
   -- Guozhang



-- 
Jeff Holoman
Systems Engineer


[jira] [Commented] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-02-19 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327974#comment-14327974
 ] 

Jeff Holoman commented on KAFKA-1929:
-

Here's the list of duplicated errors

InvalidTopicException
LeaderNotAvailable - Used in scala producer
NotEnoughReplicasException
NotEnoughReplicasAfterAppendException
NotLeaderForPartitionException
OffsetMetadataTooLarge (Renaming to OffsetMetadataTooLargeException)
OffsetOutOfRangeException
UnknownTopicOrPartitionException - Also in scala producer

In most cases removing these from core and replacing the ErrorMapping with the 
error from o.a.k is an easy fix, the only real difference being that the errors 
in o.a.k present a different exception hierarchy.

All errors Extend RuntimeException - KafkaException:

OffsetMetadataTooLarge |  12
NotEnoughReplicasException | 19
ApiException - InvalidTopicException | 17
ApiException - NotEnoughReplicasAfterAppendException | 20
ApiException - RetriableException - OffsetOutOfRange | 1 
ApiException - RetriableException - UnknownTopicOrPartitionException | 3
ApiException - RetriableException - InvalidMetadataException - 
LeaderNotAvailableException | 5
ApiException - RetriableException - InvalidMetadataException - 
NotLeaderForPartitionException | 6
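
To illustrate why the hierarchy matters, here is a self-contained sketch using
stand-in classes that mirror the tree above (the real classes live in
org.apache.kafka.common.errors; these bodies are illustrative only) — a caller
can handle every retriable error with one catch:

```java
// Illustrative stand-ins mirroring the hierarchy listed above.
class KafkaException extends RuntimeException { KafkaException(String m) { super(m); } }
class ApiException extends KafkaException { ApiException(String m) { super(m); } }
class RetriableException extends ApiException { RetriableException(String m) { super(m); } }
class InvalidMetadataException extends RetriableException { InvalidMetadataException(String m) { super(m); } }
class NotLeaderForPartitionException extends InvalidMetadataException {
    NotLeaderForPartitionException(String m) { super(m); }
}

public class HierarchyDemo {
    public static void main(String[] args) {
        try {
            throw new NotLeaderForPartitionException("leader moved");
        } catch (RetriableException e) {
            // a client can refresh metadata and retry on any retriable error
            System.out.println("retriable: " + e.getMessage());
        }
    }
}
```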

[~jkreps] you mentioned leaving the Scala clients as is, how do you want to 
handle UnknownTopicOrPartitionException
and LeaderNotAvailable which are in a number of different places in core?

Additionally, I noticed that OFFSET_LOAD_IN_PROGRESS,
CONSUMER_COORDINATOR_NOT_AVAILABLE, and NOT_COORDINATOR_FOR_CONSUMER (which
map to 14, 15, and 16 in the protocol) are not mapped in Errors.java to named
exceptions like their counterparts; they are instead implemented as plain
ApiException. Is it worth implementing classes for these for consistency?

Thanks

Jeff

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman

 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1929) Convert core kafka module to use the errors in org.apache.kafka.common.errors

2015-02-08 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman reassigned KAFKA-1929:
---

Assignee: Jeff Holoman

 Convert core kafka module to use the errors in org.apache.kafka.common.errors
 -

 Key: KAFKA-1929
 URL: https://issues.apache.org/jira/browse/KAFKA-1929
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Jeff Holoman

 With the introduction of the common package there are now a lot of errors 
 duplicated in both the common package and in the server. We should refactor 
 the server code (but not the scala clients) to switch over to the exceptions 
 in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] ConfigDec Broker Changes on Trunk

2015-02-06 Thread Jeff Holoman
I think this is a good change. Is there general agreement that we are
moving forward with this approach? It would be nice to start using this for
future work.

Thanks

Jeff

On Tue, Feb 3, 2015 at 9:34 AM, Joe Stein joe.st...@stealth.ly wrote:

 I updated the RB changing some of the HIGH to MEDIUM and LOW.

 There might be other or different opinions and they may change over time so
 I don't really see h/m/l as a blocker to the patch going in.

 It would be great to take all the rb feedback from today and then tomorrow
 rebase and include changes for a new patch.

 Then over the next day or two review, test and commit to trunk (or re-work
 if necessary).

 /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /

 On Tue, Feb 3, 2015 at 4:56 AM, Andrii Biletskyi 
 andrii.bilets...@stealth.ly wrote:

  It'd be great to have it on trunk.
  As I mentioned under jira ticket (KAFKA-1845) current implementation
 lacks
  correct Importance settings.
  I'd be grateful if somebody could help me with it (a simple mapping
 between
  config setting and importance or comments right in the review board would
  suffice).
 
  Thanks,
  Andrii Biletskyi
 
  On Mon, Feb 2, 2015 at 11:38 PM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
   Strong +1 from me (obviously). Lots of good reasons to do it:
   consistency, code reuse, better validations, etc, etc.
  
   I had one comment on the patch in RB, but it can also be refactored as
   follow up JIRA to avoid blocking everyone who is waiting on this.
  
   Gwen
  
   On Mon, Feb 2, 2015 at 1:31 PM, Joe Stein joe.st...@stealth.ly
 wrote:
Hey, I wanted to start a quick convo around some changes on trunk.
 Not
   sure
this requires a KIP since it is kind of internal and shouldn't affect
   users
but we can decide if so and link this thread to that KIP if so (and
  keep
the discussion going on the thread if makes sense).
   
Before making any other broker changes I wanted to see what folks
  thought
about https://issues.apache.org/jira/browse/KAFKA-1845 ConfigDec
  patch.
   
I agree it will be nice to standardize and use one configuration and
validation library across the board. It helps in a lot of different
   changes
we have been discussing also in 0.8.3 and think we should make sure
 it
  is
what we want if so then: review, commit and keep going.
   
Thoughts?
   
/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/
  
 




-- 
Jeff Holoman
Systems Engineer


Re: Review Request 30126: Patch for KAFKA-1845

2015-02-03 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30126/#review70748
---



core/src/main/scala/kafka/server/KafkaConfig.scala
https://reviews.apache.org/r/30126/#comment116129

Is this the right way to do this, rather than having these in Validators?


- Jeff Holoman


On Jan. 21, 2015, 5:49 p.m., Andrii Biletskyi wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30126/
 ---
 
 (Updated Jan. 21, 2015, 5:49 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1845
 https://issues.apache.org/jira/browse/KAFKA-1845
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1845 - Fixed merge conflicts, ported added configs to KafkaConfig
 
 
 KAFKA-1845 - KafkaConfig to ConfigDef: moved validateValues so it's called on 
 instantiating KafkaConfig
 
 
 KAFKA-1845 - KafkaConfig to ConfigDef: MaxConnectionsPerIpOverrides refactored
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
   core/src/main/scala/kafka/Kafka.scala 
 77a49e12af6f869e63230162e9f87a7b0b12b610 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
 4a31c7271c2d0a4b9e8b28be729340ecfa0696e5 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 6d74983472249eac808d361344c58cc2858ec971 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 89200da30a04943f0b9befe84ab17e62b747c8c4 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 6879e730282185bda3d6bc3659cb15af0672cecf 
   core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
 e63558889272bc76551accdfd554bdafde2e0dd6 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 90c0b7a19c7af8e5416e4bdba62b9824f1abd5ab 
   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
 b15237b76def3b234924280fa3fdb25dbb0cc0dc 
   core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
 1bf2667f47853585bc33ffb3e28256ec5f24ae84 
   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
 e28979827110dfbbb92fe5b152e7f1cc973de400 
   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
 33c27678bf8ae8feebcbcdaa4b90a1963157b4a5 
   core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
 c0355cc0135c6af2e346b4715659353a31723b86 
   
 core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
 a17e8532c44aadf84b8da3a57bcc797a848b5020 
   core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala 
 95303e098d40cd790fb370e9b5a47d20860a6da3 
   core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
 25845abbcad2e79f56f729e59239b738d3ddbc9d 
   core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
 a5386a03b62956bc440b40783247c8cdf7432315 
   core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
 eab4b5f619015af42e4554660eafb5208e72ea33 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 35dc071b1056e775326981573c9618d8046e601d 
   core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
 ba3bcdcd1de9843e75e5395dff2fc31b39a5a9d5 
   
 core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
  d6248b09bb0f86ee7d3bd0ebce5b99135491453b 
   core/src/test/scala/unit/kafka/log/LogTest.scala 
 c2dd8eb69da8c0982a0dd20231c6f8bd58eb623e 
   core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
 4ea0489c9fd36983fe190491a086b39413f3a9cd 
   core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
 3cf23b3d6d4460535b90cfb36281714788fc681c 
   core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
 1db6ac329f7b54e600802c8a623f80d159d4e69b 
   core/src/test/scala/unit/kafka/producer/ProducerTest.scala 
 ce65dab4910d9182e6774f6ef1a7f45561ec0c23 
   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
 d60d8e0f49443f4dc8bc2cad6e2f951eda28f5cb 
   core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala 
 f0c4a56b61b4f081cf4bee799c6e9c523ff45e19 
   core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
 ad121169a5e80ebe1d311b95b219841ed69388e2 
   core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 
 8913fc1d59f717c6b3ed12c8362080fb5698986b 
   core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala 
 a703d2715048c5602635127451593903f8d20576 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 82dce80d553957d8b5776a9e140c346d4e07f766 
   core/src/test/scala/unit/kafka/server

Re: Review Request 30403: Patch for KAFKA-1906

2015-02-01 Thread Jeff Holoman
Yep I totally get that. But...you already have to modify the configs
anyway. The primary goals of default values are basically two-fold

1) Get started quickly
2) Provide reasonable values for stuff people don't really know about.

I guess I would just rather see people take the time (30s to 1min maybe?)
to consider where they want their storage location for Kafka to be. I don't
think that's an unreasonable expectation, and it will save people like me from
having to help move data when they realize the root filesystem (/) has filled up.

At any rate, I think what's been suggested here is better than /tmp

Thanks

Jeff

On Sat, Jan 31, 2015 at 10:27 AM, Jaikiran Pai jai.forums2...@gmail.com
wrote:

  Hi Jeff,

 I guess you are :)

 Personally, whenever I download and try a new project *in development
 environment* I always just want to get it up and running without having to
 fiddle with configurations. Of course, I do a bit of reading the docs,
 before trying it out, but I like to have the install and run to be
 straightforward without having to change/add configurations. Having
 sensible defaults helps in development environments and in getting started.
 IMO, this param belongs to that category.

 -Jaikiran


 On Thursday 29 January 2015 08:00 PM, Jeff Holoman wrote:

 Maybe I'm in the minority here, but I actually don't think there should be
 a default for this param and you should be required to explicitly set this.

 On Thu, Jan 29, 2015 at 5:43 AM, Jay Kreps boredandr...@gmail.com wrote:



  On Jan. 29, 2015, 6:50 a.m., Gwen Shapira wrote:
   We added --override option to KafkaServer that allows overriding
 default configuration from commandline.
   I believe that just changing the shell script to include --override
 log.dir=${KAFKA_HOME}/data
   may be enough?
  
   overriding configuration from server.properties in code can be very
 unintuitive.
 
  Jaikiran Pai wrote:
  That sounds like a good idea. I wasn't aware of the --override option.
 I'll give that a try and if it works then the change will be limited to
 just the scripts.
 
  Jaikiran Pai wrote:
  Hi Gwen,
 
  I had a look at the JIRA
 https://issues.apache.org/jira/browse/KAFKA-1654 which added support for
 --override and also the purpose of that option. From what I see it won't be
 useful in this case, because in this current task, we don't want to
 override a value that has been explicitly set (via server.properties for
 example). Instead we want to handle a case where no explicit value is
 specified for the data log directory and in such cases default it to a path
 which resides under the Kafka install directory.
 
  If we use the --override option in our (startup) scripts to set
 log.dir=${KAFKA_HOME}/data, we will end up forcing this value as the
 log.dir even when the user has intentionally specified a different path for
 the data logs.
 
  Let me know if I misunderstood your suggestion.
 
  Jay Kreps wrote:
  I think you are right that --override won't work but maybe this is
 still a good suggestion?
 
  Something seems odd about force overriding the working directory of
 the process just to set the log directory.
 
  Another option might be to add --default. This would work like
 --override but would provide a default value only if none is specified. I
 think this might be possible because the java.util.Properties we use for
 config supports a hierarchy of defaults. E.g. you can say new
 Properties(defaultProperties). Not sure if this is better or worse.
 
  Thoughts?
 
  Jaikiran Pai wrote:
  Hi Jay,
 
   Another option might be to add --default. This would work like
 --override but would provide a default value only if none is specified. I
 think this might be possible because the java.util.Properties we use for
 config supports a hierarchy of defaults. E.g. you can say new
 Properties(defaultProperties). Not sure if this is better or worse.
 
  I think --default sounds like a good idea which could help us use
 it for other properties too (if we need to). It does look better than the
 current change that I have done, because the Java code then doesn't have to
 worry about how that default value is sourced. We can then just update the
 scripts to set up the default for the log.dir appropriately.
 
  I can work towards adding support for it and will update this patch
 once it's ready.
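
 The java.util.Properties defaults hierarchy Jay refers to behaves like this
 (the key name and paths below are purely illustrative):

```java
import java.util.Properties;

public class PropsDefaults {
    public static void main(String[] args) {
        // Defaults are consulted only when the main table has no entry.
        Properties defaults = new Properties();
        defaults.setProperty("log.dir", "/opt/kafka/data");  // illustrative path

        Properties config = new Properties(defaults);
        System.out.println(config.getProperty("log.dir"));   // falls through to default

        config.setProperty("log.dir", "/var/kafka-logs");    // explicit value wins
        System.out.println(config.getProperty("log.dir"));
    }
}
```

 So a --default flag could seed the defaults table, while server.properties
 and --override populate the main table on top of it.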
 
  As for:
 
   Something seems odd about force overriding the working directory
 of the process just to set the log directory.
 
  Sorry, I don't understand what you meant there. Is it something
 about the change that was done to the scripts?

 I guess what I mean is: is there any other reason you might care about
 the working directory of the process? If so we probably don't want to force
 it to be the Kafka directory. If not it may actually be fine and in that
 case I think having relative paths work is nice. I don't personally know
 the answer to this, what is good practice for a server process?


 - Jay

Re: Review Request 28769: Patch for KAFKA-1809

2015-01-29 Thread Jeff Holoman


 On Jan. 23, 2015, 1:57 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaConfig.scala, lines 182-183
  https://reviews.apache.org/r/28769/diff/12/?file=820431#file820431line182
 
  Since this is also used for communication btw the controller and the 
  brokers, perhaps it's better named as sth like 
  intra.broker.security.protocol?

Maybe it makes sense to prepend all security-related configs with security.,
e.g. security.intra.broker.protocol, security.new.param.for.future.jira. With
all of the upcoming changes it would make security-related configs easy to spot.


- Jeff


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review69281
---


On Jan. 28, 2015, 6:26 p.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28769/
 ---
 
 (Updated Jan. 28, 2015, 6:26 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1809
 https://issues.apache.org/jira/browse/KAFKA-1809
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1890 Fix bug preventing Mirror Maker from successful rebalance; 
 reviewed by Gwen Shapira and Neha Narkhede
 
 
 first commit of refactoring.
 
 
 changed topicmetadata to include brokerendpoints and fixed few unit tests
 
 
 fixing systest and support for binding to default address
 
 
 fixed system tests
 
 
 fix default address binding and ipv6 support
 
 
 fix some issues regarding endpoint parsing. Also, larger segments for systest 
 make the validation much faster
 
 
 added link to security wiki in doc
 
 
 fixing unit test after rename of ProtocolType to SecurityProtocol
 
 
 Following Joe's advice, added security protocol enum on client side, and 
 modified protocol to use ID instead of string.
 
 
 validate producer config against enum
 
 
 add a second protocol for testing and modify SocketServerTests to check on 
 multi-ports
 
 
 Reverted the metadata request changes and removed the explicit security 
 protocol argument. Instead the socketserver will determine the protocol based 
 on the port and add this to the request
 
 
 bump version for UpdateMetadataRequest and added support for rolling upgrades 
 with new config
 
 
 following tests - fixed LeaderAndISR protocol and ZK registration for 
 backward compatibility
 
 
 cleaned up some changes that were actually not necessary. hopefully making 
 this patch slightly easier to review
 
 
 undoing some changes that don't belong here
 
 
 bring back config lost in cleanup
 
 
 fixes neccessary for an all non-plaintext cluster to work
 
 
 minor modifications following comments by Jun
 
 
 added missing license
 
 
 formatting
 
 
 clean up imports
 
 
 cleaned up V2 to not include host+port field. Using use.new.protocol flag to 
 decide which version to serialize
 
 
 change endpoints collection in Broker to Map[protocol, endpoint], mostly to 
 be clear that we intend to have one endpoint per protocol
 
 
 validate that listeners and advertised listeners have unique ports and 
 protocols
 
 
 support legacy configs
 
 
 some fixes following rebase
 
 
 Reverted to backward compatible zk registration, changed use.new.protocol to 
 support multiple versions and few minor fixes
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
   clients/src/main/java/org/apache/kafka/common/protocol/ApiVersion.java 
 PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
 527dd0f9c47fce7310b7a37a9b95bf87f1b9c292 
   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
 a39fab532f73148316a56c0f8e9197f38ea66f79 
   config/server.properties 1614260b71a658b405bb24157c8f12b1f1031aa5 
   core/src/main/scala/kafka/admin/AdminUtils.scala 
 28b12c7b89a56c113b665fbde1b95f873f8624a3 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 285c0333ff43543d3e46444c1cd9374bb883bb59 
   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
 84f60178f6ebae735c8aa3e79ed93fe21ac4aea7 
   core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
 4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
   core/src/main/scala/kafka/api/TopicMetadata.scala 
 0190076df0adf906ecd332284f222ff974b315fc 
   core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
 92ac4e687be22e4800199c0666bfac5e0059e5bb 
   core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
 530982e36b17934b8cc5fb668075a5342e142c59 
   core/src/main/scala/kafka/client/ClientUtils.scala 
 ebba87f0566684c796c26cb76c64b4640a5ccfde 
   core/src/main/scala/kafka/cluster/Broker.scala 
 

Re: 0.9 security features and release schedule

2015-01-25 Thread Jeff Holoman
Justin,

I'd be interested to learn specifically what security features you are
interested in.

There is a doc on the wiki here
https://cwiki.apache.org/confluence/display/KAFKA/Security

As well as the Umbrella JIRA Here:
https://issues.apache.org/jira/browse/KAFKA-1682

Most security-related JIRAs have a component listed as security
https://issues.apache.org/jira/browse/KAFKA-1882?jql=project%20%3D%20KAFKA%20AND%20component%20%3D%20security

There is quite a bit of plumbing work to be done in order for all of the
security features to come together. There technically isn't a 0.9 branch at
the moment; all of the development is being applied to trunk, and since
there isn't yet even an 0.8.3 branch, I don't think you'd be in a good
spot trying to pull in a bunch of stuff that isn't complete.

Just my 2 cents.

Thanks

Jeff


On Sun, Jan 25, 2015 at 10:33 AM, Justin Randell justin.rand...@acquia.com
wrote:

 Hi,

 We're assessing Kafka for use at Acquia. We run tens of thousands of
 customer websites and many internal applications on ~9k AWS EC2
 instances.

 We're currently weighing the pros and cons of starting with 0.8.2 plus
 custom security, or waiting until the security features land in 0.9.

 How likely is Kafka 0.9 to ship in April 2015? (As seen here -
 https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan.)

 How stable is the 0.9 branch? Is it crazy to consider running a 0.9
 beta in production?

 Are there any existing patch sets against 0.8x that implement security
 features?

 thanks,
 Justin




-- 
Jeff Holoman
Systems Engineer


[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-25 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1810:

Component/s: security

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network, security
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2015-01-21 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286491#comment-14286491
 ] 

Jeff Holoman commented on KAFKA-1782:
-

Thank you for the feedback [~guozhang]. I will get to work on this.

 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jeff Holoman
  Labels: newbie
 Fix For: 0.8.3


 This is found while I was working on KAFKA-1580: in many of our cases where 
 we explicitly extend from junit3suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in Junit4, such as 
 (expected=classOf). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose to not rely on Junit annotations other than @Test itself but 
 use scala unit test annotations instead, for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
 intercept[IOException] {
   //nothing
 }
   }
 }
 {code}
 will fail with a clearer stacktrace.
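The `intercept`-based proposal above asserts that a specific exception type is thrown from a block, and fails loudly when nothing is thrown. As a rough sketch of those semantics outside ScalaTest, a minimal Java analogue might look like this (the `Intercept` helper below is a hypothetical illustration, not Kafka or ScalaTest code):

```java
public class Intercept {
    // Runs the body and asserts that an exception of the expected type is
    // thrown, mirroring scalatest's intercept[T] { ... } block: it fails if
    // nothing is thrown, and reports unexpected exception types as failures.
    public static <T extends Throwable> T intercept(Class<T> expected, Runnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("Expected " + expected.getName() + " but got " + t, t);
        }
        throw new AssertionError("Expected " + expected.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        IllegalStateException e = intercept(IllegalStateException.class,
                () -> { throw new IllegalStateException("boom"); });
        System.out.println(e.getMessage()); // prints "boom"
    }
}
```

Unlike a JUnit 3 suite silently ignoring `@Test(expected = ...)`, this pattern fails when no exception is thrown, which is exactly the behavior the proposal is after.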





[KIP-DISCUSSION] KIP-7 Security - IP Filtering

2015-01-21 Thread Jeff Holoman
Posted a KIP for IP Filtering:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering

Relevant JIRA:
https://issues.apache.org/jira/browse/KAFKA-1810

Appreciate any feedback.

Thanks

Jeff


[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-21 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286741#comment-14286741
 ] 

Jeff Holoman commented on KAFKA-1810:
-

The current plan is to rework the configuration portion of this patch once 
KAFKA-1845 is committed (ConfigDef)

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 29714: Patch for KAFKA-1810

2015-01-16 Thread Jeff Holoman


 On Jan. 16, 2015, 1:55 p.m., Eric Olander wrote:
  core/src/test/scala/unit/kafka/network/IPFilterTest.scala, line 71
  https://reviews.apache.org/r/29714/diff/2/?file=823240#file823240line71
 
  This is more an FYI as it may go against established practices for 
  tests in this project.  Kafka includes scalatest which has a nice mechanism 
  for this kind of testing:
  
  intercept[IPFilterConfigException] {
    new IPFilter(range, ruleType)
  }
  
  That does the same as the try/catch with the fail() in the try block.

Agreed. Based on some research I did when looking into KAFKA-1782, the testing 
guidelines are perhaps a bit in flux. I'm open to changing this to whatever 
makes sense.


 On Jan. 16, 2015, 1:55 p.m., Eric Olander wrote:
  core/src/main/scala/kafka/network/IPFilter.scala, line 38
  https://reviews.apache.org/r/29714/diff/2/?file=823235#file823235line38
 
  Pattern matching is unnecessary here.  Instead it is more 
  performant/idiomatic to use the API calls on Option:
  
  if (rgx.findFirstIn(ruleType).isDefined) {
    ruleType
  } else {
    throw new IPFilterConfigException("Invalid rule type specified: " + ruleType)
  }

Thanks for the Review! Will incorporate this.
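The reviewer's suggestion — use the presence-check API on `Option` rather than pattern matching — has a direct analogue on Java's `Optional`. A minimal sketch (the `allow|deny|none` values follow the surrounding discussion; the `RuleTypeValidator` class itself is illustrative only, not project code):

```java
import java.util.Optional;
import java.util.regex.Pattern;

public class RuleTypeValidator {
    private static final Pattern VALID = Pattern.compile("allow|deny|none");

    // A presence check on the filtered Optional replaces explicit branching:
    // a valid rule type passes through unchanged, anything else raises.
    public static String validate(String ruleType) {
        return Optional.of(ruleType)
                .filter(r -> VALID.matcher(r).matches())
                .orElseThrow(() -> new IllegalArgumentException(
                        "Invalid rule type specified: " + ruleType));
    }

    public static void main(String[] args) {
        System.out.println(validate("allow")); // prints "allow"
    }
}
```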


- Jeff


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/#review68424
---


On Jan. 16, 2015, 12:48 a.m., Jeff Holoman wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29714/
 ---
 
 (Updated Jan. 16, 2015, 12:48 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1810
 https://issues.apache.org/jira/browse/KAFKA-1810
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Put back in the 1512 changes and moved the initial BigInt out of the check 
 CIDRRange contains method
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
   core/src/main/scala/kafka/network/SocketServer.scala 
 39b1651b680b2995cedfde95d74c086d9c6219ef 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 6e26c5436feb4629d17f199011f3ebb674aa767f 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 1691ad7fc80ca0b112f68e3ea0cbab265c75b26b 
   core/src/main/scala/kafka/utils/VerifiableProperties.scala 
 2ffc7f452dc7a1b6a06ca7a509ed49e1ab3d1e68 
   core/src/test/scala/unit/kafka/network/IPFilterTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
 78b431f9c88cca1bc5e430ffd41083d0e92b7e75 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 2377abe4933e065d037828a214c3a87e1773a8ef 
 
 Diff: https://reviews.apache.org/r/29714/diff/
 
 
 Testing
 ---
 
 This code centers around a new class, CIDRRange in IPFilter.scala. The 
 IPFilter class is created and holds two fields, the ruleType 
 (allow|deny|none) and a list of CIDRRange objects. This is used in the Socket 
 Server acceptor thread. The check does an exists on the value in the list if 
 the rule type is allow or deny. On object creation, we pre-calculate the 
 lower and upper range values and store those as a BigInt. The overhead of the 
 check should be fairly minimal as it involves converting the incoming IP 
 Address to a BigInt and then just doing a compare to the low/high values. In 
 writing this review up I realized that I can optimize this further to convert 
 to bigint first and move that conversion out of the range check, which I can 
 address.
 
 Testing covers the CIDRRange and IPFilter classes and validation of IPV6, 
 IPV4, and configurations. Additionally the functionality is tested in 
 SocketServerTest. Other changes are just to assist in configuration.
 
 I modified the SocketServerTest to use a method for grabbing the Socket 
 server to make the code a bit more concise.
 
 One key point is that, if there is an error in configuration, we halt the 
 startup of the broker. The thinking there is that if you screw up 
 security-related configs, you want to know about it right away rather than 
 silently accept connections. (thanks Joe Stein for the input).
 
 There are two new exceptions related to this functionality: one to handle 
 configuration errors, and one to handle blocking the request. Currently the 
 level is set to INFO. Does it make sense to move this to WARN?
 
 
 Thanks,
 
 Jeff Holoman
 




[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-15 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1810:

Attachment: KAFKA-1810_2015-01-15_19:47:14.patch

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 29714: Patch for KAFKA-1810

2015-01-15 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/
---

(Updated Jan. 16, 2015, 12:47 a.m.)


Review request for kafka.


Bugs: KAFKA-1810
https://issues.apache.org/jira/browse/KAFKA-1810


Repository: kafka


Description (updated)
---

KAFKA-1810 Refactor


KAFKA-1810 Refactor 2


Diffs (updated)
-

  core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
  core/src/main/scala/kafka/network/SocketServer.scala 
39b1651b680b2995cedfde95d74c086d9c6219ef 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6e26c5436feb4629d17f199011f3ebb674aa767f 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1691ad7fc80ca0b112f68e3ea0cbab265c75b26b 
  core/src/main/scala/kafka/utils/VerifiableProperties.scala 
2ffc7f452dc7a1b6a06ca7a509ed49e1ab3d1e68 
  core/src/test/scala/unit/kafka/network/IPFilterTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
78b431f9c88cca1bc5e430ffd41083d0e92b7e75 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
2377abe4933e065d037828a214c3a87e1773a8ef 

Diff: https://reviews.apache.org/r/29714/diff/


Testing
---

This code centers around a new class, CIDRRange in IPFilter.scala. The IPFilter 
class is created and holds two fields, the ruleType (allow|deny|none) and a 
list of CIDRRange objects. This is used in the Socket Server acceptor thread. 
The check does an exists on the value in the list if the rule type is allow or 
deny. On object creation, we pre-calculate the lower and upper range values and 
store those as a BigInt. The overhead of the check should be fairly minimal as 
it involves converting the incoming IP Address to a BigInt and then just doing 
a compare to the low/high values. In writing this review up I realized that I 
can optimize this further to convert to bigint first and move that conversion 
out of the range check, which I can address.
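A simplified, IPv4-only Java sketch of the pre-computation described above: derive the low and high bounds of the CIDR range once at construction, so each connection check reduces to one address conversion plus two BigInteger comparisons. (The actual patch is Scala and also handles IPv6; the class and method names here are hypothetical.)

```java
import java.math.BigInteger;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class CidrRange {
    private final BigInteger low;   // lowest address in the range
    private final BigInteger high;  // highest address in the range

    public CidrRange(String cidr) {
        String[] parts = cidr.split("/");
        int hostBits = 32 - Integer.parseInt(parts[1]);
        // Zero the host bits for the low bound; the high bound is
        // low + 2^hostBits - 1. Both are computed once, up front.
        this.low = toBigInt(parts[0]).shiftRight(hostBits).shiftLeft(hostBits);
        this.high = low.add(BigInteger.ONE.shiftLeft(hostBits)).subtract(BigInteger.ONE);
    }

    private static BigInteger toBigInt(String ip) {
        try {
            // Positive-sign BigInteger over the raw big-endian address bytes.
            return new BigInteger(1, InetAddress.getByName(ip).getAddress());
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("Bad address: " + ip, e);
        }
    }

    // Per-connection cost: one conversion plus two comparisons.
    public boolean contains(String ip) {
        BigInteger addr = toBigInt(ip);
        return low.compareTo(addr) <= 0 && addr.compareTo(high) <= 0;
    }

    public static void main(String[] args) {
        CidrRange range = new CidrRange("192.168.1.0/24");
        System.out.println(range.contains("192.168.1.42")); // true
        System.out.println(range.contains("192.168.2.1"));  // false
    }
}
```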

Testing covers the CIDRRange and IPFilter classes and validation of IPV6, IPV4, 
and configurations. Additionally the functionality is tested in 
SocketServerTest. Other changes are just to assist in configuration.

I modified the SocketServerTest to use a method for grabbing the Socket server 
to make the code a bit more concise.

One key point is that, if there is an error in configuration, we halt the 
startup of the broker. The thinking there is that if you screw up 
security-related configs, you want to know about it right away rather than 
silently accept connections. (thanks Joe Stein for the input).

There are two new exceptions related to this functionality: one to handle 
configuration errors, and one to handle blocking the request. Currently the 
level is set to INFO. Does it make sense to move this to WARN?


Thanks,

Jeff Holoman



[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-15 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279609#comment-14279609
 ] 

Jeff Holoman commented on KAFKA-1810:
-

Updated reviewboard https://reviews.apache.org/r/29714/diff/
 against branch origin/trunk

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 29714: Patch for KAFKA-1810

2015-01-09 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/#review67573
---



core/src/main/scala/kafka/network/SocketServer.scala
https://reviews.apache.org/r/29714/#comment111609

This needs to be addressed as I backed out the previous fix for 1512



core/src/main/scala/kafka/server/KafkaServer.scala
https://reviews.apache.org/r/29714/#comment111610

Also needs to be fixed to include previous fix for 1512


- Jeff Holoman


On Jan. 8, 2015, 7:14 p.m., Jeff Holoman wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29714/
 ---
 
 (Updated Jan. 8, 2015, 7:14 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1810
 https://issues.apache.org/jira/browse/KAFKA-1810
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1810
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
   core/src/main/scala/kafka/network/SocketServer.scala 
 39b1651b680b2995cedfde95d74c086d9c6219ef 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 6e26c5436feb4629d17f199011f3ebb674aa767f 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 1691ad7fc80ca0b112f68e3ea0cbab265c75b26b 
   core/src/main/scala/kafka/utils/VerifiableProperties.scala 
 2ffc7f452dc7a1b6a06ca7a509ed49e1ab3d1e68 
   core/src/test/scala/unit/kafka/network/IPFilterTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
 78b431f9c88cca1bc5e430ffd41083d0e92b7e75 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 2377abe4933e065d037828a214c3a87e1773a8ef 
 
 Diff: https://reviews.apache.org/r/29714/diff/
 
 
 Testing
 ---
 
 This code centers around a new class, CIDRRange in IPFilter.scala. The 
 IPFilter class is created and holds two fields, the ruleType 
 (allow|deny|none) and a list of CIDRRange objects. This is used in the Socket 
 Server acceptor thread. The check does an exists on the value in the list if 
 the rule type is allow or deny. On object creation, we pre-calculate the 
 lower and upper range values and store those as a BigInt. The overhead of the 
 check should be fairly minimal as it involves converting the incoming IP 
 Address to a BigInt and then just doing a compare to the low/high values. In 
 writing this review up I realized that I can optimize this further to convert 
 to bigint first and move that conversion out of the range check, which I can 
 address.
 
 Testing covers the CIDRRange and IPFilter classes and validation of IPV6, 
 IPV4, and configurations. Additionally the functionality is tested in 
 SocketServerTest. Other changes are just to assist in configuration.
 
 I modified the SocketServerTest to use a method for grabbing the Socket 
 server to make the code a bit more concise.
 
 One key point is that, if there is an error in configuration, we halt the 
 startup of the broker. The thinking there is that if you screw up 
 security-related configs, you want to know about it right away rather than 
 silently accept connections. (thanks Joe Stein for the input).
 
 There are two new exceptions related to this functionality: one to handle 
 configuration errors, and one to handle blocking the request. Currently the 
 level is set to INFO. Does it make sense to move this to WARN?
 
 
 Thanks,
 
 Jeff Holoman
 




Review Request 29714: Patch for KAFKA-1810

2015-01-08 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/
---

Review request for kafka.


Bugs: KAFKA-1810
https://issues.apache.org/jira/browse/KAFKA-1810


Repository: kafka


Description
---

KAFKA-1810


Diffs
-

  core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
  core/src/main/scala/kafka/network/SocketServer.scala 
39b1651b680b2995cedfde95d74c086d9c6219ef 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6e26c5436feb4629d17f199011f3ebb674aa767f 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1691ad7fc80ca0b112f68e3ea0cbab265c75b26b 
  core/src/main/scala/kafka/utils/VerifiableProperties.scala 
2ffc7f452dc7a1b6a06ca7a509ed49e1ab3d1e68 
  core/src/test/scala/unit/kafka/network/IPFilterTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
78b431f9c88cca1bc5e430ffd41083d0e92b7e75 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
2377abe4933e065d037828a214c3a87e1773a8ef 

Diff: https://reviews.apache.org/r/29714/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-08 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1810:

Attachment: KAFKA-1810.patch

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-08 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1810:

Status: Patch Available  (was: Open)

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-08 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269567#comment-14269567
 ] 

Jeff Holoman commented on KAFKA-1810:
-

Created reviewboard https://reviews.apache.org/r/29714/diff/
 against branch origin/trunk

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-08 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14269579#comment-14269579
 ] 

Jeff Holoman commented on KAFKA-1810:
-

This patch is a first pass at implementing the IP filtering logic. It requires 
defining two additional properties: 
security.ip.filter.rule.type 
security.ip.filter.list 

The list of IPs is specified in CIDR notation: 
http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing. This 
implementation supports whitelisting (allow config value) or blacklisting 
(deny); the two are mutually exclusive. The parameters are passed to the socket 
server and validated upon startup. (I'd like to move most of the validation 
logic per KAFKA-1845.) An exception is thrown and the server is shut down in 
the event of misconfiguration of these parameters. The check against the list 
is done in the Acceptor thread, and if the rule check fails, the socket is 
closed. There are a lot of tests included in the patch, but if there are 
suggestions for more, please let me know.
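Under that scheme, a broker whitelisting two ranges might be configured along these lines (the property names come from the comment above; the values are purely illustrative):

```properties
# Only accept connections from the listed CIDR ranges.
security.ip.filter.rule.type=allow
security.ip.filter.list=192.168.1.0/24,10.0.0.0/8
```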

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.





Re: Review Request 29714: Patch for KAFKA-1810

2015-01-08 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29714/
---

(Updated Jan. 8, 2015, 7:14 p.m.)


Review request for kafka.


Bugs: KAFKA-1810
https://issues.apache.org/jira/browse/KAFKA-1810


Repository: kafka


Description
---

KAFKA-1810


Diffs
-

  core/src/main/scala/kafka/network/IPFilter.scala PRE-CREATION 
  core/src/main/scala/kafka/network/SocketServer.scala 
39b1651b680b2995cedfde95d74c086d9c6219ef 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6e26c5436feb4629d17f199011f3ebb674aa767f 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1691ad7fc80ca0b112f68e3ea0cbab265c75b26b 
  core/src/main/scala/kafka/utils/VerifiableProperties.scala 
2ffc7f452dc7a1b6a06ca7a509ed49e1ab3d1e68 
  core/src/test/scala/unit/kafka/network/IPFilterTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
78b431f9c88cca1bc5e430ffd41083d0e92b7e75 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
2377abe4933e065d037828a214c3a87e1773a8ef 

Diff: https://reviews.apache.org/r/29714/diff/


Testing (updated)
---

This code centers around a new class, CIDRRange, in IPFilter.scala. The IPFilter 
class is created and holds two fields: the ruleType (allow|deny|none) and a 
list of CIDRRange objects. This is used in the SocketServer acceptor thread. 
If the rule type is allow or deny, the check does an exists over the values in 
the list. On object creation, we pre-calculate the lower and upper range values 
and store them as a BigInt. The overhead of the check should be fairly minimal, 
as it involves converting the incoming IP address to a BigInt and then just 
doing a compare against the low/high values. In writing this review up I 
realized that I can optimize this further by moving the BigInt conversion 
out of the range check, which I can address.
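
The pre-computed range check described above can be sketched as follows. This 
is an illustrative Java sketch of the same idea, not the actual Scala IPFilter 
code from the patch; the class and method names here are assumptions:

```java
import java.math.BigInteger;
import java.net.InetAddress;
import java.net.UnknownHostException;

// Pre-compute the numeric bounds of a CIDR block once, then test each
// incoming address with two BigInteger compares.
public class CidrRange {
    private final BigInteger low;
    private final BigInteger high;

    public CidrRange(String cidr) {
        String[] parts = cidr.split("/");
        byte[] raw = toAddress(parts[0]);           // 4 bytes IPv4, 16 bytes IPv6
        int prefix = Integer.parseInt(parts[1]);
        // Mask of the host bits: e.g. /24 over 32 bits -> 2^8 - 1
        BigInteger hostMask = BigInteger.ONE.shiftLeft(raw.length * 8 - prefix)
                                            .subtract(BigInteger.ONE);
        BigInteger addr = new BigInteger(1, raw);
        this.low = addr.andNot(hostMask);           // network address (lower bound)
        this.high = addr.or(hostMask);              // last address in the block
    }

    public boolean contains(String ip) {
        BigInteger value = new BigInteger(1, toAddress(ip));
        return low.compareTo(value) <= 0 && value.compareTo(high) <= 0;
    }

    private static byte[] toAddress(String ip) {
        try {
            return InetAddress.getByName(ip).getAddress();
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("Bad address: " + ip, e);
        }
    }
}
```

Because low/high are computed once per rule, the per-connection cost is one 
address-to-BigInt conversion plus two comparisons.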

Testing covers the CIDRRange and IPFilter classes and validation of IPv6, IPv4, 
and configurations. Additionally, the functionality is tested in 
SocketServerTest. Other changes are just to assist in configuration.

I modified the SocketServerTest to use a method for grabbing the Socket server 
to make the code a bit more concise.

One key point is that, if there is an error in configuration, we halt the 
startup of the broker. The thinking there is that if you screw up 
security-related configs, you want to know about it right away rather than 
silently accept connections. (thanks Joe Stein for the input).

There are two new exceptions related to this functionality: one to handle 
configuration errors, and one to handle blocking the request. Currently the 
level is set to INFO. Does it make sense to move this to WARN?


Thanks,

Jeff Holoman



Re: Review Request 29301: Patch for KAFKA-1694

2015-01-06 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29301/#review66838
---



tools/src/main/java/org/apache/kafka/cli/command/AlterTopicCommand.java
https://reviews.apache.org/r/29301/#comment110502

minor nit: grammar. The topic to be created, altered, or described


- Jeff Holoman


On Dec. 24, 2014, 7:22 p.m., Andrii Biletskyi wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29301/
 ---
 
 (Updated Dec. 24, 2014, 7:22 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1694
 https://issues.apache.org/jira/browse/KAFKA-1694
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1694 - introduced new type for Wire protocol, ported 
 ClusterMetadataResponse to it
 
 
 KAFKA-1694 - Split Admin RQ/RP to separate messages
 
 
 KAFKA-1694 - Admin commands can be handled only by controller; 
 DeleteTopicCommand NPE fix
 
 
 Diffs
 -
 
   bin/kafka.sh PRE-CREATION 
   bin/windows/kafka.bat PRE-CREATION 
   build.gradle 18f86e4c8a10618d50ac78572d119c6e100ed85b 
   clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java 
 109fc965e09b2ed186a073351bd037ac8af20a4c 
   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
   clients/src/main/java/org/apache/kafka/common/protocol/types/MaybeOf.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
 121e880a941fcd3e6392859edba11a94236494cc 
   
 clients/src/main/java/org/apache/kafka/common/requests/ClusterMetadataRequest.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/ClusterMetadataResponse.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicOutput.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicRequest.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicResponse.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsOutput.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsRequest.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsResponse.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/TopicConfigDetails.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/requests/admin/TopicPartitionsDetails.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
  df37fc6d8f0db0b8192a948426af603be3444da4 
   core/src/main/scala/kafka/api/ApiUtils.scala 
 1f80de1638978901500df808ca5133308c9d1fca 
   core/src/main/scala/kafka/api/ClusterMetadataRequest.scala PRE-CREATION 
   core/src/main/scala/kafka/api/ClusterMetadataResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/api/RequestKeys.scala 
 c24c0345feedc7b9e2e9f40af11bfa1b8d328c43 
   core/src/main/scala/kafka/api/admin/AlterTopicRequest.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/AlterTopicResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/CreateTopicRequest.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/CreateTopicResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/DeleteTopicRequest.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/DeleteTopicResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/DescribeTopicRequest.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/DescribeTopicResponse.scala 
 PRE-CREATION 
   core/src/main/scala/kafka/api/admin/ListTopicsRequest.scala PRE-CREATION 
   core/src/main/scala/kafka/api/admin/ListTopicsResponse.scala PRE-CREATION 
   core/src/main/scala/kafka/common/AdminRequestFailedException.scala 
 PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 eedc2f5f21dd8755fba891998456351622e17047 
   core/src/main/scala/kafka/common/InvalidRequestTargetException.scala 
 PRE-CREATION 
   core/src

[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2015-01-05 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512-082.patch

Patch against 0.8.2

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Affects Versions: 0.8.2
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512-082.patch, KAFKA-1512.patch, 
 KAFKA-1512.patch, KAFKA-1512_2014-07-03_15:17:55.patch, 
 KAFKA-1512_2014-07-14_13:28:15.patch, KAFKA-1512_2014-12-23_21:47:23.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.
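
The enforcement idea described above can be sketched roughly as follows. This 
is an illustrative Java sketch, not Kafka's actual SocketServer/ConnectionQuotas 
code; it is keyed by an address string for simplicity and all names are 
assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Track live connection counts per client address and refuse accepts that
// would exceed the configured per-IP limit.
public class ConnectionQuota {
    private final int maxPerIp;
    private final Map<String, Integer> counts = new HashMap<>();

    public ConnectionQuota(int maxPerIp) {
        this.maxPerIp = maxPerIp;
    }

    // Called on accept(); returns false when the address is at its limit,
    // in which case the caller should close the new socket.
    public synchronized boolean tryAcquire(String address) {
        int current = counts.getOrDefault(address, 0);
        if (current >= maxPerIp) {
            return false;
        }
        counts.put(address, current + 1);
        return true;
    }

    // Called when a tracked connection closes.
    public synchronized void release(String address) {
        Integer current = counts.get(address);
        if (current == null) {
            return;                        // never acquired; nothing to do
        }
        if (current <= 1) {
            counts.remove(address);        // drop empty entries
        } else {
            counts.put(address, current - 1);
        }
    }
}
```

With a very large default limit (the "2 billion" above), existing deployments 
would never hit the new check.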





[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2014-12-27 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259533#comment-14259533
 ] 

Jeff Holoman commented on KAFKA-1782:
-

I did a check through the tests looking for things like '(expected' and 
'JUnit3Suite'. The good news is I don't think there are any cases where tests 
are passing when they shouldn't be, and I didn't find any instances against 
trunk where a test would pass silently due to features that aren't implemented 
in JUnit 3. There is one exception (HighwatermarkPersistenceTest) where the 
teardown is not being called due to use of the @After notation. There is also 
a bit of mixing where both JUnitSuite and the older junit.framework.Assert 
(vs. org.junit.Assert) are being used. 

So how would we like to proceed here? It probably makes sense to have a 
standard set of libraries that are imported in each test. 
Are we ok with using scalatest features like intercept[] rather than 
@Test(expected..)? If we remove all the references to JUnit3Suite there is 
some cleanup work (mostly in setup/teardown and adding annotations). 
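
For illustration, the intercept-style check mentioned above can be mimicked in 
plain Java. This is a tiny, hypothetical stand-in, not scalatest's actual 
intercept[] implementation:

```java
// Run a block and fail unless the expected exception type is thrown, which
// avoids relying on annotation features a test runner may silently ignore.
public class Intercept {
    public static <T extends Throwable> T intercept(Class<T> expected, Runnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);   // the exception we were waiting for
            }
            throw new AssertionError("Unexpected exception type: " + t, t);
        }
        throw new AssertionError("Expected " + expected.getName()
                                 + " but nothing was thrown");
    }
}
```

Unlike @Test(expected = ...) under the wrong runner, a helper like this fails 
loudly in the test body itself when the exception never arrives.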


 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jeff Holoman
  Labels: newbie
 Fix For: 0.8.2


 This is found while I was working on KAFKA-1580: in many of our cases where 
 we explicitly extend from junit3suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in Junit4, such as 
 (expected=classOf). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose to not rely on Junit annotations other than @Test itself but 
 use scala unit test annotations instead, for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
 intercept[IOException] {
   //nothing
 }
   }
 }
 {code}
 will fail with a clearer stacktrace.





[jira] [Comment Edited] (KAFKA-1782) Junit3 Misusage

2014-12-27 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259533#comment-14259533
 ] 

Jeff Holoman edited comment on KAFKA-1782 at 12/28/14 12:26 AM:


I did a check through the tests looking for things like '(expected' and 
'JUnit3Suite'. The good news is I don't think there are any cases where tests 
are passing when they shouldn't be, and I didn't find any instances against 
trunk where a test would pass silently due to features that aren't implemented 
in JUnit 3. There is one exception (HighwatermarkPersistenceTest) where the 
teardown is not being called due to use of the @After notation. There is also 
a bit of mixing where both JUnitSuite and the older junit.framework.Assert 
(vs. org.junit.Assert) are being used. 

So how would we like to proceed here? It probably makes sense to have a 
standard set of libraries that are imported in each test. 
Are we ok with using scalatest features like intercept[] rather than 
@Test(expected..)? If we remove all the references to JUnit3Suite there is 
some cleanup work (mostly in setup/teardown and adding annotations). 



was (Author: jholoman):
I did a check through the tests looking for things like '(expected' and 
'JUnit3Suite'. The good news is I don't think there are any cases that tests 
are passing where they shouldn't be, and I didn't find any instances against 
trunk where a test would pass silently due features that aren't implemented in 
JUnit 3. There is one exception (HighwatermarkPersistenceTest) where the 
teardown is not being called due to use of the  @After notation. There is also 
a bit of mixing where both JUnitSuite and the older junit.framework.Assert 
(vs. org.junit.Assert) is being used. 

So how would we like to proceed here? It probably makes sense to have a 
standard set of libraries that are imported in each test. 
Are we ok with using scalatest features like intercept[] rather than 
@Test(expected..) ?. If we remove all the references to JUnit3Suite there is 
some cleanup work (mostly in setup/teardwon and adding annotations). 


 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jeff Holoman
  Labels: newbie
 Fix For: 0.8.2


 This is found while I was working on KAFKA-1580: in many of our cases where 
 we explicitly extend from junit3suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in Junit4, such as 
 (expected=classOf). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose to not rely on Junit annotations other than @Test itself but 
 use scala unit test annotations instead, for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
 intercept[IOException] {
   //nothing
 }
   }
 }
 {code}
 will fail with a clearer stacktrace.





[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2014-12-25 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14258803#comment-14258803
 ] 

Jeff Holoman commented on KAFKA-1782:
-

Given the timing, is it best to move the fix version to 0.8.3?

 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jeff Holoman
  Labels: newbie
 Fix For: 0.8.2


 This is found while I was working on KAFKA-1580: in many of our cases where 
 we explicitly extend from junit3suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in Junit4, such as 
 (expected=classOf). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose to not rely on Junit annotations other than @Test itself but 
 use scala unit test annotations instead, for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
 intercept[IOException] {
   //nothing
 }
   }
 }
 {code}
 will fail with a clearer stacktrace.





Re: Review Request 29029: Patch for KAFKA-1512

2014-12-23 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29029/
---

(Updated Dec. 24, 2014, 2:24 a.m.)


Review request for kafka.


Bugs: KAFKA-1512
https://issues.apache.org/jira/browse/KAFKA-1512


Repository: kafka


Description (updated)
---

KAFKA-1512 wire in Override configuration

KAFKA-1512 wire in overrides

KAFKA-1512 test mods


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
5f4d85254c384dcc27a5a84f0836ea225d3a901a 

Diff: https://reviews.apache.org/r/29029/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257846#comment-14257846
 ] 

Jeff Holoman commented on KAFKA-1512:
-

Updated reviewboard https://reviews.apache.org/r/29029/diff/
 against branch origin/trunk

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch, KAFKA-1512_2014-12-14_21:30:27.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512_2014-12-23_21:24:28.patch

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch, KAFKA-1512_2014-12-14_21:30:27.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: (was: KAFKA-1512.patch)

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: (was: KAFKA-1512_2014-12-14_21:16:55.patch)

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: (was: KAFKA-1512_2014-12-14_21:30:27.patch)

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





Re: Review Request 29029: Patch for KAFKA-1512

2014-12-23 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29029/
---

(Updated Dec. 24, 2014, 2:47 a.m.)


Review request for kafka.


Bugs: KAFKA-1512
https://issues.apache.org/jira/browse/KAFKA-1512


Repository: kafka


Description (updated)
---

KAFKA-1512 wire in Override configuration

KAFKA-1512 wire in overrides

KAFKA-1512 test mods

KAFKA-1512 IP Overrides


KAFKA-1512 IP Overrides


Merge branch 'trunk' of https://github.com/apache/kafka into 1512-updates


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
5f4d85254c384dcc27a5a84f0836ea225d3a901a 

Diff: https://reviews.apache.org/r/29029/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512_2014-12-23_21:47:23.patch

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch, KAFKA-1512_2014-12-23_21:47:23.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257864#comment-14257864
 ] 

Jeff Holoman commented on KAFKA-1512:
-

Updated reviewboard https://reviews.apache.org/r/29029/diff/
 against branch origin/trunk

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:24:28.patch, KAFKA-1512_2014-12-23_21:47:23.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: (was: KAFKA-1512_2014-12-23_21:24:28.patch)

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:47:23.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-23 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257869#comment-14257869
 ] 

Jeff Holoman commented on KAFKA-1512:
-

I modified the two tests related to max connections in the SocketServerTest 
class a bit. Basically I'm just checking whether the socket input stream is 
still open, rather than sending a request through. I found in testing that 
isolating the tests to only test SocketServer functionality was a bit hard and 
that sending real requests would be beneficial. I did not modify the other 
tests in SocketServerTest, though I think that utilizing the 
ClusterMetadataRequest in the tests rather than bogus and/or empty producer 
requests may be worth looking into. 
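
The "is the stream still open" technique described above can be sketched like 
this. This is an illustrative Java sketch with assumed names, not the actual 
SocketServerTest code: once the server side drops a connection, a blocking 
read on the client socket returns -1 instead of data.

```java
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Probe whether the peer has closed a TCP connection without sending a
// protocol request through it.
public class ConnectionClosedCheck {
    public static boolean isClosedByPeer(Socket s) throws Exception {
        s.setSoTimeout(500);                  // don't block the test forever
        InputStream in = s.getInputStream();
        try {
            return in.read() == -1;           // -1 means the peer closed the stream
        } catch (SocketTimeoutException e) {
            return false;                     // still open, just no data pending
        }
    }
}
```

This keeps the test focused on socket lifecycle rather than on building up a 
valid wire-protocol request.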

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-23_21:47:23.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





Re: Review Request 29029: Patch for KAFKA-1512

2014-12-15 Thread Jeff Holoman


 On Dec. 15, 2014, 7:58 p.m., Gwen Shapira wrote:
  Jeff,
  
  There's some strange things going on there, I think we need a bit more 
  testing and maybe more implementation :)
  
  1. Please make sure the test fails without the override fix. When I tried, 
  it passed on trunk... this means we are testing the wrong thing.
  
  2. Funny fact: connect() does not actually trigger the Quota mechanism, it 
  is only triggered when you send a request. You can see that by putting a 
  breakpoint in ConnectionQuotas.inc and see where it is called. Since you 
  are only sending data after creating the last connection, even without 
  the override you'll be able to create the first 5 connections and only get 
  the error after the 6 one and the send request... this is probably why 
  the test works with and without the override.
  
  I'm not sure, but this may be a bug in the original maxIP implementation - 
  since I can actually create gazillion connections as long as I don't send 
  anything. I'm not sure if Kafka could run out of resources in this case. 
  Perhaps check with Jay in the JIRA? He probably thought about this.
  
  3. Not sure, but perhaps we need to call fail() explicitly to make sure the 
  test fails if we successfully opened the last connection and sent data?
  
  4. Another funny fact: ((0 until overrideNum).map(i => connect())) creates 
  6 connections, not 5
  
  5. We need to make sure the overrides map is propagated all the way to 
  the ConnectionQuotas code. I don't think it does that at the moment, even 
  after you fixed the SocketServer() call.
  
  Thanks again for your work here, and sorry it got slightly more complex 
  than expected.

Gwen, thanks for the review. I will take a thorough look through the code and 
double check the tests.


- Jeff


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29029/#review65114
---


On Dec. 15, 2014, 2:30 a.m., Jeff Holoman wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29029/
 ---
 
 (Updated Dec. 15, 2014, 2:30 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1512
 https://issues.apache.org/jira/browse/KAFKA-1512
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1512 wire in Override configuration
 
 
 KAFKA-1512 wire in overrides
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 e451592fe358158548117f47a80e807007dd8b98 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
   core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
 5f4d85254c384dcc27a5a84f0836ea225d3a901a 
 
 Diff: https://reviews.apache.org/r/29029/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jeff Holoman
 




[jira] [Comment Edited] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2014-12-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240484#comment-14240484
 ] 

Jeff Holoman edited comment on KAFKA-1810 at 12/14/14 3:08 PM:
---

[~bosco] ,

I did note the broader security-related JIRA and have been following it 
somewhat closely. I'm keenly interested in this topic generally, and I think 
this proposal makes sense for a number of reasons:

1) While this feature is certainly related to security, its intent is really 
more to insulate against configuration issues and to give software administrators 
the ability to manage incoming connections by network range. For example, it can 
make sure my QA environment isn't writing to Prod. 
2) This is a relatively simple change that doesn't require dependencies on a 
much larger security initiative.
3) There's no reason why this feature couldn't be worked into the upcoming 
permissions manager. 
4) The configuration is very minor, a list / range of IP addresses to allow/deny

Essentially this would cover the following scenarios:
1) Allow any incoming connections but deny certain IP blocks
2) The inverse of the above
3) Some combination of the two, where certain subnets were allowed/denied, e.g. 
allow all traffic from 192.168.2.0/12 but deny 192.168.2.0/28.
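The allow/deny semantics sketched above amount to a longest-prefix match over CIDR rules. A minimal, hypothetical Python sketch of that behavior (this is not the KAFKA-1810 patch, just an illustration of the described scenarios):

```python
import ipaddress


def build_rules(allow, deny):
    """Compile allow/deny CIDR strings into (network, allowed) pairs.

    strict=False tolerates host bits in the CIDR (e.g. 192.168.2.0/12),
    normalizing it to the enclosing network.
    """
    rules = [(ipaddress.ip_network(c, strict=False), True) for c in allow]
    rules += [(ipaddress.ip_network(c, strict=False), False) for c in deny]
    return rules


def is_allowed(ip, rules, default=True):
    """Decide by the most specific (longest-prefix) matching rule."""
    addr = ipaddress.ip_address(ip)
    best = None
    for net, allowed in rules:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, allowed)
    return best[1] if best else default


# Scenario 3 from above: allow 192.168.2.0/12 but deny the 192.168.2.0/28 subnet.
rules = build_rules(allow=["192.168.2.0/12"], deny=["192.168.2.0/28"])
print(is_allowed("192.168.2.5", rules))   # False (denied by the /28)
print(is_allowed("192.168.3.1", rules))   # True  (allowed by the /12)
```

Scenarios 1 and 2 fall out of the same mechanism by using only a deny list (with `default=True`) or only an allow list (with `default=False`).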

This concept is probably most closely related to iptables or AWS security 
groups. What I mean here is that while other systems give us more sophisticated 
methods of authorizing access, the ability to filter simple network requests 
from a given IP range still provides a valuable tool in the administrator's kit. 
As such, this would offer a way to restrict access in the near term until a more 
robust implementation is completed, or it could work in conjunction with that 
initiative. As far as I know, the rollout for 1688 is TBD. 



 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512.patch

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.
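The enforcement described above, including the per-IP overrides discussed in the review thread, can be sketched as a counter keyed by client address. This is an illustrative Python sketch under assumed names (`ConnectionQuotas`, `inc`, `dec` mirror the thread's vocabulary but are not Kafka's actual Scala API):

```python
class TooManyConnectionsException(Exception):
    """Raised when a client IP exceeds its connection limit."""


class ConnectionQuotas:
    """Track open connections per client IP, with optional per-IP overrides."""

    def __init__(self, default_max, overrides=None):
        self.default_max = default_max          # e.g. max.connections.per.ip
        self.overrides = dict(overrides or {})  # ip -> overridden limit
        self.counts = {}                        # ip -> current open connections

    def inc(self, ip):
        """Register a new connection at accept time, enforcing the limit."""
        limit = self.overrides.get(ip, self.default_max)
        count = self.counts.get(ip, 0)
        if count >= limit:
            raise TooManyConnectionsException(ip)
        self.counts[ip] = count + 1

    def dec(self, ip):
        """Release a connection when the socket closes."""
        self.counts[ip] -= 1
        if self.counts[ip] == 0:
            del self.counts[ip]
```

Note the limit is checked when the connection is accepted, not when data is first sent; the review comments above suggest the original patch may have only surfaced the error on send.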





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Status: Patch Available  (was: Reopened)

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





Review Request 29029: KAFKA-1512 wire in Override configuration

2014-12-14 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29029/
---

Review request for kafka.


Bugs: KAFKA-1512
https://issues.apache.org/jira/browse/KAFKA-1512


Repository: kafka


Description
---

KAFKA-1512 wire in overrides per previous patch


Diffs
-

  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
5f4d85254c384dcc27a5a84f0836ea225d3a901a 

Diff: https://reviews.apache.org/r/29029/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246003#comment-14246003
 ] 

Jeff Holoman commented on KAFKA-1512:
-

Created reviewboard https://reviews.apache.org/r/29029/diff/
 against branch origin/trunk

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





Re: Review Request 29029: Patch for KAFKA-1512

2014-12-14 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29029/
---

(Updated Dec. 15, 2014, 2:16 a.m.)


Review request for kafka.


Summary (updated)
-

Patch for KAFKA-1512


Bugs: KAFKA-1512
https://issues.apache.org/jira/browse/KAFKA-1512


Repository: kafka


Description (updated)
---

KAFKA-1512 wire in Override configuration


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
5f4d85254c384dcc27a5a84f0836ea225d3a901a 

Diff: https://reviews.apache.org/r/29029/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246237#comment-14246237
 ] 

Jeff Holoman commented on KAFKA-1512:
-

Updated reviewboard https://reviews.apache.org/r/29029/diff/
 against branch origin/trunk

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512_2014-12-14_21:16:55.patch

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





Re: Review Request 29029: Patch for KAFKA-1512

2014-12-14 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29029/
---

(Updated Dec. 15, 2014, 2:21 a.m.)


Review request for kafka.


Bugs: KAFKA-1512
https://issues.apache.org/jira/browse/KAFKA-1512


Repository: kafka


Description
---

KAFKA-1512 wire in Override configuration


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/server/KafkaServer.scala 
1bf7d10cef23a77e71eb16bf6d0e68bc4ebe 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
5f4d85254c384dcc27a5a84f0836ea225d3a901a 

Diff: https://reviews.apache.org/r/29029/diff/


Testing
---


Thanks,

Jeff Holoman



[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512_2014-12-14_21:21:37.patch

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch, KAFKA-1512_2014-12-14_21:21:37.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246242#comment-14246242
 ] 

Jeff Holoman commented on KAFKA-1512:
-

Updated reviewboard https://reviews.apache.org/r/29029/diff/
 against branch origin/trunk

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch, KAFKA-1512_2014-12-14_21:21:37.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1512:

Attachment: KAFKA-1512_2014-12-14_21:30:27.patch

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch, KAFKA-1512_2014-12-14_21:30:27.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-14 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14246250#comment-14246250
 ] 

Jeff Holoman commented on KAFKA-1512:
-

Updated reviewboard https://reviews.apache.org/r/29029/diff/
 against branch origin/trunk

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
 KAFKA-1512_2014-12-14_21:16:55.patch, KAFKA-1512_2014-12-14_21:30:27.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Assigned] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-12 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman reassigned KAFKA-1512:
---

Assignee: Jeff Holoman

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jeff Holoman
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-12 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14244729#comment-14244729
 ] 

Jeff Holoman commented on KAFKA-1512:
-

[~jkreps] I can fix this if you're ok with that.

 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-12-11 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14243430#comment-14243430
 ] 

Jeff Holoman commented on KAFKA-1512:
-

[~jkreps], I noticed that the overrides are not fully implemented in this 
patch. Was the intent to leave that feature out? Based on your last comment I 
can see why but wanted to confirm. What are your thoughts on the override 
functionality now?



 Limit the maximum number of connections per ip address
 --

 Key: KAFKA-1512
 URL: https://issues.apache.org/jira/browse/KAFKA-1512
 Project: Kafka
  Issue Type: New Feature
Reporter: Jay Kreps
Assignee: Jay Kreps
 Fix For: 0.8.2

 Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
 KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch


 To protect against client connection leaks add a new configuration
   max.connections.per.ip
 that causes the SocketServer to enforce a limit on the maximum number of 
 connections from each InetAddress instance. For backwards compatibility this 
 will default to 2 billion.





Re: Review Request 28859: Patch for KAFKA-1812

2014-12-10 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28859/
---

(Updated Dec. 11, 2014, 2:39 a.m.)


Review request for kafka.


Bugs: KAFKA-1812
https://issues.apache.org/jira/browse/KAFKA-1812


Repository: kafka


Description (updated)
---

KAFKA-1812 Updated based on rb comments


Diffs (updated)
-

  core/src/main/scala/kafka/utils/Utils.scala 
58685cc47b4c43e4ee68b73f1ee34eb99a5aa547 
  core/src/test/scala/unit/kafka/utils/UtilsTest.scala 
0d0f0e2fba367180eeb718a259e8d680a73c3a73 

Diff: https://reviews.apache.org/r/28859/diff/


Testing
---


Thanks,

Jeff Holoman


