Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-18 Thread Harsha
Thanks. I am also leaning towards Option 2, as it will keep the way such mappings are 
expressed consistent between SASL and SSL.
-Harsha

On Tue, Sep 18, 2018, at 8:27 PM, Manikumar wrote:
> Hi Harsha,
> 
> Thanks for the review. Yes, as mentioned in the motivation section, this is
> to simplify extracting fields from the certificates
> for the common use cases. Yes, we are not supporting extracting
> SubjectAltName using this KIP.
> 
> Thanks,
> 
> 
> On Wed, Sep 19, 2018 at 8:29 AM Harsha  wrote:
> 
> > Hi Manikumar,
> > I am interested to know the reason for exposing this config, given
> > a user has access to PrincipalBuilder interface to build their
> > interpretation of an identity from the X509 certificates. Is this to
> > simplify extraction of identity? and also there are other use cases where
> > users will extract SubjectAltName to construct the identity; I guess that's
> > not going to be supported by this method.
> >
> > Thanks,
> > Harsha
> >
> > On Tue, Sep 18, 2018, at 8:25 AM, Manikumar wrote:
> > > Hi Rajini,
> > >
> > > I don't have strong reasons for rejecting Option 2. I just felt Option 1
> > is
> > > sufficient for
> > > the common use-cases (extracting single field, like CN etc..).
> > >
> > > We are open to go with Option 2, for more flexible mapping mechanism.
> > > Let us know, your preference.
> > >
> > > Thanks,
> > >
> > >
> > > On Tue, Sep 18, 2018 at 8:05 PM Rajini Sivaram 
> > > wrote:
> > >
> > > > Hi Manikumar,
> > > >
> > > > It wasn't entirely clear to me why Option 2 was rejected.
> > > >
> > > > On Tue, Sep 18, 2018 at 7:47 AM, Manikumar 
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > We would like to go with Option 1, which adds a new configuration
> > > > parameter
> > > > > pair of the form:
> > > > > ssl.principal.mapping.pattern, ssl.principal.mapping.value. This will
> > > > > fulfill the requirement for most of the common use cases.
> > > > >
> > > > > We would like to include the KIP in the upcoming release. If there are no
> > > > > concerns, we would like to start a vote on this KIP.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah  > >
> > > > > wrote:
> > > > >
> > > > > > Definitely a helpful change. +1 for Option 2.
> > > > > >
> > > > > > On 9/14/18, 10:52 AM, "Manikumar" 
> > wrote:
> > > > > >
> > > > > > Hi Eno,
> > > > > >
> > > > > > Thanks for the review.
> > > > > >
> > > > > > Most often users want to extract one of the field (eg. CN). CN
> > is
> > > > the
> > > > > > commonly used field.
> > > > > > For this simple change, users need to build and maintain the
> > custom
> > > > > > principal builder class
> > > > > > and also package and deploy to the all brokers. Having
> > configurable
> > > > > > rules
> > > > > > will be useful.
> > > > > >
> > > > > > Proposed mapping rules works on string representation of the
> > X.500
> > > > > > distinguished name(RFC2253 format) [1].
> > > > > > Mapping rules can use the attribute types keywords defined in
> > RFC
> > > > > 2253
> > > > > > (CN,
> > > > > > L, ST, O, OU, C, STREET, DC, UID).
> > > > > >
> > > > > > Any additional/custom attribute types are emitted as OIDs. To
> > emit
> > > > > > additional attribute type keys,
> > > > > > we need to have OID -> attribute type keyword String
> > mapping.[2]
> > > > > >
> > > > > > For example, String representation of X500Principal("CN=Duke,
> > > > > > OU=JavaSoft,
> > > > > > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > > > > > will be "CN=Duke,OU=JavaSoft,O=Sun
> > > > > > Microsystems,C=US,1.2.840.113549.1.9.1=#
> > > > > 160d7465737440746573742e636f6d"
> > > > > >
> > > > > > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > > > > > "emailAddress"),
> > > > > > the string will be
> > > > > > "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> > > > > > t...@test.com"
> > > > > >
> > > > > > Since we are not passing this mapping, we cannot extract using
> > > > > > additional
> > > > > > attribute type keyword string.
> > > > > > If the user wants to extract additional attribute keys, we need
> > to
> > > > > write
> > > > > > custom principal builder class.
> > > > > >
> > > > > > Hope the above helps. Updated the KIP.
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > >
> > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > > > > X500Principal.html#getName(java.lang.String)
> > > > > >
> > > > > > [2]
> > > > > >
> > > > > >
> > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > > > > X500Principal.html#getName(java.lang.String,%20java.util.Map)
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > On Fri, Sep 14, 2018 at 7:44 PM Eno Thereska <
> > > > eno.there...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Manikumar, thanks. If I understand the KIP motivation right,
> > 
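
For readers following the OID discussion above, here is a minimal, self-contained Java sketch of the behaviour Manikumar describes, using the standard javax.security.auth.x500.X500Principal API. The DN, the example e-mail address, and the CN-extraction regex are hypothetical stand-ins for the kind of ssl.principal.mapping.pattern/value pair the KIP proposes; they are not taken from the KIP itself.

```
import java.util.Collections;
import java.util.Map;
import javax.security.auth.x500.X500Principal;

public class PrincipalNameDemo {
    public static void main(String[] args) {
        // Hypothetical DN similar to the one discussed above.
        X500Principal principal = new X500Principal(
            "CN=Duke, OU=JavaSoft, O=Sun Microsystems, C=US, EMAILADDRESS=duke@example.com");

        // Without an OID map, the e-mail attribute is emitted as a raw OID with a hex-encoded value.
        System.out.println(principal.getName(X500Principal.RFC2253));

        // With an OID -> keyword mapping, the attribute is emitted under a readable key.
        Map<String, String> oidMap =
            Collections.singletonMap("1.2.840.113549.1.9.1", "emailAddress");
        System.out.println(principal.getName(X500Principal.RFC2253, oidMap));

        // A pattern/value rule of the kind proposed in the KIP could then extract, e.g., the CN.
        String dn = principal.getName(X500Principal.RFC2253);
        String cn = dn.replaceFirst("^.*?CN=([^,]*).*$", "$1"); // hypothetical mapping rule
        System.out.println(cn); // prints: Duke
    }
}
```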

Request for contributor permissions

2018-09-18 Thread DANIEL HUANG
As per the title.

JIRA ID: doniel0...@gmail.com
Cwiki ID: DANIEL HUANG

Thanks!


confirm subscribe to dev@kafka.apache.org

2018-09-18 Thread 黃旭威
JIRA ID: doniel0...@gmail.com
Cwiki ID: DANIEL HUANG


Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-18 Thread Manikumar
Hi Harsha,

Thanks for the review. Yes, as mentioned in the motivation section, this is
to simplify extracting fields from the certificates
for the common use cases. Yes, we are not supporting extracting
SubjectAltName using this KIP.

Thanks,


On Wed, Sep 19, 2018 at 8:29 AM Harsha  wrote:

> Hi Manikumar,
> I am interested to know the reason for exposing this config, given
> a user has access to PrincipalBuilder interface to build their
> interpretation of an identity from the X509 certificates. Is this to
> simplify extraction of identity? and also there are other use cases where
> users will extract SubjectAltName to construct the identity; I guess that's
> not going to be supported by this method.
>
> Thanks,
> Harsha
>
> On Tue, Sep 18, 2018, at 8:25 AM, Manikumar wrote:
> > Hi Rajini,
> >
> > I don't have strong reasons for rejecting Option 2. I just felt Option 1
> is
> > sufficient for
> > the common use-cases (extracting single field, like CN etc..).
> >
> > We are open to go with Option 2, for more flexible mapping mechanism.
> > Let us know, your preference.
> >
> > Thanks,
> >
> >
> > On Tue, Sep 18, 2018 at 8:05 PM Rajini Sivaram 
> > wrote:
> >
> > > Hi Manikumar,
> > >
> > > It wasn't entirely clear to me why Option 2 was rejected.
> > >
> > > On Tue, Sep 18, 2018 at 7:47 AM, Manikumar 
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > We would like to go with Option 1, which adds a new configuration
> > > parameter
> > > > pair of the form:
> > > > ssl.principal.mapping.pattern, ssl.principal.mapping.value. This will
> > > > fulfill the requirement for most of the common use cases.
> > > >
> > > > We would like to include the KIP in the upcoming release. If there are no
> > > > concerns, we would like to start a vote on this KIP.
> > > >
> > > > Thanks,
> > > >
> > > > On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah  >
> > > > wrote:
> > > >
> > > > > Definitely a helpful change. +1 for Option 2.
> > > > >
> > > > > On 9/14/18, 10:52 AM, "Manikumar" 
> wrote:
> > > > >
> > > > > Hi Eno,
> > > > >
> > > > > Thanks for the review.
> > > > >
> > > > > Most often users want to extract one of the field (eg. CN). CN
> is
> > > the
> > > > > commonly used field.
> > > > > For this simple change, users need to build and maintain the
> custom
> > > > > principal builder class
> > > > > and also package and deploy to the all brokers. Having
> configurable
> > > > > rules
> > > > > will be useful.
> > > > >
> > > > > Proposed mapping rules works on string representation of the
> X.500
> > > > > distinguished name(RFC2253 format) [1].
> > > > > Mapping rules can use the attribute types keywords defined in
> RFC
> > > > 2253
> > > > > (CN,
> > > > > L, ST, O, OU, C, STREET, DC, UID).
> > > > >
> > > > > Any additional/custom attribute types are emitted as OIDs. To
> emit
> > > > > additional attribute type keys,
> > > > > we need to have OID -> attribute type keyword String
> mapping.[2]
> > > > >
> > > > > For example, String representation of X500Principal("CN=Duke,
> > > > > OU=JavaSoft,
> > > > > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > > > > will be "CN=Duke,OU=JavaSoft,O=Sun
> > > > > Microsystems,C=US,1.2.840.113549.1.9.1=#
> > > > 160d7465737440746573742e636f6d"
> > > > >
> > > > > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > > > > "emailAddress"),
> > > > > the string will be
> > > > > "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> > > > > t...@test.com"
> > > > >
> > > > > Since we are not passing this mapping, we cannot extract using
> > > > > additional
> > > > > attribute type keyword string.
> > > > > If the user wants to extract additional attribute keys, we need
> to
> > > > write
> > > > > custom principal builder class.
> > > > >
> > > > > Hope the above helps. Updated the KIP.
> > > > >
> > > > > [1]
> > > > >
> > > > >
> https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > > > X500Principal.html#getName(java.lang.String)
> > > > >
> > > > > [2]
> > > > >
> > > > >
> https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > > > X500Principal.html#getName(java.lang.String,%20java.util.Map)
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Fri, Sep 14, 2018 at 7:44 PM Eno Thereska <
> > > eno.there...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Manikumar, thanks. If I understand the KIP motivation right,
> this
> > > > is
> > > > > > currently already possible, but you want to have an easier
> way of
> > > > > doing it,
> > > > > > right? The motivation would be stronger if we had 2-3 common
> > > cases
> > > > > (from
> > > > > > experience) where the suggested pattern would solve them, and
> > > > > perhaps 1-2
> > > > > > cases where the pattern would not be adequate and we'd have
> to
> > > fall
> > > > > back to
> > > > > > the existing builder 

Re: A question about kafka streams API

2018-09-18 Thread John Roesler
Hey Yui,

Sorry, I haven't had a chance to respond. I've got a pretty busy couple of
weeks coming up, so I don't know when I'll look at this, but I find this
puzzling. I'll save your email and try what you said to see if I can figure
it out. Thanks for the repro code.

Let me know if you figure it out. Also, if you think you've found a bug,
feel free to file a jira ticket as well. It might get broader visibility
that way.

Thanks,
-John

On Thu, Sep 13, 2018 at 1:57 AM Yui Yoi  wrote:

> Hi Adam and John, thank you for your effort!
> We are implementing full idempotency in our projects so that's nothing to
> worry about.
> As to what John said - we only have one partition, I personally assured
> that.
> So as i wrote in section 2. of my first message in this conversation - my
> stream should have processed the "asd" message again because it is not
> committed yet.
> That's why i suspect it has something to do with the stream's cache; maybe
> something like:
> 1. "asd" got processed and restored in cache
> 2. "{}" got processed and cached too.
> 3. commit interval makes the stream commit the offset of "{}"
>
>
> B.t.w
> If you want to run my application you should:
> 1. open it in some java editor as maven project
> 2. run it as a normal java application
> 3. setup kafka server & zookeeper on localhost
> 4. then you can send the above messages via cli
>
> John - even if you send "asd1", "asd2", "asd3" you will see in the logs
> that my app takes the latest each time
>
> Of course that's far beyond what i can ask from you guys to do, thanks a
> lot for your help.
>
> On Wed, Sep 12, 2018 at 8:14 PM John Roesler  wrote:
>
> > Hi!
> >
> > As Adam said, if you throw an exception during processing, it should
> cause
> > Streams to shut itself down and *not* commit that message. Therefore,
> when
> > you start up again, it should again attempt to process that same message
> > (and shut down again).
> >
> > Within a single partition, messages are processed in order, so a bad
> > message will block the queue, and you should not see subsequent messages
> > get processed.
> >
> > However, if your later message "{}" goes to a different partition than
> the
> > bad message, then there's no relationship between them, and the later,
> > good, message might get processed.
> >
> > Does that help?
> > -John
> >
> > On Wed, Sep 12, 2018 at 8:38 AM Adam Bellemare  >
> > wrote:
> >
> > > Hi Yui Yoi
> > >
> > >
> > > Keep in mind that Kafka Consumers don't traditionally request only a
> > single
> > > message at a time, but instead requests them in batches. This allows
> for
> > > much higher throughput, but does result in the scenario of
> > "at-least-once"
> > > processing. Generally what will happen in this scenario is the
> following:
> > >
> > > 1) Client requests the next set of messages from offset (t). For
> example,
> > > assume it gets 10 messages and message 6 is "bad".
> > > 2) The client's processor will then process the messages one at a time.
> > > Note that the offsets are not committed after the message is processed,
> > but
> > > only at the end of the batch.
> > > 3) The bad message is hit by the processor. At this point you can
> decide
> > to
> > > skip the message, throw an exception, etc.
> > > 4a) If you decide to skip the message, processing will continue. Once
> all
> > > 10 messages are processed, the new offset (t+10) offset is committed
> back
> > > to Kafka.
> > > 4b) If you decide to throw an exception and terminate your app, you
> will
> > > have still processed the messages that came before the bad message.
> > Because
> > > the offset (t+10) is not committed, the next time you start the app it
> > will
> > > consume from offset t, and those messages will be processed again. This
> > is
> > > "at-least-once" processing.
> > >
> > >
> > > Now, if you need exactly-once processing, you have two choices -
> > > 1) Use Kafka Streams with exactly-once semantics (though, as I am not
> > > familiar with your framework, it may support it as well).
> > > 2) Use idempotent practices (ie: it doesn't matter if the same messages
> > get
> > > processed more than once).
> > >
> > >
> > > Hope this helps -
> > >
> > > Adam
> > >
> > >
> > > On Wed, Sep 12, 2018 at 7:59 AM, Yui Yoi  wrote:
> > >
> > > > Hi Adam,
> > > > Thanks a lot for the rapid response, it did helped!
> > > >
> > > > Let me though ask one more simple question: Can I make a stream
> > > application
> > > > stuck on an invalid message? and not consuming any further messages?
> > > >
> > > > Thanks again
> > > >
> > > > On Wed, Sep 12, 2018 at 2:35 PM Adam Bellemare <
> > adam.bellem...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi Yui Yoi
> > > > >
> > > > > Preface: I am not familiar with the spring framework.
> > > > >
> > > > > "Earliest" when it comes to consuming from Kafka means, "Start
> > reading
> > > > from
> > > > > the first message in the topic, *if there is no offset stored for
> > that
> > > > > consumer group*". It 
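
To make Adam's batch-and-commit description above concrete, here is a rough Java sketch of the at-least-once pattern with a plain consumer: offsets are committed only after the whole polled batch has been handled, so an exception thrown mid-batch means nothing from that batch is committed and the records are re-read on restart. The topic name, group id, and bootstrap address are hypothetical; this is not code from the thread.

```
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
        props.put("group.id", "demo-group");                // hypothetical group
        props.put("enable.auto.commit", "false");           // commit manually, once per batch
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            while (true) {
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : batch) {
                    // Real processing goes here; if it throws, commitSync below is never
                    // reached and the whole batch is re-delivered after a restart.
                    System.out.println(record.offset() + ": " + record.value());
                }
                consumer.commitSync(); // commit only after the full batch succeeded
            }
        }
    }
}
```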

Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-18 Thread Harsha
Hi Manikumar,
I am interested to know the reason for exposing this config, given that a 
user has access to the PrincipalBuilder interface to build their own interpretation of 
an identity from the X509 certificates. Is this to simplify extraction of the 
identity? Also, there are other use cases where users will extract 
SubjectAltName to construct the identity; I guess that's not going to be supported 
by this method.

Thanks,
Harsha

On Tue, Sep 18, 2018, at 8:25 AM, Manikumar wrote:
> Hi Rajini,
> 
> I don't have strong reasons for rejecting Option 2. I just felt Option 1 is
> sufficient for
> the common use-cases (extracting single field, like CN etc..).
> 
> We are open to go with Option 2, for more flexible mapping mechanism.
> Let us know, your preference.
> 
> Thanks,
> 
> 
> On Tue, Sep 18, 2018 at 8:05 PM Rajini Sivaram 
> wrote:
> 
> > Hi Manikumar,
> >
> > It wasn't entirely clear to me why Option 2 was rejected.
> >
> > On Tue, Sep 18, 2018 at 7:47 AM, Manikumar 
> > wrote:
> >
> > > Hi All,
> > >
> > > We would like to go with Option 1, which adds a new configuration
> > parameter
> > > pair of the form:
> > > ssl.principal.mapping.pattern, ssl.principal.mapping.value. This will
> > > fulfill the requirement for most of the common use cases.
> > >
> > > We would like to include the KIP in the upcoming release. If there are no
> > > concerns, we would like to start a vote on this KIP.
> > >
> > > Thanks,
> > >
> > > On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah 
> > > wrote:
> > >
> > > > Definitely a helpful change. +1 for Option 2.
> > > >
> > > > On 9/14/18, 10:52 AM, "Manikumar"  wrote:
> > > >
> > > > Hi Eno,
> > > >
> > > > Thanks for the review.
> > > >
> > > > Most often users want to extract one of the field (eg. CN). CN is
> > the
> > > > commonly used field.
> > > > For this simple change, users need to build and maintain the custom
> > > > principal builder class
> > > > and also package and deploy to the all brokers. Having configurable
> > > > rules
> > > > will be useful.
> > > >
> > > > Proposed mapping rules works on string representation of the X.500
> > > > distinguished name(RFC2253 format) [1].
> > > > Mapping rules can use the attribute types keywords defined in RFC
> > > 2253
> > > > (CN,
> > > > L, ST, O, OU, C, STREET, DC, UID).
> > > >
> > > > Any additional/custom attribute types are emitted as OIDs. To emit
> > > > additional attribute type keys,
> > > > we need to have OID -> attribute type keyword String mapping.[2]
> > > >
> > > > For example, String representation of X500Principal("CN=Duke,
> > > > OU=JavaSoft,
> > > > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > > > will be "CN=Duke,OU=JavaSoft,O=Sun
> > > > Microsystems,C=US,1.2.840.113549.1.9.1=#
> > > 160d7465737440746573742e636f6d"
> > > >
> > > > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > > > "emailAddress"),
> > > > the string will be
> > > > "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> > > > t...@test.com"
> > > >
> > > > Since we are not passing this mapping, we cannot extract using
> > > > additional
> > > > attribute type keyword string.
> > > > If the user wants to extract additional attribute keys, we need to
> > > write
> > > > custom principal builder class.
> > > >
> > > > Hope the above helps. Updated the KIP.
> > > >
> > > > [1]
> > > >
> > > > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > > X500Principal.html#getName(java.lang.String)
> > > >
> > > > [2]
> > > >
> > > > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > > X500Principal.html#getName(java.lang.String,%20java.util.Map)
> > > >
> > > > Thanks
> > > >
> > > > On Fri, Sep 14, 2018 at 7:44 PM Eno Thereska <
> > eno.there...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Manikumar, thanks. If I understand the KIP motivation right, this
> > > is
> > > > > currently already possible, but you want to have an easier way of
> > > > doing it,
> > > > > right? The motivation would be stronger if we had 2-3 common
> > cases
> > > > (from
> > > > > experience) where the suggested pattern would solve them, and
> > > > perhaps 1-2
> > > > > cases where the pattern would not be adequate and we'd have to
> > fall
> > > > back to
> > > > > the existing builder class.
> > > > >
> > > > > Thanks
> > > > > Eno
> > > > >
> > > > > On Fri, Sep 14, 2018 at 12:36 PM, Manikumar <
> > > > manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > We'd appreciate any thoughts / comments on the proposed options
> > > for
> > > > > > customizing SSL principal names.
> > > > > > We are happy to discuss any alternative approaches/suggestions.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > On Sat, Sep 8, 2018 at 12:45 AM 

Jenkins build is back to normal : kafka-trunk-jdk8 #2974

2018-09-18 Thread Apache Jenkins Server
See 




Re: Accessing Topology Builder

2018-09-18 Thread Bill Bejeck
Jorge:

I have a crazy idea off the top of my head.

Would something as low-tech as using KStream.peek calls on either side of
certain processors to record start and end times work?

Thanks,
Bill

On Tue, Sep 18, 2018 at 4:38 PM Guozhang Wang  wrote:

> Jorge:
>
> My suggestion was to let your users to implement on the
> TracingProcessorSupplier
> / TracingProcessor directly instead of the base-line ProcessorSupplier /
> Processor. Would that work for you?
>
>
> Guozhang
>
>
> On Tue, Sep 18, 2018 at 8:02 AM, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Thanks Guozhang and John.
> >
> > @Guozhang:
> >
> > > I'd suggest to provide a
> > > WrapperProcessorSupplier for the users than modifying
> > > InternalStreamsTopology: more specifically, you can provide an
> > > `abstract WrapperProcessorSupplier
> > > implements ProcessorSupplier` and then let users to instantiate this
> > class
> > > instead of the "bare-metal" interface. WDYT?
> >
> > Yes, in the gist, I have a class implementing `ProcessorSupplier`:
> >
> > ```
> > public class TracingProcessorSupplier implements
> ProcessorSupplier > V> {
> >   final KafkaTracing kafkaTracing;
> >   final String name;
> >   final ProcessorSupplier delegate;
> >public TracingProcessorSupplier(KafkaTracing kafkaTracing,
> >   String name, ProcessorSupplier delegate) {
> > this.kafkaTracing = kafkaTracing;
> > this.name = name;
> > this.delegate = delegate;
> >   }
> >@Override public Processor get() {
> > return new TracingProcessor<>(kafkaTracing, name, delegate.get());
> >   }
> > }
> > ```
> >
> > My challenge is how to wrap Topology Processors created by
> > `StreamsBuilder#build` to make this instrumentation easy to adopt by
> Kafka
> > Streams users.
> >
> > @John:
> >
> > > The diff you posted only contains the library-side changes, and it's
> not
> > > obvious how you would use this to insert the desired tracing code.
> > > Perhaps you could provide a snippet demonstrating how you want to use
> > this
> > > change to enable tracing?
> >
> > My first approach was something like this:
> >
> > ```
> > final StreamsBuilder builder = kafkaStreamsTracing.builder();
> > ```
> >
> > Where `KafkaStreamsTracing#builder` looks like this:
> >
> > ```
> >   public StreamsBuilder builder() {
> > return new StreamsBuilder(new Topology(new
> > TracingInternalTopologyBuilder(kafkaTracing)));
> >   }
> > ```
> >
> > Then, once the builder creates a topology, `processors` will be wrapped
> by
> > `TracingProcessorSupplier` described above.
> >
> > Probably this approach is too naive but works as an initial proof of
> > concept.
> >
> > > Off the top of my head, here are some other approaches you might
> > evaluate:
> > > * you mentioned interceptors. Perhaps we could create a
> > > ProcessorInterceptor interface and add a config to set it.
> >
> > This sounds very interesting to me. Then we won't need to touch internal
> > API's, and just provide some configs. One challenge here is how to define
> > the hooks. In consumer/producer, lifecycle is clear,
> `onConsumer`/`onSend`
> > and then `onCommit`/`onAck` methods. For Stream processors, how this will
> > look like? Maybe `beforeProcess(context, key, value)` and
> > `afterProcess(context, key, value)`.
> >
> > > * perhaps we could simply build the tracing headers into Streams. Is
> > there
> > > a benefit to making it customizable?
> >
> > I don't understand this option completely. Do you mean something like
> > KIP-159 (
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 159%3A+Introducing+Rich+functions+to+Streams
> > )?
> > Headers available on StreamsDSL will allow users to create "custom"
> traces,
> > for instance:
> >
> > ```
> > stream.map( (headers, k, v) -> {
> >   Span span = kafkaTracing.nextSpan(headers).start();
> >   doSomething(k, v);
> >   span.finish();
> > }
> > ```
> >
> > but it won't be possible to instrument the existing processors exposed by
> > DSL only by enabling headers on Streams DSL.
> >
> > If we can define a way to pass a `ProcessorSupplier` to be used by
> > `StreamsBuilder#internalTopology` -not sure if via constructor or some
> > other way- would be enough to support this use-case.
> >
> > > Also, as Matthias said, you would need to create a KIP to propose this
> > > change, but of course we can continue this preliminary discussion until
> > you
> > > feel confident to create the KIP.
> >
> > Happy to do it once the approach is clearer.
> >
> > Cheers,
> > Jorge.
> >
> > El lun., 17 sept. 2018 a las 17:09, John Roesler ()
> > escribió:
> >
> > > If I understand the request, it's about tracking the latencies for a
> > > specific record, not the aggregated latencies for each processor.
> > >
> > > Jorge,
> > >
> > > The diff you posted only contains the library-side changes, and it's
> not
> > > obvious how you would use this to insert the desired tracing code.
> > 

[jira] [Created] (KAFKA-7420) Global stores should be guarded as read-only for regular tasks

2018-09-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7420:
--

 Summary: Global stores should be guarded as read-only for regular 
tasks
 Key: KAFKA-7420
 URL: https://issues.apache.org/jira/browse/KAFKA-7420
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Global stores should only be updated by the global thread. Any other task 
should only read from a global store. However, when getting a reference to a 
global store, all tasks currently have full read/write access to the store.

We should put a guard in place and only return either _(a)_ a read-only store, 
or _(b)_ a wrapped store that throws an exception on write for regular tasks.

While the read-only store idea might be cleaner from an API point of view, we 
should consider the second approach for 2 reasons: (1) it's backwards 
compatible (of course, code might fail at runtime, but this seems to be ok, as 
it indicates a bug in the user code anyway) (2) with regard to 
[KIP-358|https://cwiki.apache.org/confluence/display/KAFKA/KIP-358%3A+Migrate+Streams+API+to+Duration+instead+of+long+ms+times],
 we should have the more runtime efficient methods at this level (currently, 
global stores are only key-value stores and this argument falls a little short 
though—however, it might be a good idea to stay future proof; at least, we 
should discuss it).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
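
A minimal sketch of option (b) described in the ticket above: a delegating wrapper that leaves reads untouched and fails fast on writes. This is only an illustration against the public KeyValueStore interface as it existed around Kafka 2.0, not the eventual Kafka implementation, and the class name is made up.

```
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.StateStore;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class ReadOnlyGuardKeyValueStore<K, V> implements KeyValueStore<K, V> {
    private final KeyValueStore<K, V> inner;

    public ReadOnlyGuardKeyValueStore(KeyValueStore<K, V> inner) { this.inner = inner; }

    // reads delegate to the underlying store
    @Override public V get(K key) { return inner.get(key); }
    @Override public KeyValueIterator<K, V> range(K from, K to) { return inner.range(from, to); }
    @Override public KeyValueIterator<K, V> all() { return inner.all(); }
    @Override public long approximateNumEntries() { return inner.approximateNumEntries(); }

    // writes fail fast, signalling a bug in the user code
    @Override public void put(K key, V value) { throw readOnly(); }
    @Override public V putIfAbsent(K key, V value) { throw readOnly(); }
    @Override public void putAll(List<KeyValue<K, V>> entries) { throw readOnly(); }
    @Override public V delete(K key) { throw readOnly(); }

    private UnsupportedOperationException readOnly() {
        return new UnsupportedOperationException("global stores are read-only for regular tasks");
    }

    // lifecycle methods delegate unchanged
    @Override public String name() { return inner.name(); }
    @Override public void init(ProcessorContext context, StateStore root) { inner.init(context, root); }
    @Override public void flush() { inner.flush(); }
    @Override public void close() { inner.close(); }
    @Override public boolean persistent() { return inner.persistent(); }
    @Override public boolean isOpen() { return inner.isOpen(); }
}
```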


Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-09-18 Thread Jun Rao
Hi, Lucas,

When upgrading to a minor release, I think the expectation is that a user
wouldn't need to make any config changes, other than the usual
inter.broker.protocol. If we require other config changes during an
upgrade, then it's probably better to do that in a major release.

Regarding your proposal, I think removing host/advertised_host in favor of
listeners:advertised_listeners seems useful regardless of this KIP.
However, that can probably wait until a major release.

As for the controller listener, I am not sure if one has to set it. To make
a cluster healthy, one sort of have to make sure that the request queue is
never full and no request will be sitting in the request queue for long. If
one does that, setting the controller listener may not be necessary. On the
flip side, even if one sets the controller listener, but the request queue
and the request time for the data part are still high, the cluster may
still not be healthy. Given that we have already started the 2.1 release
planning, perhaps we can start with not requiring the controller listener.
If this is indeed something that everyone wants to set, we can make it a
required config in a major release.

Thanks,

Jun

On Tue, Sep 11, 2018 at 3:46 PM, Lucas Wang  wrote:

> @Jun Rao 
>
> I made the recent config changes after thinking about the default behavior
> for adopting this KIP.
> I think there are basically two options:
> 1. By default, the behavior proposed in this KIP is turned off, and
> operators can turn it
> on by adding the "controller.listener.name" config and entries in the
> "listeners" and "advertised.listeners" list.
> If no "controller.listener.name" is added, it'll be the *same as* the "
> inter.broker.listener.name",
> and the proposed feature is effectively turned off.
> This has been the assumption in the KIP writeup before.
>
> 2. By default, the behavior proposed in this KIP is turned on, and
> operators are forced to
> recognize the proposed change if their "listeners" config is set (this is
> most likely in production environments),
> by allocating a new port for controller connections, and adding a new
> endpoint to the "listeners" config.
> For cases where "listeners" is not set explicitly,
> there needs to be a default value for it that includes the controller
> listener name,
> e.g. "PLAINTEXT://:9092,CONTROLLER://:9091"
>
> I chose to go with option 2 since as author of this KIP,
> I naturally think in the long run, it's worth the effort to adopt this
> feature,
> in order to prevent issues under circumstances listed in the motivation
> section.
>
> 100, following the argument above, I want to enforce the separation
> between controller
> and data plane requests. Hence the "controller.listener.name" should
> never be the same
> as the "inter.broker.listener.name", which is intended for data plane
> requests.
>
> 101, the default value for "listeners" will be
> "PLAINTEXT://:9092,CONTROLLER://:9091",
> making values of "host", and "port" not being used under any config
> settings.
> And the default value for "advertised.listeners" will be derived from
> "listeners",
> making the values of "advertised.host", and "advertised.port" not being
> used any more.
>
> 102, for upgrading, a single broker that has "listeners" and/or
> "advertised.listeners" set,
> must add a new endpoint for the CONTROLLER listener name, or end up using
> the default listeners "PLAINTEXT://:9092,CONTROLLER://:9091".
> During rolling upgrade, in cases of  +  or
>   + 
> we still need to fall back to the current behavior. However after the
> rolling upgrade is done,
> it is guaranteed that the controller plane and data plane are separated,
> given
> the "controller.listener.name" must be different from "
> inter.broker.listener.name".
>
> @Ismael Juma 
> Thanks for pointing that out. I did not know that.
> However my question is if the argument above makes sense, and my code
> change
> causes the configs "host", "port", "advertised.host", "advertised.port" to
> be not used under any circumstance,
> then it's no different from removing them.
> Anyway if there is still a concern about removing them, is there a new
> major version
> now or in the future where I can remove them?
>
> Thanks!
> Lucas
>
> On Mon, Sep 10, 2018 at 1:30 PM Ismael Juma  wrote:
>
>> To be clear, we can only remove configs in major new versions. Otherwise,
>> we can only deprecate them.
>>
>> Ismael
>>
>> On Mon, Sep 10, 2018 at 10:47 AM Jun Rao  wrote:
>>
>> > Hi, Lucas,
>> >
>> > For the network idlePct, your understanding is correct. Currently,
>> > networkIdlePct metric is calculated as the average of (1 - io-ratio) in
>> the
>> > selector of all network threads.
>> >
>> > The metrics part looks good to me in the updated KIP.
>> >
>> > I am not still not quite sure about the configs.
>> >
>> > 100. "Whenever the "controller.listener.name" is set, upon broker
>> startup,
>> > we will validate its value and make sure it's different from the
>> > 

Re: Accessing Topology Builder

2018-09-18 Thread Guozhang Wang
Jorge:

My suggestion was to let your users implement the TracingProcessorSupplier
/ TracingProcessor directly instead of the base-line ProcessorSupplier /
Processor. Would that work for you?


Guozhang


On Tue, Sep 18, 2018 at 8:02 AM, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Thanks Guozhang and John.
>
> @Guozhang:
>
> > I'd suggest to provide a
> > WrapperProcessorSupplier for the users than modifying
> > InternalStreamsTopology: more specifically, you can provide an
> > `abstract WrapperProcessorSupplier
> > implements ProcessorSupplier` and then let users to instantiate this
> class
> > instead of the "bare-metal" interface. WDYT?
>
> Yes, in the gist, I have a class implementing `ProcessorSupplier`:
>
> ```
> public class TracingProcessorSupplier implements ProcessorSupplier V> {
>   final KafkaTracing kafkaTracing;
>   final String name;
>   final ProcessorSupplier delegate;
>public TracingProcessorSupplier(KafkaTracing kafkaTracing,
>   String name, ProcessorSupplier delegate) {
> this.kafkaTracing = kafkaTracing;
> this.name = name;
> this.delegate = delegate;
>   }
>@Override public Processor get() {
> return new TracingProcessor<>(kafkaTracing, name, delegate.get());
>   }
> }
> ```
>
> My challenge is how to wrap Topology Processors created by
> `StreamsBuilder#build` to make this instrumentation easy to adopt by Kafka
> Streams users.
>
> @John:
>
> > The diff you posted only contains the library-side changes, and it's not
> > obvious how you would use this to insert the desired tracing code.
> > Perhaps you could provide a snippet demonstrating how you want to use
> this
> > change to enable tracing?
>
> My first approach was something like this:
>
> ```
> final StreamsBuilder builder = kafkaStreamsTracing.builder();
> ```
>
> Where `KafkaStreamsTracing#builder` looks like this:
>
> ```
>   public StreamsBuilder builder() {
> return new StreamsBuilder(new Topology(new
> TracingInternalTopologyBuilder(kafkaTracing)));
>   }
> ```
>
> Then, once the builder creates a topology, `processors` will be wrapped by
> `TracingProcessorSupplier` described above.
>
> Probably this approach is too naive but works as an initial proof of
> concept.
>
> > Off the top of my head, here are some other approaches you might
> evaluate:
> > * you mentioned interceptors. Perhaps we could create a
> > ProcessorInterceptor interface and add a config to set it.
>
> This sounds very interesting to me. Then we won't need to touch internal
> API's, and just provide some configs. One challenge here is how to define
> the hooks. In consumer/producer, lifecycle is clear, `onConsumer`/`onSend`
> and then `onCommit`/`onAck` methods. For Stream processors, how this will
> look like? Maybe `beforeProcess(context, key, value)` and
> `afterProcess(context, key, value)`.
>
> > * perhaps we could simply build the tracing headers into Streams. Is
> there
> > a benefit to making it customizable?
>
> I don't understand this option completely. Do you mean something like
> KIP-159 (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 159%3A+Introducing+Rich+functions+to+Streams
> )?
> Headers available on StreamsDSL will allow users to create "custom" traces,
> for instance:
>
> ```
> stream.map( (headers, k, v) -> {
>   Span span = kafkaTracing.nextSpan(headers).start();
>   doSomething(k, v);
>   span.finish();
> }
> ```
>
> but it won't be possible to instrument the existing processors exposed by
> DSL only by enabling headers on Streams DSL.
>
> If we can define a way to pass a `ProcessorSupplier` to be used by
> `StreamsBuilder#internalTopology` -not sure if via constructor or some
> other way- would be enough to support this use-case.
>
> > Also, as Matthias said, you would need to create a KIP to propose this
> > change, but of course we can continue this preliminary discussion until
> you
> > feel confident to create the KIP.
>
> Happy to do it once the approach is clearer.
>
> Cheers,
> Jorge.
>
> El lun., 17 sept. 2018 a las 17:09, John Roesler ()
> escribió:
>
> > If I understand the request, it's about tracking the latencies for a
> > specific record, not the aggregated latencies for each processor.
> >
> > Jorge,
> >
> > The diff you posted only contains the library-side changes, and it's not
> > obvious how you would use this to insert the desired tracing code.
> > Perhaps you could provide a snippet demonstrating how you want to use
> this
> > change to enable tracing?
> >
> > Also, as Matthias said, you would need to create a KIP to propose this
> > change, but of course we can continue this preliminary discussion until
> you
> > feel confident to create the KIP.
> >
> > Off the top of my head, here are some other approaches you might
> evaluate:
> > * you mentioned interceptors. Perhaps we could create a
> > ProcessorInterceptor interface and add a config to set it.
> > * perhaps we 

Re: Accessing Topology Builder

2018-09-18 Thread John Roesler
Hi Jorge,

Thanks for the clarifications.

Yes, I'm also not sure what "built-in tracing" would look like, and it may
not be a good idea. FWIW, though, I was not thinking of something like
"rich functions". Rather, I was imagining that Streams could just always
record spans in headers as it processes the data, no need for the
interceptor.
However, it would be pretty complicated if you only wanted to record spans
for certain records, or a certain percentage of records. This is where a
tracing interceptor would look more attractive.

On the actual interceptor, offhand, I suppose two main options are
available:
1. define interceptor interfaces with hooks into the lifecycle, like you
gave. (before/after process might work, but in general, it might be more
like beforeProcess, beforeForward)
2. just let the interceptor implement the same Processor interface, but
additionally have access to the "delegate" processor somehow?

What approach did you take in your TracingProcessor?

Thanks,
-John

On Tue, Sep 18, 2018 at 10:02 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Thanks Guozhang and John.
>
> @Guozhang:
>
> > I'd suggest to provide a
> > WrapperProcessorSupplier for the users than modifying
> > InternalStreamsTopology: more specifically, you can provide an
> > `abstract WrapperProcessorSupplier
> > implements ProcessorSupplier` and then let users to instantiate this
> class
> > instead of the "bare-metal" interface. WDYT?
>
> Yes, in the gist, I have a class implementing `ProcessorSupplier`:
>
> ```
> public class TracingProcessorSupplier implements ProcessorSupplier V> {
>   final KafkaTracing kafkaTracing;
>   final String name;
>   final ProcessorSupplier delegate;
>public TracingProcessorSupplier(KafkaTracing kafkaTracing,
>   String name, ProcessorSupplier delegate) {
> this.kafkaTracing = kafkaTracing;
> this.name = name;
> this.delegate = delegate;
>   }
>@Override public Processor get() {
> return new TracingProcessor<>(kafkaTracing, name, delegate.get());
>   }
> }
> ```
>
> My challenge is how to wrap Topology Processors created by
> `StreamsBuilder#build` to make this instrumentation easy to adopt by Kafka
> Streams users.
>
> @John:
>
> > The diff you posted only contains the library-side changes, and it's not
> > obvious how you would use this to insert the desired tracing code.
> > Perhaps you could provide a snippet demonstrating how you want to use
> this
> > change to enable tracing?
>
> My first approach was something like this:
>
> ```
> final StreamsBuilder builder = kafkaStreamsTracing.builder();
> ```
>
> Where `KafkaStreamsTracing#builder` looks like this:
>
> ```
>   public StreamsBuilder builder() {
> return new StreamsBuilder(new Topology(new
> TracingInternalTopologyBuilder(kafkaTracing)));
>   }
> ```
>
> Then, once the builder creates a topology, `processors` will be wrapped by
> `TracingProcessorSupplier` described above.
>
> Probably this approach is too naive but works as an initial proof of
> concept.
>
> > Off the top of my head, here are some other approaches you might
> evaluate:
> > * you mentioned interceptors. Perhaps we could create a
> > ProcessorInterceptor interface and add a config to set it.
>
> This sounds very interesting to me. Then we won't need to touch internal
> API's, and just provide some configs. One challenge here is how to define
> the hooks. In consumer/producer, lifecycle is clear, `onConsumer`/`onSend`
> and then `onCommit`/`onAck` methods. For Stream processors, how this will
> look like? Maybe `beforeProcess(context, key, value)` and
> `afterProcess(context, key, value)`.
>
> > * perhaps we could simply build the tracing headers into Streams. Is
> there
> > a benefit to making it customizable?
>
> I don't understand this option completely. Do you mean something like
> KIP-159 (
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-159%3A+Introducing+Rich+functions+to+Streams
> )?
> Headers available on StreamsDSL will allow users to create "custom" traces,
> for instance:
>
> ```
> stream.map( (headers, k, v) -> {
>   Span span = kafkaTracing.nextSpan(headers).start();
>   doSomething(k, v);
>   span.finish();
> }
> ```
>
> but it won't be possible to instrument the existing processors exposed by
> DSL only by enabling headers on Streams DSL.
>
> If we can define a way to pass a `ProcessorSupplier` to be used by
> `StreamsBuilder#internalTopology` -not sure if via constructor or some
> other way- would be enough to support this use-case.
>
> > Also, as Matthias said, you would need to create a KIP to propose this
> > change, but of course we can continue this preliminary discussion until
> you
> > feel confident to create the KIP.
>
> Happy to do it once the approach is clearer.
>
> Cheers,
> Jorge.
>
> El lun., 17 sept. 2018 a las 17:09, John Roesler ()
> escribió:
>
> > If I understand the request, it's about tracking the 

Semi Proposal: Unclean Leadership Elections and Out of Sync Brokers

2018-09-18 Thread Christopher Vollick
Hey! I’m seeing behaviour I didn’t expect, and I’d like some expert opinions on 
it.

(TL;DR min.insync.replicas=2 and acks=all isn’t enough to tolerate losing a 
leader transparently and safely, but maybe it could be)

Let's say we have a topic with one partition, replication factor 3.
Leader 0, Followers 1,2
I have min.insync.replicas=2, and my producer has acks=all

Ok, so let’s say there’s an incident, and Broker 2 dies (or is unable to 
replicate).
That’s still fine, and the producer keeps going.

Then Broker 1 dies (or is unable to replicate).
Now the ISR list will just be Broker 0, and production will stop because of 
min.insync.replicas and acks=all.

Now, Broker 0 dies.
The partition is now offline, which is expected.

Let’s say Broker 1 comes back before Broker 0 (or maybe Broker 0 is never 
coming back).
There’s no reason now why Broker 1 can’t assume leadership. Due to acks=all 
nothing was produced while it was gone, but it currently can’t know that. When 
it comes back, it sees it’s not in the list of ISRs and assumes it may have 
missed something.

The only way to get out of this situation is to enable unclean leadership 
elections, and in this case if Broker 1 assumed leadership that’d be fine, but 
if Broker 2 assumed leadership then there’d be consumer offset reset, since 
Broker 1 has messages Broker 2 doesn’t.

So that’s the setup.
So, really all that min.insync.replicas=2 gets me is less availability earlier, 
and if Broker 0 comes back, then consistency.
But if it doesn’t, I might lose consistency anyway.
Let me know if I’m wrong about something in there.

Proposal:
What we need to track is whether a broker is missing messages. We obviously 
don’t want to spam ZK each time we get a message, and we don’t want to change 
what ISR means because so many things (including acks=all) rely on that.
So what if we added (for each partition) a list of “out of sync” brokers.
Basically when a broker gets removed from the ISR list, the first time (and 
only the first time) a produce is accepted they’re added to the OSRs.

That way if Broker 1 comes back it can see that Broker 0 is the leader, but 
before Broker 0 died it never got any messages Broker 1 doesn’t already have.
So Broker 1 is now safe to take leadership without having to enable unclean 
leadership, and start syncing to Broker 2, and once that’s in sync with what it 
missed, we’re now within min.insync.replicas and things become available again.

There are still some edges in here.
Like maybe instead of having things be implicit (“I’m not in this list, so I 
must be ok”), we could have an explicit ISR and an UpToDateR list that we only take brokers out of 
when we receive a message while they’re not in the ISR.

Or like what if Broker 1 suffered corruption or data-loss. It probably 
shouldn’t be able to just start up, so maybe instead of a broker list there’s a 
kind of highwater-mark we set to the current offset when we remove them from 
ISR, and then remove when we get a new message.
So they can see on startup “I’m in sync if I have THIS message, which I do, so 
I’m the leader now”

Basically I just wanted to know if this is ridiculous, or if I’m 
misunderstanding, or if I should make a KIP or what happens here.
Because right now it feels like setting min.insync.replicas=2 doesn’t actually 
give me much in a scenario like I outlined above.
With min.insync.replicas=1 producers would think they succeeded, and then the 
messages would be lost if Broker 0 went offline.
But if I need to enable unclean leadership election anyway to recover, then 
those messages might be lost anyway, no?

Thanks for reading!

RE: kafka dev

2018-09-18 Thread Erica Parker
Hi, Are you interested in the Kafka position I have.

Erica Parker | Senior Technical Recruiter
INT Technologies – A Nationally Recognized Veteran Owned Business
C: (720) 483-7059
W: (720)735-4829

Stay connected with us:
Ask your INT representative about our H1B transfer program.
Please visit www.inttechnologies.com to apply to jobs in your area.

-Original Message-
From: rudy_steiner [mailto:rudy_stei...@163.com]
Sent: Tuesday, September 18, 2018 3:20 AM
To: dev@kafka.apache.org
Subject: kafka dev

hi,
   kafka


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread Konstantin Chukhlomin
+1 (non binding)

> On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com wrote:
> 
> 
> 
> On 2018/09/18 14:59:09, Ron Dagostino  wrote: 
>> Hi everyone.  I would like to start the vote for KIP-368:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
>> 
>> This KIP proposes adding the ability for SASL clients (and brokers when a
>> SASL mechanism is the inter-broker protocol) to re-authenticate their
>> connections to brokers and for brokers to close connections that continue
>> to use expired sessions.
>> 
>> Ron
>> 
> 
> +1 (non binding)



Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread michael . kaminski



On 2018/09/18 14:59:09, Ron Dagostino  wrote: 
> Hi everyone.  I would like to start the vote for KIP-368:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
> 
> This KIP proposes adding the ability for SASL clients (and brokers when a
> SASL mechanism is the inter-broker protocol) to re-authenticate their
> connections to brokers and for brokers to close connections that continue
> to use expired sessions.
> 
> Ron
> 

+1 (non binding) Thanks for taking this on, Ron.


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread michael . kaminski



On 2018/09/18 14:59:09, Ron Dagostino  wrote: 
> Hi everyone.  I would like to start the vote for KIP-368:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
> 
> This KIP proposes adding the ability for SASL clients (and brokers when a
> SASL mechanism is the inter-broker protocol) to re-authenticate their
> connections to brokers and for brokers to close connections that continue
> to use expired sessions.
> 
> Ron
> 

+1 (non binding)


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread Boerge Svingen


+1 (non-binding)

We would very much like to see this merged.


Thanks,
Boerge.



> On Sep 18, 2018, at 11:51, Edoardo Comar  wrote:
> 
> +1 (non binding)
> 
> Thanks Ron & Rajini !
> --
> 
> Edoardo Comar
> 
> IBM Message Hub
> 
> IBM UK Ltd, Hursley Park, SO21 2JN
> 
> 
> 
> From:   Ron Dagostino 
> To: dev@kafka.apache.org
> Date:   18/09/2018 16:09
> Subject:[VOTE] KIP 368: Allow SASL Connections to Periodically 
> Re-Authenticate
> 
> 
> 
> Hi everyone.  I would like to start the vote for KIP-368:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
> 
> 
> This KIP proposes adding the ability for SASL clients (and brokers when a
> SASL mechanism is the inter-broker protocol) to re-authenticate their
> connections to brokers and for brokers to close connections that continue
> to use expired sessions.
> 
> Ron
> 
> 
> 
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number 
> 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread Edoardo Comar
+1 (non binding)

Thanks Ron & Rajini !
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Ron Dagostino 
To: dev@kafka.apache.org
Date:   18/09/2018 16:09
Subject:[VOTE] KIP 368: Allow SASL Connections to Periodically 
Re-Authenticate



Hi everyone.  I would like to start the vote for KIP-368:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate


This KIP proposes adding the ability for SASL clients (and brokers when a
SASL mechanism is the inter-broker protocol) to re-authenticate their
connections to brokers and for brokers to close connections that continue
to use expired sessions.

Ron



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: [VOTE] KIP-372: Naming Repartition Topics for Joins and Grouping

2018-09-18 Thread Bill Bejeck
All,

In starting work on the PR for KIP-372, the Grouped interface needed some
method renaming to be more consistent with the other configuration classes
(Joined, Produced, etc.).  As such I've updated the Grouped code section of
the KIP.

As these changes address a comment from Matthias on the initial draft of
the KIP and don't change any of the existing behavior already outlined,  I
don't think a re-vote is required.

Thanks,
Bill

On Tue, Sep 18, 2018 at 10:09 AM John Roesler  wrote:

> +1 (non-binding)
>
> Thanks!
>
> On Mon, Sep 17, 2018 at 7:29 PM Dongjin Lee  wrote:
>
> > Great improvements. +1. (Non-binding)
> >
> > On Tue, Sep 18, 2018 at 5:14 AM Matthias J. Sax 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > -Matthias
> > >
> > > On 9/17/18 1:12 PM, Guozhang Wang wrote:
> > > > +1 from me, thanks Bill !
> > > >
> > > > On Mon, Sep 17, 2018 at 12:43 PM, Bill Bejeck 
> > wrote:
> > > >
> > > >> All,
> > > >>
> > > >> I'd like to start the voting process for KIP-372.  Here's the link
> to
> > > the
> > > >> updated proposal
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> 372%3A+Naming+Repartition+Topics+for+Joins+and+Grouping
> > > >>
> > > >> I'll start with my own +1.
> > > >>
> > > >> Thanks,
> > > >> Bill
> > > >>
> > > >
> > > >
> > > >
> > >
> > >
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> > *github:  github.com/dongjinleekr
> > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > slideshare:
> > www.slideshare.net/dongjinleekr
> > *
> >
>


Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-18 Thread Manikumar
Hi Rajini,

I don't have strong reasons for rejecting Option 2. I just felt Option 1 is
sufficient for
the common use-cases (extracting a single field, like CN).

We are open to going with Option 2 for a more flexible mapping mechanism.
Let us know your preference.

Thanks,


On Tue, Sep 18, 2018 at 8:05 PM Rajini Sivaram 
wrote:

> Hi Manikumar,
>
> It wasn't entirely clear to me why Option 2 was rejected.
>
> On Tue, Sep 18, 2018 at 7:47 AM, Manikumar 
> wrote:
>
> > Hi All,
> >
> > We would like to go with Option 1, which adds a new configuration
> parameter
> > pair of the form:
> > ssl.principal.mapping.pattern, ssl.principal.mapping.value. This will
> > fulfill the requirement for most of the common use cases.
> >
> > We would like to include the KIP in the upcoming release. If there are no
> > concerns, we would like to start a vote on this KIP.
> >
> > Thanks,
> >
> > On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah 
> > wrote:
> >
> > > Definitely a helpful change. +1 for Option 2.
> > >
> > > On 9/14/18, 10:52 AM, "Manikumar"  wrote:
> > >
> > > Hi Eno,
> > >
> > > Thanks for the review.
> > >
> > > Most often users want to extract one of the field (eg. CN). CN is
> the
> > > commonly used field.
> > > For this simple change, users need to build and maintain the custom
> > > principal builder class
> > > and also package and deploy to the all brokers. Having configurable
> > > rules
> > > will be useful.
> > >
> > > Proposed mapping rules works on string representation of the X.500
> > > distinguished name(RFC2253 format) [1].
> > > Mapping rules can use the attribute types keywords defined in RFC
> > 2253
> > > (CN,
> > > L, ST, O, OU, C, STREET, DC, UID).
> > >
> > > Any additional/custom attribute types are emitted as OIDs. To emit
> > > additional attribute type keys,
> > > we need to have OID -> attribute type keyword String mapping.[2]
> > >
> > > For example, String representation of X500Principal("CN=Duke,
> > > OU=JavaSoft,
> > > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > > will be "CN=Duke,OU=JavaSoft,O=Sun
> > > Microsystems,C=US,1.2.840.113549.1.9.1=#
> > 160d7465737440746573742e636f6d"
> > >
> > > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > > "emailAddress"),
> > > the string will be
> > > "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> > > t...@test.com"
> > >
> > > Since we are not passing this mapping, we cannot extract using
> > > additional
> > > attribute type keyword string.
> > > If the user wants to extract additional attribute keys, we need to
> > write
> > > custom principal builder class.
> > >
> > > Hope the above helps. Updated the KIP.
> > >
> > > [1]
> > >
> > > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > X500Principal.html#getName(java.lang.String)
> > >
> > > [2]
> > >
> > > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> > X500Principal.html#getName(java.lang.String,%20java.util.Map)
> > >
> > > Thanks
> > >
> > > On Fri, Sep 14, 2018 at 7:44 PM Eno Thereska <
> eno.there...@gmail.com
> > >
> > > wrote:
> > >
> > > > Manikumar, thanks. If I understand the KIP motivation right, this
> > is
> > > > currently already possible, but you want to have an easier way of
> > > doing it,
> > > > right? The motivation would be stronger if we had 2-3 common
> cases
> > > (from
> > > > experience) where the suggested pattern would solve them, and
> > > perhaps 1-2
> > > > cases where the pattern would not be adequate and we'd have to
> fall
> > > back to
> > > > the existing builder class.
> > > >
> > > > Thanks
> > > > Eno
> > > >
> > > > On Fri, Sep 14, 2018 at 12:36 PM, Manikumar <
> > > manikumar.re...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > We'd appreciate any thoughts / comments on the proposed options
> > for
> > > > > customizing SSL principal names.
> > > > > We are happy to discuss any alternative approaches/suggestions.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > On Sat, Sep 8, 2018 at 12:45 AM Manikumar <
> > > manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I have created a KIP that proposes couple of options for
> > building
> > > > custom
> > > > > > SSL principal names.
> > > > > >
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
> > > > > >
> > > > > > Please take a look.
> > > > > >
> > > > > > Thanks,
> > > > > > Manikumar
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> >
>


Re: Accessing Topology Builder

2018-09-18 Thread Jorge Esteban Quilcate Otoya
Thanks Guozhang and John.

@Guozhang:

> I'd suggest providing a
> WrapperProcessorSupplier for the users rather than modifying
> InternalStreamsTopology: more specifically, you can provide an
> `abstract WrapperProcessorSupplier
> implements ProcessorSupplier` and then let users instantiate this class
> instead of the "bare-metal" interface. WDYT?

Yes, in the gist, I have a class implementing `ProcessorSupplier`:

```
public class TracingProcessorSupplier<K, V> implements ProcessorSupplier<K, V> {
  final KafkaTracing kafkaTracing;
  final String name;
  final ProcessorSupplier<K, V> delegate;

  public TracingProcessorSupplier(KafkaTracing kafkaTracing,
      String name, ProcessorSupplier<K, V> delegate) {
    this.kafkaTracing = kafkaTracing;
    this.name = name;
    this.delegate = delegate;
  }

  @Override public Processor<K, V> get() {
    // Wrap the processor built by the delegate so each record is traced.
    return new TracingProcessor<>(kafkaTracing, name, delegate.get());
  }
}
```

My challenge is how to wrap Topology Processors created by
`StreamsBuilder#build` to make this instrumentation easy to adopt by Kafka
Streams users.

@John:

> The diff you posted only contains the library-side changes, and it's not
> obvious how you would use this to insert the desired tracing code.
> Perhaps you could provide a snippet demonstrating how you want to use this
> change to enable tracing?

My first approach was something like this:

```
final StreamsBuilder builder = kafkaStreamsTracing.builder();
```

Where `KafkaStreamsTracing#builder` looks like this:

```
public StreamsBuilder builder() {
  return new StreamsBuilder(
      new Topology(new TracingInternalTopologyBuilder(kafkaTracing)));
}
```

Then, once the builder creates a topology, `processors` will be wrapped by
`TracingProcessorSupplier` described above.

Probably this approach is too naive but works as an initial proof of
concept.

> Off the top of my head, here are some other approaches you might evaluate:
> * you mentioned interceptors. Perhaps we could create a
> ProcessorInterceptor interface and add a config to set it.

This sounds very interesting to me. Then we won't need to touch internal
APIs, and can just provide some configs. One challenge here is how to define
the hooks. In consumer/producer interceptors the lifecycle is clear:
`onConsume`/`onSend` and then `onCommit`/`onAcknowledgement` methods. For
stream processors, what would this look like? Maybe
`beforeProcess(context, key, value)` and `afterProcess(context, key, value)`.
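
A rough sketch of what such an interceptor interface could look like (the
names are only placeholders to make the idea above concrete, not an existing
Kafka API):

```
import org.apache.kafka.streams.processor.ProcessorContext;

// Hypothetical interceptor invoked around each Processor#process call.
public interface ProcessorInterceptor<K, V> {
  void beforeProcess(ProcessorContext context, K key, V value);
  void afterProcess(ProcessorContext context, K key, V value);
}
```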

> * perhaps we could simply build the tracing headers into Streams. Is there
> a benefit to making it customizable?

I don't understand this option completely. Do you mean something like
KIP-159 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-159%3A+Introducing+Rich+functions+to+Streams
)?
Headers available on StreamsDSL will allow users to create "custom" traces,
for instance:

```
stream.map( (headers, k, v) -> {
  Span span = kafkaTracing.nextSpan(headers).start();
  doSomething(k, v);
  span.finish();
});
```

but it won't be possible to instrument the existing processors exposed by
DSL only by enabling headers on Streams DSL.

If we can define a way to pass a `ProcessorSupplier` to be used by
`StreamsBuilder#internalTopology` (not sure if via constructor or some
other way), that would be enough to support this use-case.

> Also, as Matthias said, you would need to create a KIP to propose this
> change, but of course we can continue this preliminary discussion until
you
> feel confident to create the KIP.

Happy to do it once the approach is clearer.

Cheers,
Jorge.

On Mon, Sep 17, 2018 at 17:09, John Roesler ()
wrote:

> If I understand the request, it's about tracking the latencies for a
> specific record, not the aggregated latencies for each processor.
>
> Jorge,
>
> The diff you posted only contains the library-side changes, and it's not
> obvious how you would use this to insert the desired tracing code.
> Perhaps you could provide a snippet demonstrating how you want to use this
> change to enable tracing?
>
> Also, as Matthias said, you would need to create a KIP to propose this
> change, but of course we can continue this preliminary discussion until you
> feel confident to create the KIP.
>
> Off the top of my head, here are some other approaches you might evaluate:
> * you mentioned interceptors. Perhaps we could create a
> ProcessorInterceptor interface and add a config to set it.
> * perhaps we could simply build the tracing headers into Streams. Is there
> a benefit to making it customizable?
>
> Thanks for considering this problem!
> -John
>
> On Mon, Sep 17, 2018 at 12:30 AM Guozhang Wang  wrote:
>
> > Hello Jorge,
> >
> > From the TracingProcessor implementation it seems you want to track
> > per-processor processing latency, is that right? If this is the case you
> > can actually use the per-processor metrics which include latency sensors.
> >
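
Those per-processor sensors can be read off a running application; a minimal
sketch, assuming `streams` is a running `KafkaStreams` instance (note the
per-node latency metrics are only recorded at the DEBUG metrics recording
level):

```
// Print the built-in per-processor-node metrics, including process latency.
streams.metrics().forEach((name, metric) -> {
  if ("stream-processor-node-metrics".equals(name.group())) {
    System.out.println(name.name() + " = " + metric.metricValue());
  }
});
```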
> > If you do want to track, for a certain record, what's the latency of
> > processing it, then you'd 

[VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread Ron Dagostino
Hi everyone.  I would like to start the vote for KIP-368:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate

This KIP proposes adding the ability for SASL clients (and brokers when a
SASL mechanism is the inter-broker protocol) to re-authenticate their
connections to brokers and for brokers to close connections that continue
to use expired sessions.

Ron


Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread Ron Dagostino
Hi Rajini.  The KIP is updated as summarized below, and I will start a vote
immediately.

<<
wrote:

> Hi Ron,
>
> Thanks for the updates. The KIP looks good. A few comments and minor points
> below, but feel free to start vote to try and get it into 2.1.0. More
> community feedback will be really useful.
>
> 1) It may be useful to have a metric of expired connections killed by the
> broker. There could be a client implementation that doesn't support
> re-authentications, but happens to use the latest version of
> SaslAuthenticateRequest. Or cases where re-authentication didn't happen on
> time.
>
> 2) For `successful-v0-authentication-{rate,total}`, we probably want
> version as a tag rather than in the name. Not sure if we need four of these
> (rate/total with success/failure). Perhaps just success/total is
> sufficient?
>
> 3) For the session lifetime config, we don't need to require a listener or
> mechanism prefix. In most cases, we would expect a single config on the
> broker-side. For all channel configs, we allow an optional listener prefix,
> so we should do the same here.
>
> 4) The KIP says connections are terminated on requests not related to
> re-authentication (ApiVersionsRequest, SaslHandshakeRequest, and
> SaslAuthenticateRequest). We can skip ApiVersionsRequest for
> re-authentication, so that doesn't need to be in the list.
>
> 5) The KIP says that the new config will not be dynamically updatable. We
> have a very limited set of configs that are dynamically updatable for an
> existing listener. And we don't want to add this config to the list since
> we don't expect this value to change frequently. But we allow new listeners
> to be added dynamically and all configs for the listener can be added
> dynamically (with the listener prefix). I think we want to allow that for
> this config (i.e. add a new OAuth listener with re-authentication enabled).
> We should mention this in the KIP, though in terms of implementation, I
> would leave that for a separate JIRA (it doesn't need to be implemented at
> the same time).
>
>
>
> On Tue, Sep 18, 2018 at 3:06 AM, Ron Dagostino  wrote:
>
> > Hi again, Rajini.  Would we ever want the max session time to be
> different
> > across different SASL mechanisms?  I'm wondering, now that we are
> > supporting all SASL mechanisms via this KIP, if we still need to prefix
> > this config with the "[listener].[mechanism]." prefix.  I've kept the
> > prefix in the KIP for now, but it would be easier to just set it once for
> > all mechanisms, and I don't see that as being a problem.  Let me know
> what
> > you think.
> >
> > Ron
> >
> > On Mon, Sep 17, 2018 at 9:51 PM Ron Dagostino  wrote:
> >
> > > Hi Rajini.  The KIP is updated.  Aside from a once-over to make sure it
> > is
> > > all accurate, I think we need to confirm the metrics.  The decision to
> > not
> > > reject authentications that use tokens with too-long a lifetime allowed
> > the
> > > metrics to be simpler.  I decided that in addition to tracking these
> > > metrics on the broker:
> > >
> > > failed-reauthentication-{rate,total} and
> > > successful-reauthentication-{rate,total}
> > >
> > > we simply need one more set of broker metrics to track the subset of
> > > clients that are not upgraded to v2.1.0 and are still using a
> V0
> > > SaslAuthenticateRequest:
> > >
> > > failed-v0-authentication-{rate,total} and
> > > successful-v0-authentication-{rate,total}
> > >
> > > See the Migration section of the KIP for details of how this would be
> > used.
> > >
> > > I wonder if we need a broker metric documenting the number of "expired"
> > > sessions killed by the broker since it would be the same as
> > > successful-v0-authentication-total. I've eliminated that from the KIP
> > for
> > > now.  Thoughts?
> > >
> > > There is also a client-side metric for re-authentication latency
> tracking
> > > (still unnamed -- do you have a preference?)
> > >
> > > I think we're close to being able to put this KIP up for a vote.
> > >
> > > Ron
> > >
> > >
> > > On Mon, Sep 17, 2018 at 2:45 PM Ron Dagostino 
> wrote:
> > >
> > >> << in
> > >> the KIP
> > >> Right, that was a concern I had mentioned in a follow-up email.  Agree
> > it
> > >> should go alongside errors.
> > >>
> > >> << > have
> > >> << credential
> > >> << not
> > >> Ah, ok, I was not aware that this case existed.  Agreed, for
> > consistency,
> > >> server will always send it back to the client.
> > >>
> > >> << property
> > >> for lifetime]
> > >> Since we now agree that the server will send it back, yes, there is no
> > >> need for this.
> > >>
> > >> << > >> [sasl.login.refresh.reauthenticate.enable]
> > >> I think this might have been more useful when we weren't necessarily
> > >> going to support all SASL mechanisms and/or when the broker was not
> > going
> > >> to advertise the fact that it supported re-authentication.  You are
> > >> correct, now that we support it for all SASL mechanisms and we are
> > bumping
> > >> an API 

Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Edoardo Comar
Yes I should, shouldn't I? :-)
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Rajini Sivaram 
To: dev 
Date:   18/09/2018 15:33
Subject:Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all 
DNS resolved IP addresses



Hi Edo,

The KIP looks good, perhaps you can start vote on this to get it in time
for 2.1.0?

Regards,

Rajini

On Tue, Sep 18, 2018 at 1:58 PM, Eno Thereska 
wrote:

> Got it, thanks!
>
> Eno
>
> On Tue, Sep 18, 2018 at 1:41 PM, Edoardo Comar  
wrote:
>
> > Hi Eno,
> >
> > we implement network separation and the machines fronting the brokers
> > (call them LBs for simplicity) will route the connection to the correct
> > broker based on the TLS SNI. We register in the DNS multiple A records
> > (the IPs of all LBs) for each of the brokers' hostnames.
> > As long as all the brokers are up, the cluster is fully functioning even
> > if just one of the LBs is up.
> >
> > HTH,
> > Edo
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   Eno Thereska 
> > To: dev@kafka.apache.org
> > Date:   18/09/2018 10:24
> > Subject:Re: [DISCUSS] KIP-302 - Enable Kafka clients to use 
all
> > DNS resolved IP addresses
> >
> >
> >
> > Hi folks,
> >
> > Could you expand the motivation a bit? When would it make sense to use an
> > LB in front of Kafka brokers? A client needs to access each broker
> > directly
> > to consume the data in that broker and cannot be redirected to another
> > broker. What exact scenario are you seeing that needs this KIP?
> >
> > Thanks
> > Eno
> >
> > On Tue, Sep 18, 2018 at 9:52 AM, Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > Bumping this thread
> > >
> > > It's a relatively small change that would help cloud environments with
> > > load balancers fronting brokers
> > > On Tue, Sep 11, 2018 at 3:01 PM Edoardo Comar 
> > wrote:
> > > >
> > > > Hi all,
> > > > after some time we updated KIP-302 to reuse the config key 
introduced
> > by
> > > > KIP-235, with a different value to avoid conflicts between the 
two.
> > > > Also clarified the use of multiple IPs only of the same type
> > (IPv4/IPv6).
> > > >
> > > > We'd appreciate a further review and discussion.
> > > > Thanks!
> > > > Edo
> > > >
> > > >
> > > > On Fri, 25 May 2018 at 12:36, Edoardo Comar 
> > wrote:
> > > >
> > > > > Hi Jonathan,
> > > > > I'm ok with an expandable enum for the config that could be
> extended
> > > > > in the future.
> > > > > It is marginally better than multiple potentially conflicting
> config
> > > > > entries.
> > > > >
> > > > > Though as I think the change for KIP-302 is independent from
> KIP-235
> > > > > and they do not conflict,
> > > > > when we'll look back at it post 2.0 we may see if it is more
> > valuable
> > > > > to shoehorn its config in an expanded enum or not
> > > > >
> > > > > thanks,
> > > > > Edo
> > > > >
> > > > > On 24 May 2018 at 16:50, Skrzypek, Jonathan
> > 
> > > > > wrote:
> > > > > > Hi,
> > > > > >
> > > > > > As Rajini suggested in the thread for KIP 235 (attached), we
> could
> > > try
> > > > > to have an enum that would drive what does the client
> > expands/resolves.
> > > > > >
> > > > > > I suggest a client config called client.dns.lookup with 
different
> > > values
> > > > > possible :
> > > > > >
> > > > > > - no : no dns lookup
> > > > > > - hostnames.only : perform dns lookup on both 
bootstrap.servers
> > and
> > > > > advertised listeners
> > > > > > - canonical.hostnames.only : perform dns lookup on both
> > > > > bootstrap.servers and advertised listeners
> > > > > > - bootstrap.hostnames.only : perform dns lookup on
> > bootstrap.servers
> > > > > list and expand it
> > > > > > - bootstrap.canonical.hostnames.only : perform dns lookup on
> > > > > bootstrap.servers list and expand it
> > > > > > - advertised.listeners.hostnames.only : perform dns lookup on
> > > advertised
> > > > > listeners
> > > > > > - advertised.listeners.canonical.hostnames.only : perform dns
> > > lookup on
> > > > > advertised listeners
> > > > > >
> > > > > > I realize this is a bit heavy but this gives users the ability 
to
> > > pick
> > > > > and choose.
> > > > > > I didn't include a setting to mix hostnames and canonical
> > hostnames
> > > as
> > > > > I'm not sure there would be a valid use case.
> > > > > >
> > > > > > Alternatively, to have less possible values, we could have 2
> > > parameters :
> > > > > >
> > > > > > - dns.lookup.type with values : hostname / canonical.host.name
> > > > > > - dns.lookup.behaviour : bootstrap.servers, 
advertised.listeners,
> > > both
> > > > > >
> > > > > > Thoughts ?
> > > > > >
> > > > > > Jonathan Skrzypek
> > > > > >
> > > > > >
> > > > > > -Original Message-
> > > > > > From: Edoardo Comar [mailto:edoco...@gmail.com]
> > > > > > Sent: 17 May 2018 23:50
> > > > > > To: 

[VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Edoardo Comar
Hi All,

I'd like to start the vote on KIP-302:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses

We'd love to get this in 2.1.0 
KIP freeze is just a few days away ... please cast your votes :-) :-)

Thanks!!
Edo

--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-18 Thread Rajini Sivaram
Hi Manikumar,

It wasn't entirely clear to me why Option 2 was rejected.

On Tue, Sep 18, 2018 at 7:47 AM, Manikumar 
wrote:

> Hi All,
>
> We would like to go with Option 1, which adds a new configuration parameter
> pair of the form:
> ssl.principal.mapping.pattern, ssl.principal.mapping.value. This will
> fulfill the requirement for most of the common use cases.
>
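
To make Option 1 concrete, a hypothetical configuration could look like the
following; the pattern/value semantics shown here are only an illustration
(the exact syntax is defined in the KIP itself):

```
# Illustrative only: extract the CN from the full distinguished name.
ssl.principal.mapping.pattern=^CN=(.*?),.*$
ssl.principal.mapping.value=$1
```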
> We would like to include the KIP in the upcoming release. If there are no
> concerns, we would like to start a vote on this KIP.
>
> Thanks,
>
> On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah 
> wrote:
>
> > Definitely a helpful change. +1 for Option 2.
> >
> > On 9/14/18, 10:52 AM, "Manikumar"  wrote:
> >
> > Hi Eno,
> >
> > Thanks for the review.
> >
> > Most often users want to extract one of the fields (e.g. CN). CN is the
> > most commonly used field.
> > For this simple change, users need to build and maintain a custom
> > principal builder class
> > and also package and deploy it to all the brokers. Having configurable
> > rules
> > will be useful.
> >
> > The proposed mapping rules work on the string representation of the X.500
> > distinguished name (RFC 2253 format) [1].
> > Mapping rules can use the attribute types keywords defined in RFC
> 2253
> > (CN,
> > L, ST, O, OU, C, STREET, DC, UID).
> >
> > Any additional/custom attribute types are emitted as OIDs. To emit
> > additional attribute type keys,
> > we need to have OID -> attribute type keyword String mapping.[2]
> >
> > For example, String representation of X500Principal("CN=Duke,
> > OU=JavaSoft,
> > O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> > will be "CN=Duke,OU=JavaSoft,O=Sun
> > Microsystems,C=US,1.2.840.113549.1.9.1=#
> 160d7465737440746573742e636f6d"
> >
> > If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> > "emailAddress"),
> > the string will be
> > "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> > t...@test.com"
> >
> > Since we are not passing this mapping, we cannot extract fields using
> > additional
> > attribute type keyword strings.
> > If the user wants to extract additional attribute keys, we need to
> > write
> > a custom principal builder class.
> >
> > Hope the above helps. Updated the KIP.
> >
> > [1]
> >
> > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> X500Principal.html#getName(java.lang.String)
> >
> > [2]
> >
> > https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/
> X500Principal.html#getName(java.lang.String,%20java.util.Map)
> >
> > Thanks
> >
> > On Fri, Sep 14, 2018 at 7:44 PM Eno Thereska  >
> > wrote:
> >
> > > Manikumar, thanks. If I understand the KIP motivation right, this
> is
> > > currently already possible, but you want to have an easier way of
> > doing it,
> > > right? The motivation would be stronger if we had 2-3 common cases
> > (from
> > > experience) where the suggested pattern would solve them, and
> > perhaps 1-2
> > > cases where the pattern would not be adequate and we'd have to fall
> > back to
> > > the existing builder class.
> > >
> > > Thanks
> > > Eno
> > >
> > > On Fri, Sep 14, 2018 at 12:36 PM, Manikumar <
> > manikumar.re...@gmail.com>
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > We'd appreciate any thoughts / comments on the proposed options
> for
> > > > customizing SSL principal names.
> > > > We are happy to discuss any alternative approaches/suggestions.
> > > >
> > > > Thanks,
> > > >
> > > > On Sat, Sep 8, 2018 at 12:45 AM Manikumar <
> > manikumar.re...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I have created a KIP that proposes couple of options for
> building
> > > custom
> > > > > SSL principal names.
> > > > >
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
> > > > >
> > > > > Please take a look.
> > > > >
> > > > > Thanks,
> > > > > Manikumar
> > > > >
> > > >
> > >
> >
> >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk10 #500

2018-09-18 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Rajini Sivaram
Hi Edo,

The KIP looks good, perhaps you can start vote on this to get it in time
for 2.1.0?

Regards,

Rajini

On Tue, Sep 18, 2018 at 1:58 PM, Eno Thereska 
wrote:

> Got it, thanks!
>
> Eno
>
> On Tue, Sep 18, 2018 at 1:41 PM, Edoardo Comar  wrote:
>
> > Hi Eno,
> >
> > we implement network separation and the machines fronting the brokers
> > (call them LBs for simplicity) will route the connection to the correct
> > broker based on the TLS SNI. We register in the DNS multiple A records
> > (the IPs of all LBs) for each of the brokers' hostnames.
> > As long as all the brokers are up, the cluster is fully functioning even
> > if just one of the LBs is up.
> >
> > HTH,
> > Edo
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   Eno Thereska 
> > To: dev@kafka.apache.org
> > Date:   18/09/2018 10:24
> > Subject:Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all
> > DNS resolved IP addresses
> >
> >
> >
> > Hi folks,
> >
> > Could you expand the motivation a bit? When would it make sense to use an
> > LB in front of Kafka brokers? A client needs to access each broker
> > directly
> > to consume the data in that broker and cannot be redirected to another
> > broker. What exact scenario are you seeing that needs this KIP?
> >
> > Thanks
> > Eno
> >
> > On Tue, Sep 18, 2018 at 9:52 AM, Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > Bumping this thread
> > >
> > > It's a relatively small change that would help cloud environments with
> > > load balancers fronting brokers
> > > On Tue, Sep 11, 2018 at 3:01 PM Edoardo Comar 
> > wrote:
> > > >
> > > > Hi all,
> > > > after some time we updated KIP-302 to reuse the config key introduced
> > by
> > > > KIP-235, with a different value to avoid conflicts between the two.
> > > > Also clarified the use of multiple IPs only of the same type
> > (IPv4/IPv6).
> > > >
> > > > We'd appreciate a further review and discussion.
> > > > Thanks!
> > > > Edo
> > > >
> > > >
> > > > On Fri, 25 May 2018 at 12:36, Edoardo Comar 
> > wrote:
> > > >
> > > > > Hi Jonathan,
> > > > > I'm ok with an expandable enum for the config that could be
> extended
> > > > > in the future.
> > > > > It is marginally better than multiple potentially conflicting
> config
> > > > > entries.
> > > > >
> > > > > Though as I think the change for KIP-302 is independent from
> KIP-235
> > > > > and they do not conflict,
> > > > > when we'll look back at it post 2.0 we may see if it is more
> > valuable
> > > > > to shoehorn its config in an expanded enum or not
> > > > >
> > > > > thanks,
> > > > > Edo
> > > > >
> > > > > On 24 May 2018 at 16:50, Skrzypek, Jonathan
> > 
> > > > > wrote:
> > > > > > Hi,
> > > > > >
> > > > > > As Rajini suggested in the thread for KIP 235 (attached), we
> could
> > > try
> > > > > to have an enum that would drive what does the client
> > expands/resolves.
> > > > > >
> > > > > > I suggest a client config called client.dns.lookup with different
> > > values
> > > > > possible :
> > > > > >
> > > > > > - no : no dns lookup
> > > > > > - hostnames.only : perform dns lookup on both bootstrap.servers
> > and
> > > > > advertised listeners
> > > > > > - canonical.hostnames.only : perform dns lookup on both
> > > > > bootstrap.servers and advertised listeners
> > > > > > - bootstrap.hostnames.only : perform dns lookup on
> > bootstrap.servers
> > > > > list and expand it
> > > > > > - bootstrap.canonical.hostnames.only : perform dns lookup on
> > > > > bootstrap.servers list and expand it
> > > > > > - advertised.listeners.hostnames.only : perform dns lookup on
> > > advertised
> > > > > listeners
> > > > > > - advertised.listeners.canonical.hostnames.only : perform dns
> > > lookup on
> > > > > advertised listeners
> > > > > >
> > > > > > I realize this is a bit heavy but this gives users the ability to
> > > pick
> > > > > and choose.
> > > > > > I didn't include a setting to mix hostnames and canonical
> > hostnames
> > > as
> > > > > I'm not sure there would be a valid use case.
> > > > > >
> > > > > > Alternatively, to have less possible values, we could have 2
> > > parameters :
> > > > > >
> > > > > > - dns.lookup.type with values : hostname / canonical.host.name
> > > > > > - dns.lookup.behaviour : bootstrap.servers, advertised.listeners,
> > > both
> > > > > >
> > > > > > Thoughts ?
> > > > > >
> > > > > > Jonathan Skrzypek
> > > > > >
> > > > > >
> > > > > > -Original Message-
> > > > > > From: Edoardo Comar [mailto:edoco...@gmail.com]
> > > > > > Sent: 17 May 2018 23:50
> > > > > > To: dev@kafka.apache.org
> > > > > > Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all
> > DNS
> > > > > resolved IP addresses
> > > > > >
> > > > > > Hi Jonathan,
> > > > > >
> > > > > >> A solution might be to expose to users the choice of using
> > hostname
> > > or
> > > > > canonical host name on both sides.
> > > > 

Re: [VOTE] KIP-372: Naming Repartition Topics for Joins and Grouping

2018-09-18 Thread John Roesler
+1 (non-binding)

Thanks!

On Mon, Sep 17, 2018 at 7:29 PM Dongjin Lee  wrote:

> Great improvements. +1. (Non-binding)
>
> On Tue, Sep 18, 2018 at 5:14 AM Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> > -Matthias
> >
> > On 9/17/18 1:12 PM, Guozhang Wang wrote:
> > > +1 from me, thanks Bill !
> > >
> > > On Mon, Sep 17, 2018 at 12:43 PM, Bill Bejeck 
> wrote:
> > >
> > >> All,
> > >>
> > >> I'd like to start the voting process for KIP-372.  Here's the link to
> > the
> > >> updated proposal
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 372%3A+Naming+Repartition+Topics+for+Joins+and+Grouping
> > >>
> > >> I'll start with my own +1.
> > >>
> > >> Thanks,
> > >> Bill
> > >>
> > >
> > >
> > >
> >
> >
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
> *github:  github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> slideshare:
> www.slideshare.net/dongjinleekr
> *
>


Build failed in Jenkins: kafka-trunk-jdk8 #2973

2018-09-18 Thread Apache Jenkins Server
See 


Changes:

[lindong28] KAFKA-7332; Update CORRUPT_MESSAGE exception message description

[rajinisivaram] KAFKA-7388 equal sign in property value for password (#5630)

--
[...truncated 2.68 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED


Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Eno Thereska
Got it, thanks!

Eno

On Tue, Sep 18, 2018 at 1:41 PM, Edoardo Comar  wrote:

> Hi Eno,
>
> we implement network separation and the machines fronting the brokers
> (call them LBs for simplicity) will route the connection to the correct
> broker based on the TLS SNI. We register in the DNS multiple A records
> (the IPs of all LBs) for each of the brokers' hostnames.
> As long as all the brokers are up, the cluster is fully functioning even
> if just one of the LBs is up.
>
> HTH,
> Edo
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
> From:   Eno Thereska 
> To: dev@kafka.apache.org
> Date:   18/09/2018 10:24
> Subject:Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all
> DNS resolved IP addresses
>
>
>
> Hi folks,
>
> Could you expand the motivation a bit? When would it make sense to use an
> LB in front of Kafka brokers? A client needs to access each broker
> directly
> to consume the data in that broker and cannot be redirected to another
> broker. What exact scenario are you seeing that needs this KIP?
>
> Thanks
> Eno
>
> On Tue, Sep 18, 2018 at 9:52 AM, Mickael Maison 
> wrote:
>
> > Bumping this thread
> >
> > It's a relatively small change that would help cloud environments with
> > load balancers fronting brokers
> > On Tue, Sep 11, 2018 at 3:01 PM Edoardo Comar 
> wrote:
> > >
> > > Hi all,
> > > after some time we updated KIP-302 to reuse the config key introduced
> by
> > > KIP-235, with a different value to avoid conflicts between the two.
> > > Also clarified the use of multiple IPs only of the same type
> (IPv4/IPv6).
> > >
> > > We'd appreciate a further review and discussion.
> > > Thanks!
> > > Edo
> > >
> > >
> > > On Fri, 25 May 2018 at 12:36, Edoardo Comar 
> wrote:
> > >
> > > > Hi Jonathan,
> > > > I'm ok with an expandable enum for the config that could be extended
> > > > in the future.
> > > > It is marginally better than multiple potentially conflicting config
> > > > entries.
> > > >
> > > > Though as I think the change for KIP-302 is independent from KIP-235
> > > > and they do not conflict,
> > > > when we'll look back at it post 2.0 we may see if it is more
> valuable
> > > > to shoehorn its config in an expanded enum or not
> > > >
> > > > thanks,
> > > > Edo
> > > >
> > > > On 24 May 2018 at 16:50, Skrzypek, Jonathan
> 
> > > > wrote:
> > > > > Hi,
> > > > >
> > > > > As Rajini suggested in the thread for KIP 235 (attached), we could
> > try
> > > > to have an enum that would drive what does the client
> expands/resolves.
> > > > >
> > > > > I suggest a client config called client.dns.lookup with different
> > values
> > > > possible :
> > > > >
> > > > > - no : no dns lookup
> > > > > - hostnames.only : perform dns lookup on both bootstrap.servers
> and
> > > > advertised listeners
> > > > > - canonical.hostnames.only : perform dns lookup on both
> > > > bootstrap.servers and advertised listeners
> > > > > - bootstrap.hostnames.only : perform dns lookup on
> bootstrap.servers
> > > > list and expand it
> > > > > - bootstrap.canonical.hostnames.only : perform dns lookup on
> > > > bootstrap.servers list and expand it
> > > > > - advertised.listeners.hostnames.only : perform dns lookup on
> > advertised
> > > > listeners
> > > > > - advertised.listeners.canonical.hostnames.only : perform dns
> > lookup on
> > > > advertised listeners
> > > > >
> > > > > I realize this is a bit heavy but this gives users the ability to
> > pick
> > > > and choose.
> > > > > I didn't include a setting to mix hostnames and canonical
> hostnames
> > as
> > > > I'm not sure there would be a valid use case.
> > > > >
> > > > > Alternatively, to have less possible values, we could have 2
> > parameters :
> > > > >
> > > > > - dns.lookup.type with values : hostname / canonical.host.name
> > > > > - dns.lookup.behaviour : bootstrap.servers, advertised.listeners,
> > both
> > > > >
> > > > > Thoughts ?
> > > > >
> > > > > Jonathan Skrzypek
> > > > >
> > > > >
> > > > > -Original Message-
> > > > > From: Edoardo Comar [mailto:edoco...@gmail.com]
> > > > > Sent: 17 May 2018 23:50
> > > > > To: dev@kafka.apache.org
> > > > > Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all
> DNS
> > > > resolved IP addresses
> > > > >
> > > > > Hi Jonathan,
> > > > >
> > > > >> A solution might be to expose to users the choice of using
> hostname
> > or
> > > > canonical host name on both sides.
> > > > >> Say having one setting that collapses functionalities from both
> KIPs
> > > > (bootstrap expansion + advertised lookup)
> > > > >> and an additional parameter that defines how the resolution is
> > > > performed, using getCanonicalHostName() or not.
> > > > >
> > > > > thanks sounds to me *less* simple than independent config options,
> > sorry.
> > > > >
> > > > > I would like to say once again that by itself  KIP-302 only speeds
> up
> > > > > the client behavior that can happen anyway when 

Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Edoardo Comar
Hi Eno,

we implement network separation and the machines fronting the brokers 
(call them LBs for simplicity) will route the connection to the correct 
broker based on the TLS SNI. We register in the DNS multiple A records 
(the IPs of all LBs) for each of the brokers' hostnames.
As long as all the brokers are up, the cluster is fully functioning even 
if just one of the LBs is up.
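
The client-side building block for this is simply resolving every A record
registered for a broker hostname; a minimal sketch (the hostname below is a
placeholder), where KIP-302 would let the client try each returned address in
turn instead of only the first one:

```
import java.net.InetAddress;

public class ResolveAllExample {
  public static void main(String[] args) throws Exception {
    // With several A records registered (e.g. the LB IPs described above),
    // the lookup returns all of them.
    for (InetAddress address : InetAddress.getAllByName("broker0.example.com")) {
      System.out.println(address.getHostAddress());
    }
  }
}
```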

HTH,
Edo
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Eno Thereska 
To: dev@kafka.apache.org
Date:   18/09/2018 10:24
Subject:Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all 
DNS resolved IP addresses



Hi folks,

Could you expand the motivation a bit? When would it make sense to use an
LB in front of Kafka brokers? A client needs to access each broker 
directly
to consume the data in that broker and cannot be redirected to another
broker. What exact scenario are you seeing that needs this KIP?

Thanks
Eno

On Tue, Sep 18, 2018 at 9:52 AM, Mickael Maison 
wrote:

> Bumping this thread
>
> It's a relatively small change that would help cloud environments with
> load balancers fronting brokers
> On Tue, Sep 11, 2018 at 3:01 PM Edoardo Comar  
wrote:
> >
> > Hi all,
> > after some time we updated KIP-302 to reuse the config key introduced 
by
> > KIP-235, with a different value to avoid conflicts between the two.
> > Also clarified the use of multiple IPs only of the same type 
(IPv4/IPv6).
> >
> > We'd appreciate a further review and discussion.
> > Thanks!
> > Edo
> >
> >
> > On Fri, 25 May 2018 at 12:36, Edoardo Comar  
wrote:
> >
> > > Hi Jonathan,
> > > I'm ok with an expandable enum for the config that could be extended
> > > in the future.
> > > It is marginally better than multiple potentially conflicting config
> > > entries.
> > >
> > > Though as I think the change for KIP-302 is independent from KIP-235
> > > and they do not conflict,
> > > when we'll look back at it post 2.0 we may see if it is more 
valuable
> > > to shoehorn its config in an expanded enum or not
> > >
> > > thanks,
> > > Edo
> > >
> > > On 24 May 2018 at 16:50, Skrzypek, Jonathan 

> > > wrote:
> > > > Hi,
> > > >
> > > > As Rajini suggested in the thread for KIP 235 (attached), we could
> try
> > > to have an enum that would drive what does the client 
expands/resolves.
> > > >
> > > > I suggest a client config called client.dns.lookup with different
> values
> > > possible :
> > > >
> > > > - no : no dns lookup
> > > > - hostnames.only : perform dns lookup on both bootstrap.servers 
and
> > > advertised listeners
> > > > - canonical.hostnames.only : perform dns lookup on both
> > > bootstrap.servers and advertised listeners
> > > > - bootstrap.hostnames.only : perform dns lookup on 
bootstrap.servers
> > > list and expand it
> > > > - bootstrap.canonical.hostnames.only : perform dns lookup on
> > > bootstrap.servers list and expand it
> > > > - advertised.listeners.hostnames.only : perform dns lookup on
> advertised
> > > listeners
> > > > - advertised.listeners.canonical.hostnames.only : perform dns
> lookup on
> > > advertised listeners
> > > >
> > > > I realize this is a bit heavy but this gives users the ability to
> pick
> > > and choose.
> > > > I didn't include a setting to mix hostnames and canonical 
hostnames
> as
> > > I'm not sure there would be a valid use case.
> > > >
> > > > Alternatively, to have less possible values, we could have 2
> parameters :
> > > >
> > > > - dns.lookup.type with values : hostname / canonical.host.name
> > > > - dns.lookup.behaviour : bootstrap.servers, advertised.listeners,
> both
> > > >
> > > > Thoughts ?
> > > >
> > > > Jonathan Skrzypek
> > > >
> > > >
> > > > -Original Message-
> > > > From: Edoardo Comar [mailto:edoco...@gmail.com]
> > > > Sent: 17 May 2018 23:50
> > > > To: dev@kafka.apache.org
> > > > Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all 
DNS
> > > resolved IP addresses
> > > >
> > > > Hi Jonathan,
> > > >
> > > >> A solution might be to expose to users the choice of using 
hostname
> or
> > > canonical host name on both sides.
> > > >> Say having one setting that collapses functionalities from both 
KIPs
> > > (bootstrap expansion + advertised lookup)
> > > >> and an additional parameter that defines how the resolution is
> > > performed, using getCanonicalHostName() or not.
> > > >
> > > > thanks sounds to me *less* simple than independent config options,
> sorry.
> > > >
> > > > I would like to say once again that by itself  KIP-302 only speeds 
up
> > > > the client behavior that can happen anyway when the client 
restarts
> > > > multiple times,
> > > > as every time there is no guarantee that - in presence of multiple 
A
> > > > DNS records - the same IP is returned. Attempting to use additional
> > > > IPs
> > > > if the first fails just makes client recovery faster.
> > > >
> > > > cheers
> > > > Edo
> > > >
> > > > On 17 May 2018 at 12:12, Skrzypek, Jonathan <

Build failed in Jenkins: kafka-trunk-jdk10 #499

2018-09-18 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
remote: Counting objects: 10756, done.

Build failed in Jenkins: kafka-trunk-jdk10 #498

2018-09-18 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
remote: Counting objects: 10756, done.

Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-18 Thread Rajini Sivaram
Hi Ron,

Thanks for the updates. The KIP looks good. A few comments and minor points
below, but feel free to start vote to try and get it into 2.1.0. More
community feedback will be really useful.

1) It may be useful to have a metric of expired connections killed by the
broker. There could be a client implementation that doesn't support
re-authentications, but happens to use the latest version of
SaslAuthenticateRequest. Or cases where re-authentication didn't happen on
time.

2) For `successful-v0-authentication-{rate,total}`, we probably want
version as a tag rather than in the name. Not sure if we need four of these
(rate/total with success/failure). Perhaps just success/total is sufficient?

3) For the session lifetime config, we don't need to require a listener or
mechanism prefix. In most cases, we would expect a single config on the
broker-side. For all channel configs, we allow an optional listener prefix,
so we should do the same here.

4) The KIP says connections are terminated on requests not related to
re-authentication (ApiVersionsRequest, SaslHandshakeRequest, and
SaslAuthenticateRequest). We can skip ApiVersionsRequest for
re-authentication, so that doesn't need to be in the list.

5) The KIP says that the new config will not be dynamically updatable. We
have a very limited set of configs that are dynamically updatable for an
existing listener. And we don't want to add this config to the list since
we don't expect this value to change frequently. But we allow new listeners
to be added dynamically and all configs for the listener can be added
dynamically (with the listener prefix). I think we want to allow that for
this config (i.e. add a new OAuth listener with re-authentication enabled).
We should mention this in the KIP, though in terms of implementation, I
would leave that for a separate JIRA (it doesn't need to be implemented at
the same time).



On Tue, Sep 18, 2018 at 3:06 AM, Ron Dagostino  wrote:

> Hi again, Rajini.  Would we ever want the max session time to be different
> across different SASL mechanisms?  I'm wondering, now that we are
> supporting all SASL mechanisms via this KIP, if we still need to prefix
> this config with the "[listener].[mechanism]." prefix.  I've kept the
> prefix in the KIP for now, but it would be easier to just set it once for
> all mechanisms, and I don't see that as being a problem.  Let me know what
> you think.
>
> Ron
>
> On Mon, Sep 17, 2018 at 9:51 PM Ron Dagostino  wrote:
>
> > Hi Rajini.  The KIP is updated.  Aside from a once-over to make sure it
> is
> > all accurate, I think we need to confirm the metrics.  The decision to
> not
> > reject authentications that use tokens with too-long a lifetime allowed
> the
> > metrics to be simpler.  I decided that in addition to tracking these
> > metrics on the broker:
> >
> > failed-reauthentication-{rate,total} and
> > successful-reauthentication-{rate,total}
> >
> > we simply need one more set of broker metrics to track the subset of
> > clients that are not upgraded to v2.1.0 and are still using a V0
> > SaslAuthenticateRequest:
> >
> > failed-v0-authentication-{rate,total} and
> > successful-v0-authentication-{rate,total}
> >
> > See the Migration section of the KIP for details of how this would be
> used.
> >
> > I wonder if we need a broker metric documenting the number of "expired"
> > sessions killed by the broker since it would be the same as
> > successful-v0-authentication-total. I've eliminated that from the KIP
> for
> > now.  Thoughts?
> >
> > There is also a client-side metric for re-authentication latency tracking
> > (still unnamed -- do you have a preference?)
> >
> > I think we're close to being able to put this KIP up for a vote.
> >
> > Ron
> >
> >
> > On Mon, Sep 17, 2018 at 2:45 PM Ron Dagostino  wrote:
> >
> >> << >> the KIP
> >> Right, that was a concern I had mentioned in a follow-up email.  Agree
> it
> >> should go alongside errors.
> >>
> >> << have
> >> << >> << >> Ah, ok, I was not aware that this case existed.  Agreed, for
> consistency,
> >> server will always send it back to the client.
> >>
> >> << >> for lifetime]
> >> Since we now agree that the server will send it back, yes, there is no
> >> need for this.
> >>
> >> << >> [sasl.login.refresh.reauthenticate.enable]
> >> I think this might have been more useful when we weren't necessarily
> >> going to support all SASL mechanisms and/or when the broker was not
> going
> >> to advertise the fact that it supported re-authentication.  You are
> >> correct, now that we support it for all SASL mechanisms and we are
> bumping
> >> an API version, I think it is okay to simply enable it wherever both the
> >> client and server meet the required versions.
> >>
> >> << connections.max.reauth.ms]
> >> << >> << >> << >> << >> << >> I was going under the assumption that it would matter, but based on your
> >> pushback I just realized that the same functionality can be implemented
> as
> >> part of token validation if 

[jira] [Created] (KAFKA-7419) Rolling sum for high frequency stream

2018-09-18 Thread Stanislav Bausov (JIRA)
Stanislav Bausov created KAFKA-7419:
---

 Summary: Rolling sum for high frequency stream
 Key: KAFKA-7419
 URL: https://issues.apache.org/jira/browse/KAFKA-7419
 Project: Kafka
  Issue Type: Wish
Reporter: Stanislav Bausov


I have a task to count a rolling 24h market volume over a high-frequency trades 
stream, and there is no solution out of the box. Windowing is not an option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #497

2018-09-18 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2

[jira] [Resolved] (KAFKA-7388) An equal sign in a property value causes the broker to fail

2018-09-18 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7388.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

> An equal sign in a property value causes the broker to fail
> ---
>
> Key: KAFKA-7388
> URL: https://issues.apache.org/jira/browse/KAFKA-7388
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andre Araujo
>Priority: Major
> Fix For: 2.1.0
>
>
> I caught this due to a keystore password that had an equal sign in it.
> In this case the code 
> [here|https://github.com/apache/kafka/blob/3cdc78e6bb1f83973a14ce1550fe3874f7348b05/core/src/main/scala/kafka/utils/CommandLineUtils.scala#L63-L76]
>  throws an "Invalid command line properties" error and the broker startup is 
> aborted.
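
A minimal, hypothetical illustration of the underlying parsing concern (not the actual Kafka fix): splitting each key=value argument on the first '=' only keeps embedded equal signs intact.

{code}
import java.util.AbstractMap;
import java.util.Map;

public class PropertyParsingSketch {
    // Hypothetical illustration: split on the first '=' only, so a value such as
    // "ssl.keystore.password=abc=def" keeps its embedded '='.
    static Map.Entry<String, String> parseProp(String prop) {
        int idx = prop.indexOf('=');
        if (idx < 0)
            throw new IllegalArgumentException("Invalid command line property: " + prop);
        return new AbstractMap.SimpleEntry<>(prop.substring(0, idx).trim(),
                prop.substring(idx + 1));
    }

    public static void main(String[] args) {
        System.out.println(parseProp("ssl.keystore.password=abc=def"));
        // prints: ssl.keystore.password=abc=def (key and full value are preserved)
    }
}
{code}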



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Eno Thereska
Hi folks,

Could you expand the motivation a bit? When would it make sense to use an
LB in front of Kafka brokers? A client needs to access each broker directly
to consume the data in that broker and cannot be redirected to another
broker. What exact scenario are you seeing that needs this KIP?

Thanks
Eno

On Tue, Sep 18, 2018 at 9:52 AM, Mickael Maison 
wrote:

> Bumping this thread
>
> It's a relatively small change that would help cloud environments with
> load balancers fronting brokers
> On Tue, Sep 11, 2018 at 3:01 PM Edoardo Comar  wrote:
> >
> > Hi all,
> > after some time we updated KIP-302 to reuse the config key introduced by
> > KIP-235, with a different value to avoid conflicts between the two.
> > Also clarified the use of multiple IPs only of the same type (IPv4/IPv6).
> >
> > We'd appreciate a further review and discussion.
> > Thanks!
> > Edo
> >
> >
> > On Fri, 25 May 2018 at 12:36, Edoardo Comar  wrote:
> >
> > > Hi Jonathan,
> > > I'm ok with an expandable enum for the config that could be extended
> > > in the future.
> > > It is marginally better than multiple potentially conflicting config
> > > entries.
> > >
> > > Though as I think the change for KIP-302 is independent from KIP-235
> > > and they do not conflict,
> > > when we'll look back at it post 2.0 we may see if it is more valuable
> > > to shoehorn its config in an expanded enum or not
> > >
> > > thanks,
> > > Edo
> > >
> > > On 24 May 2018 at 16:50, Skrzypek, Jonathan 
> > > wrote:
> > > > Hi,
> > > >
> > > > As Rajini suggested in the thread for KIP 235 (attached), we could
> try
> > > to have an enum that would drive what the client expands/resolves.
> > > >
> > > > I suggest a client config called client.dns.lookup with different
> values
> > > possible :
> > > >
> > > > - no : no dns lookup
> > > > - hostnames.only : perform dns lookup on both bootstrap.servers and
> > > advertised listeners
> > > > - canonical.hostnames.only : perform dns lookup on both
> > > bootstrap.servers and advertised listeners
> > > > - bootstrap.hostnames.only : perform dns lookup on bootstrap.servers
> > > list and expand it
> > > > - bootstrap.canonical.hostnames.only : perform dns lookup on
> > > bootstrap.servers list and expand it
> > > > - advertised.listeners.hostnames.only : perform dns lookup on
> advertised
> > > listeners
> > > > - advertised.listeners.canonical.hostnames.only : perform dns
> lookup on
> > > advertised listeners
> > > >
> > > > I realize this is a bit heavy but this gives users the ability to
> pick
> > > and choose.
> > > > I didn't include a setting to mix hostnames and canonical hostnames
> as
> > > I'm not sure there would be a valid use case.
> > > >
> > > > Alternatively, to have less possible values, we could have 2
> parameters :
> > > >
> > > > - dns.lookup.type with values : hostname / canonical.host.name
> > > > - dns.lookup.behaviour : bootstrap.servers, advertised.listeners,
> both
> > > >
> > > > Thoughts ?
> > > >
> > > > Jonathan Skrzypek
> > > >
> > > >
> > > > -Original Message-
> > > > From: Edoardo Comar [mailto:edoco...@gmail.com]
> > > > Sent: 17 May 2018 23:50
> > > > To: dev@kafka.apache.org
> > > > Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS
> > > resolved IP addresses
> > > >
> > > > Hi Jonathan,
> > > >
> > > >> A solution might be to expose to users the choice of using hostname
> or
> > > canonical host name on both sides.
> > > >> Say having one setting that collapses functionalities from both KIPs
> > > (bootstrap expansion + advertised lookup)
> > > >> and an additional parameter that defines how the resolution is
> > > performed, using getCanonicalHostName() or not.
> > > >
> > > > thanks sounds to me *less* simple than independent config options,
> sorry.
> > > >
> > > > I would like to say once again that by itself  KIP-302 only speeds up
> > > > the client behavior that can happen anyway when the client restarts
> > > > multiple times,
> > > > as every time there is no guarantee that - in presence of multiple A
> > > > DNS records - the same IP is returned. Attempting to use additional
> > > > IPs if the first fails just makes client recovery faster.
> > > >
> > > > cheers
> > > > Edo
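
To make the intended behaviour concrete, here is a minimal, hypothetical sketch using plain JDK calls (not Kafka client code) that resolves every address behind a bootstrap hostname and tries each in turn; the hostname and port are placeholders.

{code}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ResolveAllSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder bootstrap hostname backed by several A records (e.g. behind a LB).
        String host = "kafka-bootstrap.example.com";
        int port = 9092;

        for (InetAddress address : InetAddress.getAllByName(host)) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(address, port), 5000);
                System.out.println("Connected to " + address.getHostAddress());
                return;  // stop at the first address that accepts the connection
            } catch (Exception e) {
                System.out.println("Failed " + address.getHostAddress() + ", trying next IP");
            }
        }
        System.out.println("No resolved address was reachable");
    }
}
{code}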
> > > >
> > > > On 17 May 2018 at 12:12, Skrzypek, Jonathan <
> jonathan.skrzy...@gs.com>
> > > wrote:
> > > >> Yes, makes sense.
> > > >> You mentioned multiple times you see no overlap and no issue with
> your
> > > KIP, and that they solve different use cases.
> > > >>
> > > >> Appreciate you have an existing use case that would work, but we
> need
> > > to make sure this isn't confusing to users and that any combination
> will
> > > always work, across security protocols.
> > > >>
> > > >> A solution might be to expose to users the choice of using hostname
> or
> > > canonical host name on both sides.
> > > >> Say having one setting that collapses functionalities from both KIPs
> > > (bootstrap expansion + advertised lookup) and an additional parameter
> 

kafka dev

2018-09-18 Thread rudy_steiner
hi,
   kafka

Build failed in Jenkins: kafka-trunk-jdk10 #496

2018-09-18 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2

Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-18 Thread Mickael Maison
Bumping this thread

It's a relatively small change that would help cloud environments with
load balancers fronting brokers
On Tue, Sep 11, 2018 at 3:01 PM Edoardo Comar  wrote:
>
> Hi all,
> after some time we updated KIP-302 to reuse the config key introduced by
> KIP-235, with a different value to avoid conflicts between the two.
> Also clarified the use of multiple IPs only of the same type (IPv4/IPv6).
>
> We'd appreciate a further review and discussion.
> Thanks!
> Edo
>
>
> On Fri, 25 May 2018 at 12:36, Edoardo Comar  wrote:
>
> > Hi Jonathan,
> > I'm ok with an expandable enum for the config that could be extended
> > in the future.
> > It is marginally better than multiple potentially conflicting config
> > entries.
> >
> > Though as I think the change for KIP-302 is independent from KIP-235
> > and they do not conflict,
> > when we'll look back at it post 2.0 we may see if it is more valuable
> > to shoehorn its config in an expanded enum or not
> >
> > thanks,
> > Edo
> >
> > On 24 May 2018 at 16:50, Skrzypek, Jonathan 
> > wrote:
> > > Hi,
> > >
> > > As Rajini suggested in the thread for KIP 235 (attached), we could try
> > to have an enum that would drive what the client expands/resolves.
> > >
> > > I suggest a client config called client.dns.lookup with different values
> > possible :
> > >
> > > - no : no dns lookup
> > > - hostnames.only : perform dns lookup on both bootstrap.servers and
> > advertised listeners
> > > - canonical.hostnames.only : perform dns lookup on both
> > bootstrap.servers and advertised listeners
> > > - bootstrap.hostnames.only : perform dns lookup on bootstrap.servers
> > list and expand it
> > > - bootstrap.canonical.hostnames.only : perform dns lookup on
> > bootstrap.servers list and expand it
> > > - advertised.listeners.hostnames.only : perform dns lookup on advertised
> > listeners
> > > - advertised.listeners.canonical.hostnames.only : perform dns lookup on
> > advertised listeners
> > >
> > > I realize this is a bit heavy but this gives users the ability to pick
> > and choose.
> > > I didn't include a setting to mix hostnames and canonical hostnames as
> > I'm not sure there would be a valid use case.
> > >
> > > Alternatively, to have less possible values, we could have 2 parameters :
> > >
> > > - dns.lookup.type with values : hostname / canonical.host.name
> > > - dns.lookup.behaviour : bootstrap.servers, advertised.listeners, both
> > >
> > > Thoughts ?
> > >
> > > Jonathan Skrzypek
> > >
> > >
> > > -Original Message-
> > > From: Edoardo Comar [mailto:edoco...@gmail.com]
> > > Sent: 17 May 2018 23:50
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS
> > resolved IP addresses
> > >
> > > Hi Jonathan,
> > >
> > >> A solution might be to expose to users the choice of using hostname or
> > canonical host name on both sides.
> > >> Say having one setting that collapses functionalities from both KIPs
> > (bootstrap expansion + advertised lookup)
> > >> and an additional parameter that defines how the resolution is
> > performed, using getCanonicalHostName() or not.
> > >
> > > thanks sounds to me *less* simple than independent config options, sorry.
> > >
> > > I would like to say once again that by itself  KIP-302 only speeds up
> > > the client behavior that can happen anyway when the client restarts
> > > multiple times,
> > > as every time there is no guarantee that - in presence of multiple A
> > > DNS records - the same IP is returned. Attempting to use additional IPs
> > > if the first fails just makes client recovery faster.
> > >
> > > cheers
> > > Edo
> > >
> > > On 17 May 2018 at 12:12, Skrzypek, Jonathan 
> > wrote:
> > >> Yes, makes sense.
> > >> You mentioned multiple times you see no overlap and no issue with your
> > KIP, and that they solve different use cases.
> > >>
> > >> Appreciate you have an existing use case that would work, but we need
> > to make sure this isn't confusing to users and that any combination will
> > always work, across security protocols.
> > >>
> > >> A solution might be to expose to users the choice of using hostname or
> > canonical host name on both sides.
> > >> Say having one setting that collapses functionalities from both KIPs
> > (bootstrap expansion + advertised lookup) and an additional parameter that
> > defines how the resolution is performed, using getCanonicalHostName() or
> > not.
> > >>
> > >> Maybe that gives less flexibility as users wouldn't be able to decide
> > to only perform DNS lookup on bootstrap.servers or on advertised listeners.
> > >> But this would ensure consistency so that a user can decide to use
> > cnames or not (depending on their certificates and Kerberos principals in
> > their environment) and it would work.
> > >>
> > >> Jonathan Skrzypek
> > >>
> > >> -Original Message-
> > >> From: Edoardo Comar [mailto:edoco...@gmail.com]
> > >> Sent: 16 May 2018 21:59
> > >> To: dev@kafka.apache.org
> > >> 

[jira] [Created] (KAFKA-7418) Add '--help' option to all available Kafka CLI commands

2018-09-18 Thread Attila Sasvari (JIRA)
Attila Sasvari created KAFKA-7418:
-

 Summary: Add '--help' option to all available Kafka CLI commands 
 Key: KAFKA-7418
 URL: https://issues.apache.org/jira/browse/KAFKA-7418
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Attila Sasvari


Currently, the '--help' option is not recognized by some Kafka commands. For 
example:
{code}
$ kafka-console-producer --help
help is not a recognized option
{code}

However, the '--help' option is supported by other commands:
{code}
$ kafka-verifiable-producer --help
usage: verifiable-producer [-h] --topic TOPIC --broker-list 
HOST1:PORT1[,HOST2:PORT2[...]] [--max-messages MAX-MESSAGES] [--throughput 
THROUGHPUT] [--acks ACKS]
   [--producer.config CONFIG_FILE] 
[--message-create-time CREATETIME] [--value-prefix VALUE-PREFIX]

...
{code} 

To provide a consistent user experience, it would be nice to add a '--help' 
option to all Kafka commands.
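
A rough, hypothetical sketch of what consistent '--help' handling could look like for a joptsimple-based tool; the option names are illustrative only.

{code}
import joptsimple.OptionParser;
import joptsimple.OptionSet;

public class HelpOptionSketch {
    public static void main(String[] args) throws Exception {
        OptionParser parser = new OptionParser();
        parser.accepts("topic", "The topic to produce messages to.").withRequiredArg();
        parser.accepts("help", "Print usage information.").forHelp();

        OptionSet options = parser.parse(args);
        if (options.has("help")) {
            parser.printHelpOn(System.out);  // prints the generated usage text
            return;
        }
        // ... run the actual tool ...
    }
}
{code}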




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #495

2018-09-18 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2

[jira] [Resolved] (KAFKA-7332) Improve error message when trying to produce message without key for compacted topic

2018-09-18 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-7332.
-
Resolution: Fixed

> Improve error message when trying to produce message without key for 
> compacted topic
> 
>
> Key: KAFKA-7332
> URL: https://issues.apache.org/jira/browse/KAFKA-7332
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 1.1.0
>Reporter: Patrik Kleindl
>Priority: Trivial
> Fix For: 2.1.0
>
>
> Goal:
> Return a specific error message like e.g. "Message without a key is not valid 
> for a compacted topic" when trying to produce such a message instead of a 
> CorruptRecordException.
>  
> > Yesterday we had the following exception:
> > 
> > Exception thrown when sending a message with key='null' and payload='...'
> > to topic sometopic:: org.apache.kafka.common.errors.CorruptRecordException:
> > This message has failed its CRC checksum, exceeds the valid size, or is
> > otherwise corrupt.
> > 
> > The cause was identified with the help of
> > 
> >[https://stackoverflow.com/questions/49098274/kafka-stream-get-corruptrecordexception]
> > 
> > Is it possible / would it make sense to open an issue to improve the error
> > message for this case?
> > A simple "Message without a key is not valid for a compacted topic" would
> > suffice and point a user in the right direction.
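
For reference, a minimal, hypothetical reproduction of the scenario; the broker address, topic name and serializers are placeholders, and the topic is assumed to be compacted.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NullKeyReproSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "sometopic" is assumed to have cleanup.policy=compact; the null key is rejected.
            producer.send(new ProducerRecord<>("sometopic", null, "payload"),
                    (metadata, exception) -> {
                        if (exception != null)
                            System.err.println("Send failed: " + exception);
                    });
            producer.flush();
        }
    }
}
{code}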



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-371: Add a configuration to build custom SSL principal name

2018-09-18 Thread Manikumar
Hi All,

We would like to go with Option 1, which adds a new configuration parameter
pair of the form:
ssl.principal.mapping.pattern, ssl.principal.mapping.value. This will
fulfill the requirement for most of the common use cases.

We would like to include the KIP in the upcoming release. If there no
concerns, would like to start vote on this KIP.

Thanks,

On Fri, Sep 14, 2018 at 11:32 PM Priyank Shah  wrote:

> Definitely a helpful change. +1 for Option 2.
>
> On 9/14/18, 10:52 AM, "Manikumar"  wrote:
>
> Hi Eno,
>
> Thanks for the review.
>
> Most often users want to extract one of the field (eg. CN). CN is the
> commonly used field.
> For this simple change, users need to build and maintain the custom
> principal builder class
> and also package and deploy to the all brokers. Having configurable
> rules
> will be useful.
>
> Proposed mapping rules works on string representation of the X.500
> distinguished name(RFC2253 format) [1].
> Mapping rules can use the attribute types keywords defined in RFC 2253
> (CN,
> L, ST, O, OU, C, STREET, DC, UID).
>
> Any additional/custom attribute types are emitted as OIDs. To emit
> additional attribute type keys,
> we need to have OID -> attribute type keyword String mapping.[2]
>
> For example, String representation of X500Principal("CN=Duke,
> OU=JavaSoft,
> O=Sun Microsystems, C=US, EMAILADDRESS=t...@test.com")
> will be "CN=Duke,OU=JavaSoft,O=Sun
> Microsystems,C=US,1.2.840.113549.1.9.1=#160d7465737440746573742e636f6d"
>
> If we have the OID - key mapping ("1.2.840.113549.1.9.1",
> "emailAddress"),
> the string will be
> "CN=Duke,OU=JavaSoft,O=Sun Microsystems,C=US,emailAddress=
> t...@test.com"
>
> Since we are not passing this mapping, we cannot extract the principal using the
> additional attribute type keyword string.
> If the user wants to extract additional attribute keys, they need to write a
> custom principal builder class.
>
> Hope the above helps. Updated the KIP.
>
> [1]
>
> https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/X500Principal.html#getName(java.lang.String)
>
> [2]
>
> https://docs.oracle.com/javase/7/docs/api/javax/security/auth/x500/X500Principal.html#getName(java.lang.String,%20java.util.Map)
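
To illustrate the two points above, here is a minimal, hypothetical Java sketch: (a) getName() with an OID-to-keyword map so the email attribute is emitted as a keyword rather than an OID, and (b) a pattern/value pair in the spirit of the proposed configs that extracts the CN; the DN and e-mail address are placeholder values.

{code}
import java.util.Collections;
import javax.security.auth.x500.X500Principal;

public class PrincipalMappingSketch {
    public static void main(String[] args) {
        // Placeholder DN; the e-mail address is an assumed example value.
        X500Principal principal = new X500Principal(
                "CN=Duke, OU=JavaSoft, O=Sun Microsystems, C=US, EMAILADDRESS=test@test.com");

        // (a) With the OID-to-keyword map, the email attribute is emitted as a keyword
        //     instead of the OID 1.2.840.113549.1.9.1 with a hex-encoded value.
        String dn = principal.getName(X500Principal.RFC2253,
                Collections.singletonMap("1.2.840.113549.1.9.1", "emailAddress"));
        System.out.println(dn);

        // (b) Illustrative pattern/value pair in the spirit of the proposed configs.
        String pattern = "CN=(.*?),.*";   // candidate ssl.principal.mapping.pattern
        String value = "$1";              // candidate ssl.principal.mapping.value
        String mapped = dn.matches(pattern) ? dn.replaceAll(pattern, value) : dn;
        System.out.println(mapped);       // prints "Duke"
    }
}
{code}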
>
> Thanks
>
> On Fri, Sep 14, 2018 at 7:44 PM Eno Thereska 
> wrote:
>
> > Manikumar, thanks. If I understand the KIP motivation right, this is
> > currently already possible, but you want to have an easier way of
> doing it,
> > right? The motivation would be stronger if we had 2-3 common cases
> (from
> > experience) where the suggested pattern would solve them, and
> perhaps 1-2
> > cases where the pattern would not be adequate and we'd have to fall
> back to
> > the existing builder class.
> >
> > Thanks
> > Eno
> >
> > On Fri, Sep 14, 2018 at 12:36 PM, Manikumar <
> manikumar.re...@gmail.com>
> > wrote:
> >
> > > Hi All,
> > >
> > > We'd appreciate any thoughts / comments on the proposed options for
> > > customizing SSL principal names.
> > > We are happy to discuss any alternative approaches/suggestions.
> > >
> > > Thanks,
> > >
> > > On Sat, Sep 8, 2018 at 12:45 AM Manikumar <
> manikumar.re...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have created a KIP that proposes couple of options for building
> > custom
> > > > SSL principal names.
> > > >
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
> > > >
> > > > Please take a look.
> > > >
> > > > Thanks,
> > > > Manikumar
> > > >
> > >
> >
>
>
>


[jira] [Resolved] (KAFKA-7322) Fix race condition between log cleaner thread and log retention thread when topic cleanup policy is updated

2018-09-18 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-7322.
-
Resolution: Fixed

> Fix race condition between log cleaner thread and log retention thread when 
> topic cleanup policy is updated
> ---
>
> Key: KAFKA-7322
> URL: https://issues.apache.org/jira/browse/KAFKA-7322
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: xiongqi wu
>Assignee: xiongqi wu
>Priority: Major
> Fix For: 2.1.0
>
>
> The deletion thread grabs the log.lock when it tries to rename a log segment
> and schedule it for actual deletion.
> The compaction thread only grabs the log.lock when it tries to replace the
> original segments with the cleaned segment; it does not hold the lock while it
> reads records from the original segments to build the offset map and the new
> segments. As a result, if the deletion and compaction threads work on the same
> log partition, we have a race condition.
> This race happens when the topic cleanup policy is updated on the fly.
> One case that hits this race condition:
> 1: the topic cleanup policy is "compact" initially
> 2: the log cleaner (compaction) thread picks up the partition for compaction and is
> still in progress
> 3: the topic cleanup policy is updated to "deletion"
> 4: the retention thread picks up the topic partition and deletes some old segments
> 5: the log cleaner thread reads from the deleted log and raises an IOException
>
> The proposed solution is to use the "inprogress" map that the cleaner manager
> has, to protect against such a race.
>  
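
A minimal, hypothetical sketch of the guard described above (not the actual LogCleanerManager code): the cleaner marks a partition as in progress before reading its segments, and the retention path checks that marker before deleting.

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CleanerGuardSketch {
    private final Set<String> inProgress = ConcurrentHashMap.newKeySet();

    // Called by the log cleaner before it starts reading a partition's segments.
    public boolean markCleaningStarted(String topicPartition) {
        return inProgress.add(topicPartition);
    }

    // Called by the log cleaner once compaction for the partition has finished.
    public void markCleaningDone(String topicPartition) {
        inProgress.remove(topicPartition);
    }

    // Called by the retention thread before deleting old segments of a partition.
    public boolean mayDeleteSegments(String topicPartition) {
        return !inProgress.contains(topicPartition);
    }
}
{code}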



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)