Re: [DISCUSS] KIP-342 Add Customizable SASL extensions to OAuthBearer authentication

2018-07-22 Thread Stanislav Kozlovski
Hey Ron and Rajini,

Here are my thoughts:
Regarding separators in SaslExtensions - Agreed, that was a bad move.
Should definitely not be a concern of CallbackHandler and LoginModule
implementors.
SaslExtensions interface - Wouldn't implementing it as an interface mean
that users will have to make sure they're passing in an unmodifiable map
themselves? I believe it would be better if we enforced that through class
constructors instead.
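Enforcing immutability in the constructor could look roughly like this (a hedged sketch only: the class name and `map()` accessor follow the proposals in this thread, while the defensive copy is my assumption, not the KIP's final API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SaslExtensions {
    private final Map<String, String> extensionsMap;

    public SaslExtensions(Map<String, String> extensionsMap) {
        // Defensive copy wrapped as unmodifiable: callers cannot mutate
        // the extensions after construction, even via the map they passed in.
        this.extensionsMap = Collections.unmodifiableMap(new HashMap<>(extensionsMap));
    }

    public Map<String, String> map() {
        return extensionsMap; // already unmodifiable, safe to share
    }
}
```

With this shape, implementors never need to remember to wrap the map themselves; any attempt to mutate the returned view throws `UnsupportedOperationException`.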
SaslExtensions#map() - I'd also prefer this. The reason I went with
`extensionValue` and `extensionNames` was that I figured it made sense
to have `ScramExtensions` extend `SaslExtensions` and therefore have their
API be similar. In the end, do you think that it is worth it to have
`ScramExtensions` extend `SaslExtensions`?
@Ron, could you point me to the SASL OAuth mechanism specific regular
expressions for keys/values you mentioned are in RFC 7628 (
https://tools.ietf.org/html/rfc7628) ? I could not find any while
originally implementing this.

Best,
Stanislav

On Sun, Jul 22, 2018 at 6:46 PM Ron Dagostino  wrote:

> Hi again, Rajini and Stanislav.  I wonder if making SaslExtensions an
> interface rather than a class might be a good solution.  For example:
>
> public interface SaslExtensions {
>     /**
>      * @return an immutable map view of the SASL extensions
>      */
>     Map<String, String> map();
> }
>
> This solves the issue of lack of clarity on immutability, and it also
> eliminates copying, like this:
>
> SaslExtensions myMethod() {
>     Map<String, String> myRetval = getUnmodifiableSaslExtensionsMap();
>     return new SaslExtensions() {
>         public Map<String, String> map() {
>             return myRetval;
>         }
>     };
> }
>
> Alternatively, we could do it like this:
>
> /**
>  * Supplier that returns immutable map view of SASL Extensions
>  */
> public interface SaslExtensions extends Supplier<Map<String, String>> {
> // empty
> }
>
> Then we could simply return the instance like this, again without copying:
>
> SaslExtensions myMethod() {
>     Map<String, String> myRetval = getUnmodifiableSaslExtensionsMap();
>     return () -> myRetval;
> }
>
> I think the main reason for making SaslExtensions part of the public
> interface is to avoid adding a Map to the Subject's public credentials.
> Making SaslExtensions an interface meets that requirement and then allows
> us to be free to implement whatever we want internally.
>
> Thoughts?
>
> Ron
>
> On Sun, Jul 22, 2018 at 12:45 PM Ron Dagostino  wrote:
>
> > Hi Rajini.  The SaslServer is going to have to validate the extensions,
> > too, but I’m okay with keeping the validation logic elsewhere as long as
> > it can be reused in both the client and the server.
> >
> > I strongly prefer exposing a map() method as opposed to extensionNames()
> > and extensionValue(String) methods. It is a smaller API (1 method
> > instead of 2), and it gives clients of the API full map-related
> > functionality (there’s a lot of support for dealing with maps in a
> > variety of ways).
> >
> > Regardless of whether we go with a map() method or extensionNames() and
> > extensionValue(String) methods, the semantics of mutability need to be
> > clear.  I think either way we should never share a map that anyone else
> > could possibly mutate — either a map that someone gives us or a map that
> we
> > might expose.
> >
> > Thoughts?
> >
> > Ron
> >
> > > On Jul 22, 2018, at 11:23 AM, Rajini Sivaram 
> > wrote:
> > >
> > > Hmm I think we need a much simpler SaslExtensions class if we are
> > > making it part of the public API.
> > >
> > > 1. I don't see the point of including separator anywhere in
> > > SaslExtensions. Extensions provide a map and we propagate the map from
> > > client to server using the protocol associated with the mechanism in
> > > use. The separator is not configurable and should not be a concern of
> > > the implementor of SaslExtensionsCallback interface that provides an
> > > instance of SaslExtensions.
> > >
> > > 2. I agree with Ron that we need mechanism-specific validation of the
> > > values from SaslExtensions. But I think we could do the validation in
> > > the appropriate `SaslClient` implementation of that mechanism.
> > >
> > > I think we could just have a very simple extensions class and move
> > > everything else to appropriate internal classes of the mechanisms using
> > > extensions. What do you think?
> > >
> > > public class SaslExtensions {
> > >     private final Map<String, String> extensionMap;
> > >
> > >     public SaslExtensions(Map<String, String> extensionMap) {
> > >         this.extensionMap = extensionMap;
> > >     }
> > >
> > >     public String extensionValue(String name) {
> > >         return extensionMap.get(name);
> > >     }
> > >
> > >     public Set<String> extensionNames() {
> > >         return extensionMap.keySet();
> > >     }
> > > }
> > >
> > >
> > >
> > >> On Sat, Jul 21, 2018 at 9:01 PM, Ron Dagostino 
> > wrote:
> > >>
> > >> Hi Stanislav and Rajini.  If SaslExtensions is going to be part of
> > >> the public API, then it occurred to me that one of the requiremen

Build failed in Jenkins: kafka-trunk-jdk8 #2833

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd

Build failed in Jenkins: kafka-trunk-jdk8 #2832

2018-07-22 Thread Apache Jenkins Server
See 


Build failed in Jenkins: kafka-trunk-jdk8 #2831

2018-07-22 Thread Apache Jenkins Server
See 


Build failed in Jenkins: kafka-trunk-jdk8 #2830

2018-07-22 Thread Apache Jenkins Server
See 


Re: [DISCUSS] KIP-342 Add Customizable SASL extensions to OAuthBearer authentication

2018-07-22 Thread Ron Dagostino
Hi again, Rajini and Stanislav.  I wonder if making SaslExtensions an
interface rather than a class might be a good solution.  For example:

public interface SaslExtensions {
    /**
     * @return an immutable map view of the SASL extensions
     */
    Map<String, String> map();
}

This solves the issue of lack of clarity on immutability, and it also
eliminates copying, like this:

SaslExtensions myMethod() {
    Map<String, String> myRetval = getUnmodifiableSaslExtensionsMap();
    return new SaslExtensions() {
        public Map<String, String> map() {
            return myRetval;
        }
    };
}

Alternatively, we could do it like this:

/**
 * Supplier that returns immutable map view of SASL Extensions
 */
public interface SaslExtensions extends Supplier<Map<String, String>> {
// empty
}

Then we could simply return the instance like this, again without copying:

SaslExtensions myMethod() {
    Map<String, String> myRetval = getUnmodifiableSaslExtensionsMap();
    return () -> myRetval;
}

I think the main reason for making SaslExtensions part of the public
interface is to avoid adding a Map to the Subject's public credentials.
Making SaslExtensions an interface meets that requirement and then allows
us to be free to implement whatever we want internally.

Thoughts?

Ron
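
As a sketch of that last point (hedged: it assumes the Supplier-style interface proposed above, and the extension key and value are purely illustrative), storing a typed SaslExtensions credential in the Subject lets callers retrieve it by class instead of fishing a bare Map out of the public credential set:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;
import javax.security.auth.Subject;

// Assumed shape of the proposed interface.
interface SaslExtensions extends Supplier<Map<String, String>> {
}

public class SubjectCredentialExample {
    public static void main(String[] args) {
        // Illustrative extension; not an extension name defined by the KIP.
        Map<String, String> extensions =
                Collections.singletonMap("logicalCluster", "lkc-123");
        SaslExtensions saslExtensions = () -> extensions;

        Subject subject = new Subject();
        // A typed credential instead of a bare Map in the public credential set.
        subject.getPublicCredentials().add(saslExtensions);

        // Retrieval by class avoids matching unrelated Map credentials.
        Set<SaslExtensions> found =
                subject.getPublicCredentials(SaslExtensions.class);
        System.out.println(found.iterator().next().get().get("logicalCluster"));
        // prints lkc-123
    }
}
```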

On Sun, Jul 22, 2018 at 12:45 PM Ron Dagostino  wrote:

> Hi Rajini.  The SaslServer is going to have to validate the extensions,
> too, but I’m okay with keeping the validation logic elsewhere as long as it
> can be reused in both the client and the server.
>
> I strongly prefer exposing a map() method as opposed to extensionNames()
> and extensionValue(String) methods. It is a smaller API (1 method instead
> of 2), and it gives clients of the API full map-related functionality
> (there’s a lot of support for dealing with maps in a variety of ways).
>
> Regardless of whether we go with a map() method or extensionNames() and
> extensionValue(String) methods, the semantics of mutability need to be
> clear.  I think either way we should never share a map that anyone else
> could possibly mutate — either a map that someone gives us or a map that we
> might expose.
>
> Thoughts?
>
> Ron
>
> > On Jul 22, 2018, at 11:23 AM, Rajini Sivaram 
> wrote:
> >
> > Hmm I think we need a much simpler SaslExtensions class if we are
> > making it part of the public API.
> >
> > 1. I don't see the point of including separator anywhere in
> > SaslExtensions. Extensions provide a map and we propagate the map from
> > client to server using the protocol associated with the mechanism in
> > use. The separator is not configurable and should not be a concern of
> > the implementor of SaslExtensionsCallback interface that provides an
> > instance of SaslExtensions.
> >
> > 2. I agree with Ron that we need mechanism-specific validation of the
> > values from SaslExtensions. But I think we could do the validation in the
> > appropriate `SaslClient` implementation of that mechanism.
> >
> > I think we could just have a very simple extensions class and move
> > everything else to appropriate internal classes of the mechanisms using
> > extensions. What do you think?
> >
> > public class SaslExtensions {
> >     private final Map<String, String> extensionMap;
> >
> >     public SaslExtensions(Map<String, String> extensionMap) {
> >         this.extensionMap = extensionMap;
> >     }
> >
> >     public String extensionValue(String name) {
> >         return extensionMap.get(name);
> >     }
> >
> >     public Set<String> extensionNames() {
> >         return extensionMap.keySet();
> >     }
> > }
> >
> >
> >
> >> On Sat, Jul 21, 2018 at 9:01 PM, Ron Dagostino 
> wrote:
> >>
> >> Hi Stanislav and Rajini.  If SaslExtensions is going to be part of the
> >> public API, then it occurred to me that one of the requirements of all
> >> SASL extensions is that the keys and values need to match
> >> mechanism-specific regular expressions.  For example, RFC 5802 (
> >> https://tools.ietf.org/html/rfc5802) specifies the regular expressions
> >> for the SCRAM-specific SASL mechanisms, and RFC 7628 (
> >> https://tools.ietf.org/html/rfc7628) specifies different regular
> >> expressions for the OAUTHBEARER SASL mechanism.  I am thinking the
> >> SaslExtensions class should probably provide a way to make sure the
> >> keys and values match the appropriate regular expressions.  What do you
> >> think of something along the lines of the below definition for the
> >> SaslExtensions class?  It is missing Javadoc and
> >> toString()/hashCode()/equals() methods, of course, but aside from that,
> >> do you think this is sufficient and appropriate?
> >>
> >> Ron
> >>
> >> public class SaslExtensions {
> >>     private final Map<String, String> extensionsMap;
> >>
> >>     public SaslExtensions(String mapStr, String keyValueSeparator,
> >>             String elementSeparator, Pattern saslNameRegexPattern,
> >>             Pattern saslValueRegexPattern) {
> >>         this(Utils.parseMap(mapStr, keyValueSeparator, elementSeparator),
> >>                 saslNameRegexPattern, saslValueRegexPattern);
> >>     }
> >>
> >>     public SaslExtensions(Map ex
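
The truncated constructor above implies validation roughly along these lines (a hedged sketch only: the class shape and exception choice are mine, and the patterns in the test are illustrative, not the exact productions from RFC 5802 or RFC 7628):

```java
import java.util.Map;
import java.util.regex.Pattern;

public class ValidatingSaslExtensions {
    private final Map<String, String> extensionsMap;

    public ValidatingSaslExtensions(Map<String, String> extensionsMap,
                                    Pattern namePattern, Pattern valuePattern) {
        // Reject any extension whose key or value does not match the
        // mechanism-specific pattern supplied by the caller.
        for (Map.Entry<String, String> entry : extensionsMap.entrySet()) {
            if (!namePattern.matcher(entry.getKey()).matches())
                throw new IllegalArgumentException(
                        "Invalid extension name: " + entry.getKey());
            if (!valuePattern.matcher(entry.getValue()).matches())
                throw new IllegalArgumentException(
                        "Invalid extension value for: " + entry.getKey());
        }
        this.extensionsMap = Map.copyOf(extensionsMap); // immutable copy (Java 10+)
    }

    public Map<String, String> map() {
        return extensionsMap;
    }
}
```

Each mechanism would supply its own compiled patterns, so the class itself stays mechanism-agnostic.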

Build failed in Jenkins: kafka-trunk-jdk8 #2829

2018-07-22 Thread Apache Jenkins Server
See 


Processor API StateStore and Recovery with State Machines question.

2018-07-22 Thread Adam Bellemare
Hi Folks

I have a quick question about a scenario that I would appreciate some
insight on. This is related to a KIP I am working on, but I wanted to break
this out into its own scenario to reach a wider audience. In this scenario,
I am using builder.internalTopologyBuilder to create the following within
the internals of Kafka Streams:

1) Internal Topic Source (builder.internalTopologyBuilder.addSource(...) )

2) ProcessorSupplier with StateStore, Changelogging enabled. For the
purpose of this question, this processor is a very simple state machine.
All it does is alternately block every other event of a given key from
being processed. For instance, given:
(A,1)
(A,2)
(A,3)
It would block the propagation of (A,2). The state of the system after
processing each event is:
blockNext = true
blockNext = false
blockNext = true

The expectation is that this component would always block the same event in
any failure mode and subsequent recovery (i.e., it ALWAYS blocks (A,2), but
not (A,1) or (A,3)). In other words, it would maintain perfect state in
accordance with the offsets of the upstream and downstream elements.
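To pin down the intended semantics, here is a minimal plain-Java simulation of that blocking state machine. It is only a sketch: the `AlternatingBlocker` class and the plain HashMap standing in for the changelogged state store are illustrative assumptions, not Kafka Streams API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: per-key "block every other event" state machine.
// A HashMap stands in for the changelogged state store.
public class AlternatingBlocker {
    private final Map<String, Boolean> store = new HashMap<>();

    // Returns true if the event for this key should be forwarded downstream.
    public boolean process(String key) {
        boolean blockNext = store.getOrDefault(key, false);
        store.put(key, !blockNext); // the state flip the changelog would record
        return !blockNext;
    }

    public static void main(String[] args) {
        AlternatingBlocker blocker = new AlternatingBlocker();
        List<String> forwarded = new ArrayList<>();
        for (String event : new String[] {"A,1", "A,2", "A,3"}) {
            String key = event.split(",")[0];
            if (blocker.process(key)) {
                forwarded.add(event);
            }
        }
        System.out.println(forwarded); // [A,1, A,3] -- (A,2) is blocked
    }
}
```

Correct recovery then hinges on the flushed changelog state agreeing with the committed input offset, which is exactly what the questions in this email probe.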

3) The third component is a KTable with a Materialized StateStore where I
want to sink the remaining events. It is also backed by a change log. The
events arriving would be:
(A,1)
(A,3)

The components are ordered as:
1 -> 2 -> 3


Note that I am keeping the state machine in a separate state store. My main
questions are:

1) Will this workflow be consistent in all manners of failure? For example,
are the state stores' changelogs fully written to internal topics before
the offset is updated for the consumer in #1?

2) Is it possible that one State Store with changelogging will be logged to
Kafka safely (say component #3) but the other (#2) will not be, prior to a
sudden, hard termination of the node?

3) Is the alternate possible, where #2 is backed up to its Kafka Topic but
#3 is not? Does the ordering of the topology matter in this case?

4) Is it possible that the state store #2 is updated and logged, but the
source topic (#1) offset is not updated?

In all of these cases, my main concern is keeping the state and the
expected output consistent. For any failure mode, will I be able to recover
to a fully consistent state given the requirements of the state machine in
#2?

Though this is a trivial example, I am not certain about the dynamics
between maintaining state, recovering from internal changelog topics, and
the order in which all of these things apply. Any words of wisdom or
explanations would be helpful here. I have been looking through the code
but I wanted to get second opinions on this.



Thanks,

Adam


Build failed in Jenkins: kafka-trunk-jdk8 #2828

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd

Build failed in Jenkins: kafka-2.0-jdk8 #86

2018-07-22 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Close ZooKeeperClient if waitUntilConnected fails during

--
[...truncated 2.52 MB...]
org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyOptimizationWhenNotExplicitlyAddedToConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyOptimizationWhenNotExplicitlyAddedToConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetGlobalConsumerConfigs 
STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInflightRequestsGreatherThanFiveIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInflightRequestsGreatherThanFiveIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedGlobalConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedCo

[jira] [Created] (KAFKA-7191) Add sensors for NumOfflineThread, FetchRequestRate and FetchRequestLocalTime in the follower broker

2018-07-22 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-7191:
---

 Summary: Add sensors for NumOfflineThread, FetchRequestRate and 
FetchRequestLocalTime in the follower broker
 Key: KAFKA-7191
 URL: https://issues.apache.org/jira/browse/KAFKA-7191
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin
Assignee: Dong Lin


It will be useful to have a NumOfflineThread sensor to monitor the number of
offline threads (e.g. ReplicaFetcherThread) in the broker, so that the system
admin can be alerted when a thread goes offline.

We also need metrics for FetchRequestRate and FetchRequestLocalTime in the
follower broker to monitor and debug data replication performance.
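As an illustration of what a FetchRequestRate-style sensor measures, here is a simplified sliding-window rate counter. This is a stand-in sketch in plain Java, not Kafka's actual `org.apache.kafka.common.metrics` API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified stand-in for a rate sensor: counts events in a sliding
// time window and reports a per-second rate over that window.
public class RateSensor {
    private final long windowMs;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public RateSensor(long windowMs) {
        this.windowMs = windowMs;
    }

    // Record one fetch request observed at nowMs.
    public void record(long nowMs) {
        timestamps.addLast(nowMs);
        evictExpired(nowMs);
    }

    // Requests per second over the trailing window.
    public double rate(long nowMs) {
        evictExpired(nowMs);
        return timestamps.size() * 1000.0 / windowMs;
    }

    private void evictExpired(long nowMs) {
        while (!timestamps.isEmpty() && timestamps.peekFirst() <= nowMs - windowMs) {
            timestamps.removeFirst();
        }
    }
}
```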



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2827

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git

Build failed in Jenkins: kafka-trunk-jdk8 #2826

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git

Build failed in Jenkins: kafka-trunk-jdk8 #2825

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git

Re: [Discuss] KIP-321: Add method to get TopicNameExtractor in TopologyDescription

2018-07-22 Thread Matthias J. Sax
Works for me.

On 7/22/18 9:48 AM, Guozhang Wang wrote:
> I think I can be convinced on deprecating topics() to keep the API minimal.
> 
> About renaming the others to `XXNames()`: to me it still doesn't feel
> worthwhile; although it is not a big burden, it also isn't a big
> "return" if we name the newly added function `topicSet()`.
> 
> 
> Guozhang
> 
> 
> On Fri, Jul 20, 2018 at 7:38 PM, Nishanth Pradeep 
> wrote:
> 
>> I definitely agree with you on deprecating topics().
>>
>> I also think changing the method names for consistency is reasonable, since
>> there is no functionality change, although I can be convinced either way
>> on this one.
>>
>> Best,
>> Nishanth Pradeep
>> On Fri, Jul 20, 2018 at 12:15 PM Matthias J. Sax 
>> wrote:
>>
>>> I would still deprecate existing `topics()` method. If users need a
>>> String, they can call `topicSet().toString()`.
>>>
>>> It's just a personal preference, because I believe it's good to keep the
>>> API "minimal".
>>>
>>> About renaming the other methods: I thinks it's a very small burden to
>>> deprecate the existing methods and add them with new names. Also just my
>>> 2 cents.
>>>
>>> Would be good to see what others think.
>>>
>>>
>>> -Matthias
>>>
>>> On 7/19/18 6:20 PM, Nishanth Pradeep wrote:
 Understood, Guozhang.

 Thanks for the help, everyone! I have updated the KIP. Let me know if
 you have any other thoughts or suggestions.

 Best,
 Nishanth Pradeep

 On Thu, Jul 19, 2018 at 7:33 PM Guozhang Wang 
>>> wrote:

> I see.
>
> Well, I think if we add a new function like topicSet() it is less
>>> needed to
> deprecate topics() as it returns "{topic1, topic2, ..}" which is sort
>> of
> non-overlapping in usage with the new API.
>
>
> Guozhang
>
> On Thu, Jul 19, 2018 at 5:31 PM, Nishanth Pradeep <
>>> nishanth...@gmail.com>
> wrote:
>
>> That is what I meant. I will add topicSet() instead of changing the
>> signature of topics() for compatibility reasons. But should we not
>> add
>>> a
>> @deprecated flag for topics() or do you want to keep it around for
>> the
> long
>> run?
>>
>> On Thu, Jul 19, 2018 at 7:27 PM Guozhang Wang 
> wrote:
>>
>>> We cannot change the signature of the function named "topics" from
>>> "String" to "Set<String>", as Matthias mentioned it is a
>>> compatibility-breaking change.
>>>
>>> That's why I was proposing to add a new function like "Set<String>
>>> topicSet()", while keeping "String topics()" as is.
>>>
>>> Guozhang
>>>
>>> On Thu, Jul 19, 2018 at 5:22 PM, Nishanth Pradeep <
> nishanth...@gmail.com
>>>
>>> wrote:
>>>
 Right, adding topicNames() instead of changing the return type of
 topics() in order to preserve backwards compatibility is a good idea. But
 is it not better to deprecate topics() because it would be redundant? In
 our case, it would only be calling topicNames/topicSet#toString().

 I still agree that perhaps changing the other APIs might be
>> unnecessary
 since it's only a name change.

 I have made the change to the KIP to only add, not change,
> preexisting
 APIs. But where do we stand on deprecating topics()?

 Best,
 Nishanth Pradeep

 On Thu, Jul 19, 2018 at 1:44 PM Guozhang Wang 
>>> wrote:

> Personally I'd prefer to keep the deprecation-related changes as
>> small
>>> as
> possible unless they are really necessary, and hence I'd prefer to
>> just
 add
>
> List<String> topicList()  /* or Set<String> topicSet() */
>
> in addition to topicPattern in Source and `topicNameExtractor` in Sink,
> leaving the current APIs as-is.
>
> Guozhang
>
> On Thu, Jul 19, 2018 at 10:36 AM, Matthias J. Sax <
>>> matth...@confluent.io
>
> wrote:
>
>> Thanks for updating the KIP.
>>
>> The current `Source` interface has a method `String topics()`
> atm.
 Thus,
>> we cannot just add `Set<String> Source#topics()` because this
> would
>> replace the existing method and would be an incompatible change.
>>
>> I think, we should deprecate `String topics()` and add a method
>> with
>> different name:
>>
`Set<String> Source#topicNames()`
>>
>> The method name `topicNames` is more appropriate anyway, as we
>>> return a
>> set of String (i.e., names), not `Topic` objects. This raises one
>>> more
>> thought: we might want to rename `Processor#stores()` to
>> `Processor#storeNames()` as well as `Sink#topic()` to
>> `Sink#topicName()`, too. This would keep the naming in the API
> consistent.
>>
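The deprecation pattern under discussion can be sketched in isolation; this is a hypothetical standalone sketch, not the real `TopologyDescription.Source`, which lives inside Kafka Streams — the `Source`/`SourceNode` names below are assumptions for illustration:

```java
import java.util.Collections;
import java.util.Set;

// Sketch of the API shape being discussed: keep the existing String-returning
// method for compatibility, mark it deprecated, and add a Set-returning
// accessor under a new name.
interface Source {
    /** @deprecated use {@link #topicSet()} instead */
    @Deprecated
    String topics();

    Set<String> topicSet();
}

class SourceNode implements Source {
    private final Set<String> topicNames;

    SourceNode(Set<String> topicNames) {
        // Expose an unmodifiable view so callers cannot mutate internal state.
        this.topicNames = Collections.unmodifiableSet(topicNames);
    }

    @Override
    @Deprecated
    public String topics() {
        // The old String form can be preserved by rendering the set,
        // e.g. "[topic1, topic2]", as suggested in the thread.
        return topicNames.toString();
    }

    @Override
    public Set<String> topicSet() {
        return topicNames;
    }
}
```

This keeps binary and source compatibility for existing callers of `topics()` while steering new code toward the typed accessor.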

Build failed in Jenkins: kafka-trunk-jdk8 #2824

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
remote: Counting objects: 5738, done.
[git object compression/transfer progress output omitted]

Jenkins build is back to normal : kafka-trunk-jdk10 #308

2018-07-22 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2823

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
remote: Counting objects: 5738, done.
[git object compression/transfer progress output omitted]

Build failed in Jenkins: kafka-2.0-jdk8 #85

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
error: missing object referenced by 'refs/tags/1.1.1-rc0'
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 737bf43bb4e78d2d7a0ee53c27527b479972ebf8
error: Could not read 1ed1daefbc2d72e9b501b94d8c99e874b89f1137
remote: Counting objects: 5853, done.
[git object compression/transfer progress output omitted]

Build failed in Jenkins: kafka-trunk-jdk8 #2822

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
remote: Counting objects: 5738, done.
[git object compression/transfer progress output omitted]

Build failed in Jenkins: kafka-2.0-jdk8 #84

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
error: missing object referenced by 'refs/tags/1.1.1-rc0'
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 737bf43bb4e78d2d7a0ee53c27527b479972ebf8
error: Could not read 1ed1daefbc2d72e9b501b94d8c99e874b89f1137
remote: Counting objects: 5853, done.
[git object compression/transfer progress output omitted]

Re: Discussion: New components in JIRA?

2018-07-22 Thread Guozhang Wang
Hello Ray,

Thanks for bringing this up. I'm generally +1 on the first two, while for
the last category I personally feel leaving them as part of `tools` is
fine, but I'm also open to other opinions.

A more general question, though: today we do not have any guidelines
asking JIRA reporters to set the right component, i.e. it is purely
best-effort, and we cannot prevent reporters from adding new component
names. So far the project also does not have a tradition of managing
JIRA reports per component, as the goal is not to "separate" the project
into silos but to encourage everyone to get hands-on with every aspect of
the project.


Guozhang


On Fri, Jul 20, 2018 at 2:44 PM, Ray Chiang  wrote:

> I've been doing a little bit of component cleanup in JIRA.  What do people
> think of adding
> one or more of the following components?
>
> - logging: For any consumer/producer/broker logging (i.e. log4j). This
> should help disambiguate from the "log" component (i.e. Kafka messages).
>
> - mirrormaker: There are enough requests specific to MirrorMaker that it
> could be put into its own component.
>
> - scripts: I'm a little more ambivalent about this one, but any of the
> bin/*.sh script fixes could belong in their own category.  I'm not sure if
> other people feel strongly for how the "tools" component should be used
> w.r.t. the run scripts.
>
> Any thoughts?
>
> -Ray
>
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk8 #2821

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
remote: Counting objects: 5738, done.
[git object compression/transfer progress output omitted]

Build failed in Jenkins: kafka-trunk-jdk10 #307

2018-07-22 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0f3affc0f40751dc8fd064b36b6e859728f63e37
error: Could not read 95fbb2e03f4fe79737c71632e0ef2dfdcfb85a69
error: Could not read 08c465028d057ac23cdfe6d57641fe40240359dd
remote: Counting objects: 5738, done.
remote: Compressing objects: 100% (20/20), done.
Receiving objects:  46% (2640/5738) [remainder of fetch progress output truncated]

Re: [Discuss] KIP-321: Add method to get TopicNameExtractor in TopologyDescription

2018-07-22 Thread Guozhang Wang
I think I can be convinced to deprecate topics() to keep the API minimal.

About renaming the others with `XXNames()`: to me it still does not feel very
worthwhile; although it is not a big burden, it also seems not a big
"return" if we name the newly added function `topicSet()`.


Guozhang
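
[Editor's note: a minimal, hypothetical sketch of the approach under discussion
(interface and method names assumed for illustration, not the actual Kafka
TopologyDescription interfaces): keep the existing String-returning method,
deprecated, and add a Set-returning one, so callers who still want a String can
use topicSet().toString().]

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class TopicApiSketch {

    // Hypothetical stand-in for TopologyDescription.Source
    interface Source {
        @Deprecated
        String topics();          // legacy: formatted string representation

        Set<String> topicSet();   // new API: the topic names themselves
    }

    // Helper to build a Source over a fixed set of topic names
    static Source sourceOf(String... names) {
        final Set<String> set = new LinkedHashSet<>(Arrays.asList(names));
        return new Source() {
            @Override public String topics() { return set.toString(); }
            @Override public Set<String> topicSet() { return set; }
        };
    }

    public static void main(String[] args) {
        Source s = sourceOf("topic1", "topic2");
        System.out.println(s.topicSet().contains("topic1")); // prints: true
    }
}
```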


On Fri, Jul 20, 2018 at 7:38 PM, Nishanth Pradeep 
wrote:

> I definitely agree with you on deprecating topics().
>
> I also think changing the method names for consistency is reasonable, since
> there is no functionality change. Although, I can be convinced either way
> on this one.
>
> Best,
> Nishanth Pradeep
> On Fri, Jul 20, 2018 at 12:15 PM Matthias J. Sax 
> wrote:
>
> > I would still deprecate existing `topics()` method. If users need a
> > String, they can call `topicSet().toString()`.
> >
> > It's just a personal preference, because I believe it's good to keep the
> > API "minimal".
> >
> > About renaming the other methods: I thinks it's a very small burden to
> > deprecate the existing methods and add them with new names. Also just my
> > 2 cents.
> >
> > Would be good to see what others think.
> >
> >
> > -Matthias
> >
> > On 7/19/18 6:20 PM, Nishanth Pradeep wrote:
> > > Understood, Guozhang.
> > >
> > > Thanks for the help, everyone! I have updated the KIP. Let me know if
> you
> > > any other thoughts or suggestions.
> > >
> > > Best,
> > > Nishanth Pradeep
> > >
> > > On Thu, Jul 19, 2018 at 7:33 PM Guozhang Wang 
> > wrote:
> > >
> > >> I see.
> > >>
> > >> Well, I think if we add a new function like topicSet() it is less
> > needed to
> > >> deprecate topics() as it returns "{topic1, topic2, ..}" which is sort
> of
> > >> non-overlapping in usage with the new API.
> > >>
> > >>
> > >> Guozhang
> > >>
> > >> On Thu, Jul 19, 2018 at 5:31 PM, Nishanth Pradeep <
> > nishanth...@gmail.com>
> > >> wrote:
> > >>
> > >>> That is what I meant. I will add topicSet() instead of changing the
> > >>> signature of topics() for compatibility reasons. But should we not
> add
> > a
> > >>> @deprecated flag for topics() or do you want to keep it around for
> the
> > >> long
> > >>> run?
> > >>>
> > >>> On Thu, Jul 19, 2018 at 7:27 PM Guozhang Wang 
> > >> wrote:
> > >>>
> >  We cannot change the signature of the function named "topics" from
> > >>> "String"
> >  to "Set<String>", as Matthias mentioned it is a compatibility
> breaking
> >  change.
> > 
> >  That's why I was proposing to add a new function like "Set<String>
> >  topicSet()", while keeping "String topics()" as is.
> > 
> >  Guozhang
> > 
> >  On Thu, Jul 19, 2018 at 5:22 PM, Nishanth Pradeep <
> > >> nishanth...@gmail.com
> > 
> >  wrote:
> > 
> > > Right, adding topicNames() instead of changing the return type of
> >  topics()
> > > in order to preserve backwards compatibility is a good idea. But is it
> > >> not
> > > better to deprecate topics() because it would be redundant? In our
> > >>> case,
> > > it would only be calling topicNames/topicSet#toString().
> > >
> > > I still agree that perhaps changing the other API's might be
> > >>> unnecessary
> > > since it's only a name change.
> > >
> > > I have made the change to the KIP to only add, not change,
> > >> preexisting
> > > APIs. But where do we stand on deprecating topics()?
> > >
> > > Best,
> > > Nishanth Pradeep
> > >
> > > On Thu, Jul 19, 2018 at 1:44 PM Guozhang Wang 
> >  wrote:
> > >
> > >> Personally I'd prefer to keep the deprecation-related changes as
> > >>> small
> >  as
> > >> possible unless they are really necessary, and hence I'd prefer to
> > >>> just
> > > add
> > >>
> > >> List<String> topicList()  /* or Set<String> topicSet() */
> > >>
> > >> in addition to topicPattern to Source, in addition to
> > > `topicNameExtractor`
> > >> to Sink, and leaving the current APIs as-is.
> > >>
> > >> Guozhang
> > >>
> > >> On Thu, Jul 19, 2018 at 10:36 AM, Matthias J. Sax <
> >  matth...@confluent.io
> > >>
> > >> wrote:
> > >>
> > >>> Thanks for updating the KIP.
> > >>>
> > >>> The current `Source` interface has a method `String topics()`
> > >> atm.
> > > Thus,
> > >>> we cannot just add `Set<String> Source#topics()` because this
> > >> would
> > >>> replace the existing method and would be an incompatible change.
> > >>>
> > >>> I think, we should deprecate `String topics()` and add a method
> > >>> with
> > >>> different name:
> > >>>
> > >>> `Set<String> Source#topicNames()`
> > >>>
> > >>> The method name `topicNames` is more appropriate anyway, as we
> >  return a
> > >>> set of String (ie, names) but no `Topic` objects. This raises one
> >  more
> > >>> thought: we might want to rename `Processor#stores()` to
> > >>> `Processor#storeNames()` as well as `Sink#topic()` to
> > >>> `Sink#topicName()`, too. This would keep the naming in the API
> > >> consistent

Re: [DISCUSS] KIP-342 Add Customizable SASL extensions to OAuthBearer authentication

2018-07-22 Thread Ron Dagostino
Hi Rajini.  The SaslServer is going to have to validate the extensions, too, 
but I’m okay with keeping the validation logic elsewhere as long as it can be 
reused in both the client and the server.

I strongly prefer exposing a map() method as opposed to extensionNames() and 
extensionValue(String) methods. It is a smaller API (1 method instead of 2), 
and it gives clients of the API full map-related functionality (there’s a lot 
of support for dealing with maps in a variety of ways).

Regardless of whether we go with a map() method or extensionNames() and 
extensionValue(String) methods, the semantics of mutability need to be clear.  
I think either way we should never share a map that anyone else could possibly 
mutate — either a map that someone gives us or a map that we might expose.
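
[Editor's note: a minimal sketch of the "never share a mutable map" rule Ron
describes, under the assumption that SaslExtensions stays a simple class with a
map() accessor: copy the caller's map on construction, expose only an
unmodifiable view. The class name is illustrative, not the final API.]

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SaslExtensionsSketch {
    private final Map<String, String> extensionsMap;

    public SaslExtensionsSketch(Map<String, String> extensionsMap) {
        // Defensive copy first, then wrap: later mutation of the caller's
        // map cannot affect this instance, and callers of map() cannot
        // mutate our state either.
        this.extensionsMap = Collections.unmodifiableMap(new HashMap<>(extensionsMap));
    }

    public Map<String, String> map() {
        return extensionsMap; // unmodifiable view over a private copy: safe to share
    }
}
```

With this shape, map().put(...) throws UnsupportedOperationException, and
mutating the map originally passed to the constructor has no effect.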

Thoughts?

Ron

> On Jul 22, 2018, at 11:23 AM, Rajini Sivaram  wrote:
> 
> Hmm I think we need a much simpler SaslExtensions class if we are
> making it part of the public API.
> 
> 1. I don't see the point of including separator anywhere in SaslExtensions.
> Extensions provide a map and we propagate the map from client to server
> using the protocol associated with the mechanism in use. The separator is
> not configurable and should not be a concern of the implementor of
> SaslExtensionsCallback interface that provides an instance of SaslExtensions.
> 
> 2. I agree with Ron that we need mechanism-specific validation of the
> values from SaslExtensions. But I think we could do the validation in the
> appropriate `SaslClient` implementation of that mechanism.
> 
> I think we could just have a very simple extensions class and move
> everything else to appropriate internal classes of the mechanisms using
> extensions. What do you think?
> 
> public class SaslExtensions {
>     private final Map<String, String> extensionMap;
> 
>     public SaslExtensions(Map<String, String> extensionMap) {
>         this.extensionMap = extensionMap;
>     }
> 
>     public String extensionValue(String name) {
>         return extensionMap.get(name);
>     }
> 
>     public Set<String> extensionNames() {
>         return extensionMap.keySet();
>     }
> }
> 
> 
> 
>> On Sat, Jul 21, 2018 at 9:01 PM, Ron Dagostino  wrote:
>> 
>> Hi Stanislav and Rajini.  If SaslExtensions is going to be part of the public
>> API, then it occurred to me that one of the requirements of all SASL
>> extensions is that the keys and values need to match mechanism-specific
>> regular expressions.  For example, RFC 5802 (
>> https://tools.ietf.org/html/rfc5802) specifies the regular expressions for
>> the SCRAM-specific SASL mechanisms, and RFC 7628 (
>> https://tools.ietf.org/html/rfc7628) specifies different regular
>> expressions for the OAUTHBEARER SASL mechanism.  I am thinking the
>> SaslExtensions class should probably provide a way to make sure the keys
>> and values match the appropriate regular expressions.  What do you think of
>> something along the lines of the below definition for the SaslExtensions
>> class?  It is missing Javadoc and toString()/hashCode()/equals() methods,
>> of course, but aside from that, do you think this is sufficient and
>> appropriate?
>> 
>> Ron
>> 
>> public class SaslExtensions {
>>     private final Map<String, String> extensionsMap;
>> 
>>     public SaslExtensions(String mapStr, String keyValueSeparator, String elementSeparator,
>>             Pattern saslNameRegexPattern, Pattern saslValueRegexPattern) {
>>         this(Utils.parseMap(mapStr, keyValueSeparator, elementSeparator),
>>             saslNameRegexPattern, saslValueRegexPattern);
>>     }
>> 
>>     public SaslExtensions(Map<String, String> extensionsMap, Pattern saslNameRegexPattern,
>>             Pattern saslValueRegexPattern) {
>>         Map<String, String> sanitizedCopy = new HashMap<>(extensionsMap.size());
>>         for (Entry<String, String> entry : extensionsMap.entrySet()) {
>>             if (!saslNameRegexPattern.matcher(entry.getKey()).matches()
>>                     || !saslValueRegexPattern.matcher(entry.getValue()).matches())
>>                 throw new IllegalArgumentException("Invalid key or value");
>>             sanitizedCopy.put(entry.getKey(), entry.getValue());
>>         }
>>         this.extensionsMap = Collections.unmodifiableMap(sanitizedCopy);
>>     }
>> 
>>     public Map<String, String> map() {
>>         return extensionsMap;
>>     }
>> }
>> 
>> On Fri, Jul 20, 2018 at 12:49 PM Stanislav Kozlovski <
>> stanis...@confluent.io>
>> wrote:
>> 
>>> Hi Ron,
>>> 
>>> I saw that and decided that would be the best approach. The current
>>> ScramExtensions implementation uses a Map in the public credentials and I
>>> thought I would follow convention rather than introduce my own thing, but
>>> maybe this is best
>>> 
 On Fri, Jul 20, 2018 at 8:39 AM Ron Dagostino  wrote:
 
 Hi Stanislav.  I'm wondering if we should make SaslExtensions part of
>> the
 public API.  I mentioned this in my review of the PR, too (and tagged
 Rajini to get her input).  If we add a Map to the Subject's public
 credentials we are basically making a public commitment that a

Re: [DISCUSS] KIP-342 Add Customizable SASL extensions to OAuthBearer authentication

2018-07-22 Thread Rajini Sivaram
Hmm I think we need a much simpler SaslExtensions class if we are
making it part of the public API.

1. I don't see the point of including separator anywhere in SaslExtensions.
Extensions provide a map and we propagate the map from client to server
using the protocol associated with the mechanism in use. The separator is
not configurable and should not be a concern of the implementor of
SaslExtensionsCallback interface that provides an instance of SaslExtensions.

2. I agree with Ron that we need mechanism-specific validation of the
values from SaslExtensions. But I think we could do the validation in the
appropriate `SaslClient` implementation of that mechanism.

I think we could just have a very simple extensions class and move
everything else to appropriate internal classes of the mechanisms using
extensions. What do you think?

public class SaslExtensions {
    private final Map<String, String> extensionMap;

    public SaslExtensions(Map<String, String> extensionMap) {
        this.extensionMap = extensionMap;
    }

    public String extensionValue(String name) {
        return extensionMap.get(name);
    }

    public Set<String> extensionNames() {
        return extensionMap.keySet();
    }
}
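
[Editor's note: a standalone sketch of the mechanism-specific validation from
point 2, as it might live inside a SaslClient implementation. The patterns are
assumptions loosely modeled on RFC 7628's key/value grammar (key = 1*(ALPHA);
value = printable characters plus SP/HTAB/CR/LF); they are not Kafka's actual
constants, and the class name is hypothetical.]

```java
import java.util.Map;
import java.util.regex.Pattern;

public class ExtensionValidationSketch {
    // Assumed OAUTHBEARER-style grammars; a SCRAM mechanism would plug in
    // the RFC 5802 patterns here instead.
    private static final Pattern KEY = Pattern.compile("[A-Za-z]+");
    private static final Pattern VALUE = Pattern.compile("[\\x21-\\x7E \\t\\r\\n]+");

    public static void validate(Map<String, String> extensions) {
        for (Map.Entry<String, String> e : extensions.entrySet()) {
            if (!KEY.matcher(e.getKey()).matches()
                    || !VALUE.matcher(e.getValue()).matches())
                throw new IllegalArgumentException("Invalid extension: " + e.getKey());
        }
    }
}
```

Keeping the patterns out of SaslExtensions itself, as suggested above, lets
each mechanism's SaslClient own its grammar while the extensions class stays a
plain immutable container.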



On Sat, Jul 21, 2018 at 9:01 PM, Ron Dagostino  wrote:

> Hi Stanislav and Rajini.  If SaslExtensions is going to be part of the public
> API, then it occurred to me that one of the requirements of all SASL
> extensions is that the keys and values need to match mechanism-specific
> regular expressions.  For example, RFC 5802 (
> https://tools.ietf.org/html/rfc5802) specifies the regular expressions for
> the SCRAM-specific SASL mechanisms, and RFC 7628 (
> https://tools.ietf.org/html/rfc7628) specifies different regular
> expressions for the OAUTHBEARER SASL mechanism.  I am thinking the
> SaslExtensions class should probably provide a way to make sure the keys
> and values match the appropriate regular expressions.  What do you think of
> something along the lines of the below definition for the SaslExtensions
> class?  It is missing Javadoc and toString()/hashCode()/equals() methods,
> of course, but aside from that, do you think this is sufficient and
> appropriate?
>
> Ron
>
> public class SaslExtensions {
>     private final Map<String, String> extensionsMap;
>
>     public SaslExtensions(String mapStr, String keyValueSeparator, String elementSeparator,
>             Pattern saslNameRegexPattern, Pattern saslValueRegexPattern) {
>         this(Utils.parseMap(mapStr, keyValueSeparator, elementSeparator),
>             saslNameRegexPattern, saslValueRegexPattern);
>     }
>
>     public SaslExtensions(Map<String, String> extensionsMap, Pattern saslNameRegexPattern,
>             Pattern saslValueRegexPattern) {
>         Map<String, String> sanitizedCopy = new HashMap<>(extensionsMap.size());
>         for (Entry<String, String> entry : extensionsMap.entrySet()) {
>             if (!saslNameRegexPattern.matcher(entry.getKey()).matches()
>                     || !saslValueRegexPattern.matcher(entry.getValue()).matches())
>                 throw new IllegalArgumentException("Invalid key or value");
>             sanitizedCopy.put(entry.getKey(), entry.getValue());
>         }
>         this.extensionsMap = Collections.unmodifiableMap(sanitizedCopy);
>     }
>
>     public Map<String, String> map() {
>         return extensionsMap;
>     }
> }
>
> On Fri, Jul 20, 2018 at 12:49 PM Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > Hi Ron,
> >
> > I saw that and decided that would be the best approach. The current
> > ScramExtensions implementation uses a Map in the public credentials and I
> > thought I would follow convention rather than introduce my own thing, but
> > maybe this is best
> >
> > On Fri, Jul 20, 2018 at 8:39 AM Ron Dagostino  wrote:
> >
> > > Hi Stanislav.  I'm wondering if we should make SaslExtensions part of
> the
> > > public API.  I mentioned this in my review of the PR, too (and tagged
> > > Rajini to get her input).  If we add a Map to the Subject's public
> > > credentials we are basically making a public commitment that any Map
> > > associated with the public credentials defines the SASL extensions and
> we
> > > can never add another instance implementing Map to the public
> > credentials.
> > > That's a very big constraint we are committing to, and I'm wondering if
> > we
> > > should make SaslExtensions public and attach an instance of that to the
> > > Subject's public credentials instead.
> > >
> > > Ron
> > >
> > > On Thu, Jul 19, 2018 at 8:15 PM Stanislav Kozlovski <
> > > stanis...@confluent.io>
> > > wrote:
> > >
> > > > I have updated the PR and KIP to address the comments made so far.
> > Please
> > > > take another look at them and share your thoughts.
> > > > KIP:
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 342%3A+Add+support+for+Custom+SASL+extensions+in+
> OAuthBearer+authentication
> > > > PR: Pull request 
> > > >
> > > > Best,
> > > > Stanislav
> > > >
> > > > On Thu, Jul 19, 2018 at 1:58 

Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-07-22 Thread Becket Qin
Hi Jun,

The usage of correlation ID might still be useful to address cases where the 
controller epoch and leader epoch checks are not sufficient to guarantee correct 
behavior. For example, if the controller sends a LeaderAndIsrRequest followed 
by a StopReplicaRequest, and the broker processes them in the reverse order, the 
replica may still be wrongly recreated, right?

Thanks,

Jiangjie (Becket) Qin
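
[Editor's note: a hypothetical sketch of the broker-side check discussed in
this thread — within a controller epoch, drop any controller request whose
correlation id is not larger than the last one processed. Class and method
names are assumptions for illustration, not Kafka code.]

```java
import java.util.HashMap;
import java.util.Map;

public class ControllerRequestOrderingSketch {
    // last processed correlation id, tracked per controller epoch
    private final Map<Integer, Integer> lastCorrelationIdByEpoch = new HashMap<>();

    /** Returns true if the request should be processed, false if dropped as obsolete. */
    public synchronized boolean shouldProcess(int controllerEpoch, int correlationId) {
        Integer last = lastCorrelationIdByEpoch.get(controllerEpoch);
        if (last != null && correlationId <= last)
            return false; // obsolete: a newer request in this epoch was already handled
        lastCorrelationIdByEpoch.put(controllerEpoch, correlationId);
        return true;
    }
}
```

This relies on the correlation id being monotonically increasing within one
NetworkClient, per the guarantee discussed below (R1_a < R1_b < R2).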

> On Jul 22, 2018, at 11:47 AM, Jun Rao  wrote:
> 
> Hmm, since we already use controller epoch and leader epoch for properly
> caching the latest partition state, do we really need correlation id for
> ordering the controller requests?
> 
> Thanks,
> 
> Jun
> 
> On Fri, Jul 20, 2018 at 2:18 PM, Becket Qin  wrote:
> 
>> Lucas and Mayuresh,
>> 
>> Good idea. The correlation id should work.
>> 
>> In the ControllerChannelManager, a request will be resent until a response
>> is received. So if the controller to broker connection disconnects after
>> controller sends R1_a, but before the response of R1_a is received, a
>> disconnection may cause the controller to resend R1_b. i.e. until R1 is
>> acked, R2 won't be sent by the controller.
>> This gives two guarantees:
>> 1. Correlation id wise: R1_a < R1_b < R2.
>> 2. On the broker side, when R2 is seen, R1 must have been processed at
>> least once.
>> 
>> So on the broker side, with a single thread controller request handler, the
>> logic should be:
>> 1. Process what ever request seen in the controller request queue
>> 2. For the given epoch, drop request if its correlation id is smaller than
>> that of the last processed request.
>> 
>> Thanks,
>> 
>> Jiangjie (Becket) Qin
>> 
>> On Fri, Jul 20, 2018 at 8:07 AM, Jun Rao  wrote:
>> 
>>> I agree that there is no strong ordering when there are more than one
>>> socket connections. Currently, we rely on controllerEpoch and leaderEpoch
>>> to ensure that the receiving broker picks up the latest state for each
>>> partition.
>>> 
>>> One potential issue with the dequeue approach is that if the queue is
>> full,
>>> there is no guarantee that the controller requests will be enqueued
>>> quickly.
>>> 
>>> Thanks,
>>> 
>>> Jun
>>> 
>>> On Fri, Jul 20, 2018 at 5:25 AM, Mayuresh Gharat <
>>> gharatmayures...@gmail.com
 wrote:
>>> 
 Yea, the correlationId is only set to 0 in the NetworkClient
>> constructor.
 Since we reuse the same NetworkClient between Controller and the
>> broker,
>>> a
 disconnection should not cause it to reset to 0, in which case it can
>> be
 used to reject obsolete requests.
 
 Thanks,
 
 Mayuresh
 
 On Thu, Jul 19, 2018 at 1:52 PM Lucas Wang 
>>> wrote:
 
> @Dong,
> Great example and explanation, thanks!
> 
> @All
> Regarding the example given by Dong, it seems even if we use a queue,
 and a
> dedicated controller request handling thread,
> the same result can still happen because R1_a will be sent on one
> connection, and R1_b & R2 will be sent on a different connection,
> and there is no ordering between different connections on the broker
 side.
> I was discussing with Mayuresh offline, and it seems correlation id
 within
> the same NetworkClient object is monotonically increasing and never
 reset,
> hence a broker can leverage that to properly reject obsolete
>> requests.
> Thoughts?
> 
> Thanks,
> Lucas
> 
> On Thu, Jul 19, 2018 at 12:11 PM, Mayuresh Gharat <
> gharatmayures...@gmail.com> wrote:
> 
>> Actually nvm, correlationId is reset in case of connection loss, I
 think.
>> 
>> Thanks,
>> 
>> Mayuresh
>> 
>> On Thu, Jul 19, 2018 at 11:11 AM Mayuresh Gharat <
>> gharatmayures...@gmail.com>
>> wrote:
>> 
>>> I agree with Dong that out-of-order processing can happen with
 having 2
>>> separate queues as well and it can even happen today.
>>> Can we use the correlationId in the request from the controller
>> to
 the
>>> broker to handle ordering ?
>>> 
>>> Thanks,
>>> 
>>> Mayuresh
>>> 
>>> 
> >>> On Thu, Jul 19, 2018 at 6:41 AM Becket Qin wrote:
>>> 
 Good point, Joel. I agree that a dedicated controller request
 handling
 thread would be a better isolation. It also solves the
>> reordering
> issue.
 
 On Thu, Jul 19, 2018 at 2:23 PM, Joel Koshy <
>> jjkosh...@gmail.com>
>> wrote:
 
> Good example. I think this scenario can occur in the current
>>> code
 as
 well
> but with even lower probability given that there are other
 non-controller
> requests interleaved. It is still sketchy though and I think a
 safer
> approach would be separate queues and pinning controller
>> request
 handling
> to one handler thread.
> 
> On Wed, Jul 18, 2018 at 11:12 PM, Dong Lin <
>> lindon...@gmail.com
 
>> wrote