Re: Experimenting with Kafka and OpenSSL

2017-11-10 Thread Jaikiran Pai
I ran these same tests with Java 9 runtime today and have updated the 
blog to include these numbers[1]. I'm pasting the summary for Java 9 here:


- Both for the producer and the consumer, there's a *drastic improvement in the
JRE-shipped SSLEngine numbers, in almost all metrics, in Java 9 as
compared to its counterpart in Java 8*. It's especially prominent for
larger message sizes.


- There's not much difference in the numbers for WildFly OpenSSL, in 
Java 9, as compared to its Java 8 counterpart. In fact, the consumer 
performance numbers of WildFly OpenSSL in Java 9 have dropped slightly 
when compared to Java 8. The producer performance in Java 9 with WildFly
OpenSSL has, however, improved slightly when compared to Java 8.


- When the producer and consumer metrics of WildFly OpenSSL
with the Java 9 runtime are compared with the JRE-shipped SSL engine in Java
9, *WildFly OpenSSL still outperforms the one shipped in the JRE*.


As in the Java 8 runs, all default configs and settings were used, not
just for Kafka but also for the JRE (i.e. no explicit choice of cipher suites).


[1] https://jaitechwriteups.blogspot.com/2017/10/kafka-with-openssl.html

-Jaikiran
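
As a minimal sketch (not from the benchmark itself), here is how one could
explicitly pin an accelerated cipher suite on a client instead of relying on
the JRE defaults; the broker address, truststore path and password below are
placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SslCipherSuiteExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9093");                         // placeholder
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/path/to/client.truststore.jks"); // placeholder
            props.put("ssl.truststore.password", "changeit");                       // placeholder
            // Restrict negotiation to an AES-GCM suite (see Radu's note below):
            props.put("ssl.cipher.suites", "TLS_RSA_WITH_AES_128_GCM_SHA256");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // produce records as usual
            }
        }
    }
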
On 30/10/17 5:33 PM, Ismael Juma wrote:

If Java 9 is used by both clients and brokers, AES GCM is used by default.
I did a quick test a while back and there was a significant improvement:

https://twitter.com/ijuma/status/905847523897724929

Ismael

On Mon, Oct 30, 2017 at 11:57 AM, Radu Radutiu  wrote:


If you test with Java 9 please make sure to use an accelerated cipher suite
(e.g.  one that uses AES GCM such as TLS_RSA_WITH_AES_128_GCM_SHA256).

Radu

On Mon, Oct 30, 2017 at 1:49 PM, Jaikiran Pai 
wrote:


I haven't yet had a chance to try out Java 9, but that's definitely on my
TODO list, maybe sometime this weekend.

Thanks for pointing me to KAFKA-2561. I had missed that.

-Jaikiran



On 30/10/17 4:17 PM, Mickael Maison wrote:


Thanks for sharing, very interesting read.

Did you get a chance to try JDK 9 ?

We also considered using OpenSSL instead of JSSE, especially since
Netty made an easy-to-reuse package (netty-tcnative).

There was KAFKA-2561
(https://issues.apache.org/jira/browse/KAFKA-2561) where people shared
a few numbers and what would be needed to get it working.

On Mon, Oct 30, 2017 at 8:08 AM, Jaikiran Pai 

Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-10 Thread Jan Filipiak

Hi,

I think this is the better way. Naming is always tricky; Source is kinda
taken.

I had TopicBackedK[Source|Table] in mind,
but for the user it's way better already IMHO.

Thank you for reconsideration

Best Jan


On 10.11.2017 22:48, Matthias J. Sax wrote:

I was thinking about the source stream/table idea once more and it seems
it would not be too hard to implement:

We add two new classes

   SourceKStream<K, V> extends KStream<K, V>

and

   SourceKTable<K, V> extends KTable<K, V>

and return both from StreamsBuilder#stream and StreamsBuilder#table

As both are sub-classes, this change is backward compatible. We change
the return type of any single-record transform to these new types, too,
and use KStream/KTable as the return type for any multi-record operation.

The new RecordContext API is added to both new classes. For old classes,
we only implement KIP-149 to get access to the key.


WDYT?


-Matthias
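
For illustration, a rough Java sketch of the shape described above; the names
SourceKStream/SourceKTable come from the email, but these types do not exist
and everything else here is an assumption:

    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    // Sketch only: the sub-types returned by StreamsBuilder#stream/#table (and by
    // single-record transforms on them) would expose the record context, while
    // multi-record operations fall back to plain KStream/KTable, which would not.
    interface SourceKStream<K, V> extends KStream<K, V> {
        // a RecordContext accessor could live here once the KIP-159 API is settled
    }

    interface SourceKTable<K, V> extends KTable<K, V> {
    }
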

On 11/9/17 9:13 PM, Jan Filipiak wrote:

Okay,

looks like it would _at least work_ for Cached KTableSources.
But we make it harder for the user to avoid mistakes by putting
features into places where they don't make sense and don't
help anyone.

I once again think that my suggestion is easier to implement and
more correct. I will use this email to express my disagreement with the
proposed KIP (-1, non-binding of course) and state that I am open to any
questions regarding this. I will also do the usual thing and point out that
the friends over at Hive got it correct as well.
One cannot use their
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VirtualColumns

in any place where it's not read from the Sources.

With KSQL in mind, it makes me sad how this is evolving here.

Best Jan





On 10.11.2017 01:06, Guozhang Wang wrote:

Hello Jan,

Regarding your question about caching: today we already keep the record context
with the cached entry, so when we flush the cache (which may generate
new records to forward) we will set the record context appropriately; and
then after the flush is completed we will reset the context to the record
from before the flush. But I think when Jeyhun does the PR it would be a good
time to double-check such stages to make sure we are not
introducing any regressions.


Guozhang


On Mon, Nov 6, 2017 at 8:54 PM, Jan Filipiak 
wrote:


I agree completely.

Exposing this information in a place where it has no _natural_ belonging
might really be a bad blocker in the long run.

Concerning your first point: I would argue it's not too hard to have a
user keep track of these. If we still don't want the user
to keep track of these, I would argue that all > projection only <
transformations on a Source-backed KTable/KStream
could also return a KTable/KStream instance of the type we return
from the topology builder.
Only after any operation that exceeds projection or filter would one
return a KTable not granting access to this any longer.

Even then it's difficult already: I never ran a topology with caching,
but I am not even 100% sure what the record context means behind
a materialized KTable with caching. Topic and partition probably come
with some reasoning, but the offset is probably only the offset causing the flush?
So one might as well think to drop offsets from this RecordContext.

Best Jan







On 07.11.2017 03:18, Guozhang Wang wrote:


Regarding the API design (the proposed set of overloads v.s. one overload
on #map to enrich the record), I think what we have represents a good
trade-off between API succinctness and user convenience: on one hand we
definitely want to keep as few overloaded functions as possible. But on
the other hand, if we only do that in, say, the #map() function then this
enrichment could be overkill: think of a topology that has 7 operators
in a chain, where users want to access the record context on operator #2
and #6 only; with the "enrichment" manner they need to do the enrichment
on operator #2 and keep it that way until #6. In addition, the RecordContext
fields (topic, offset, etc.) are really orthogonal to the key-value payloads
themselves, so I think separating them into this object is a cleaner way.
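
For illustration, a minimal sketch of the "enrichment" workaround described
above, built on the existing transform()/ProcessorContext API (sketched against
a newer Streams API where Transformer has only init/transform/close; the
wrapper type and operator chain are made up). The context is captured once at
operator #2 and then has to be carried through every operator until #6:

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.kstream.Transformer;
    import org.apache.kafka.streams.processor.ProcessorContext;

    // Hypothetical wrapper: the value plus the context fields it has to drag along.
    class Enriched<V> {
        final V value;
        final String topic;
        final long offset;
        Enriched(V value, String topic, long offset) {
            this.value = value;
            this.topic = topic;
            this.offset = offset;
        }
    }

    // Operator #2: capture topic/offset via the ProcessorContext and wrap the value.
    class EnrichTransformer implements Transformer<String, String, KeyValue<String, Enriched<String>>> {
        private ProcessorContext context;

        @Override
        public void init(ProcessorContext context) {
            this.context = context;
        }

        @Override
        public KeyValue<String, Enriched<String>> transform(String key, String value) {
            return KeyValue.pair(key, new Enriched<>(value, context.topic(), context.offset()));
        }

        @Override
        public void close() { }
    }
    // Operators #3..#5 now all have to operate on Enriched<String> values even
    // though only operator #6 actually reads the topic/offset fields.
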

Regarding the RecordContext inheritance, this is actually a good point that
has not been discussed thoroughly before. Here are my two cents: one
natural way would be to inherit the record context from the "triggering"
record; for example, in a join operator, if the record from stream A
triggers the join then the record context is inherited from that
record. This is also aligned with the lower-level PAPI interface. A counter
argument, though, would be that this is sort of leaking the internal
implementations of the DSL, so that moving forward, if we did some
refactoring to our join implementations so that the triggering record can
change, the RecordContext would also be different. I do not know how much
it would really affect end users, but would like to hear your opinions.


Agreed to 100% exposing this information



Guozhang


On Mon, Nov 6, 

Re: [DISCUSS] KIP-224: Add configuration parameters `retries` and `retry.backoff.ms` to Streams API

2017-11-10 Thread Guozhang Wang
For state lock retries, I realized that for both global state stores and
local state stores we are using hard-coded retries today (different values
though: 5 and MAX).

Should we only use retries for global state, and use 0 for local state
stores to fall back to the main loop?


Guozhang
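
For illustration, a generic sketch of the retry-with-backoff pattern under
discussion (not the actual GlobalStateManager or state-lock code); `retries`
and `retryBackoffMs` would come from the "retries" and "retry.backoff.ms"
configs, and attempt() stands in for e.g. a global-store position lookup or a
state-lock acquisition:

    final class RetryLoop {
        interface Attempt {
            void run() throws Exception;
        }

        // retries == 0 means a single attempt, i.e. "fall back to the main loop" immediately
        static void withRetries(int retries, long retryBackoffMs, Attempt attempt) throws Exception {
            Exception last = null;
            for (int i = 0; i <= retries; i++) {
                try {
                    attempt.run();
                    return;
                } catch (Exception e) {          // the real code would only catch TimeoutException
                    last = e;
                    if (i < retries) {
                        Thread.sleep(retryBackoffMs);
                    }
                }
            }
            throw last;                          // give up and let the caller handle it
        }
    }
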


On Fri, Nov 10, 2017 at 1:41 PM, Bill Bejeck  wrote:

> Overall I'd agree with re-using the parameter for state lock retries.
>
> Would there ever be a case where you'd need to have them be different
> values?
>
> On Fri, Nov 10, 2017 at 4:17 PM, Ted Yu  wrote:
>
> > bq. it would be worth to reuse both parameters for those
> >
> > I agree.
> >
> > On Fri, Nov 10, 2017 at 1:14 PM, Matthias J. Sax 
> > wrote:
> >
> > > Thanks for the feedback. Typos fixed.
> > >
> > > Damian explained already why we need the new strategy.
> > >
> > > @Kamal: many users don't want to retry but want to fail the Kafka
> Stream
> > > instance in case of an error. All default parameters are chosen to
> > > follow this pattern (similar to consumer/producer/broker defaults). The
> > > KIP aims to allow users to reconfigure Kafka Streams to be resilient
> > > against errors. It's a users choice to change configs to get better
> > > resilience.
> > >
> > >
> > > Update:
> > >
> > > While I was working on the PR, I realized that parameter
> > > "retry.backoff.ms" is already available in StreamsConfig. I updated
> the
> > > KIP accordingly.
> > >
> > > I also discovered, that we have a hard coded number of retries for
> state
> > > locks -- I think, it would be worth to reuse both parameters for those,
> > > too. WDYT?
> > >
> > > Here is the current PR: https://github.com/apache/kafka/pull/4206
> > >
> > >
> > > -Matthias
> > >
> > >
> > >
> > > On 11/9/17 2:29 PM, Guozhang Wang wrote:
> > > > Damian,
> > > >
> > > > You are right! I was dreaming at the wrong class :)
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Thu, Nov 9, 2017 at 11:27 AM, Damian Guy 
> > > wrote:
> > > >
> > > >> Guozhang, i'm not sure i follow... Global stores aren't per task,
> they
> > > are
> > > >> per application instance and should be fully restored before the
> > stream
> > > >> threads start processing. They don't go through a rebalance as it is
> > > manual
> > > >> assignment of all partitions in the topic.
> > > >>
> > > >> On Thu, 9 Nov 2017 at 17:43 Guozhang Wang 
> wrote:
> > > >>
> > > >>> Instead of restoring the global store during registration, could we
> > > also
> > > >> do
> > > >>> this after the rebalance callback as in the main loop? By doing
> this
> > we
> > > >> can
> > > >>> effectively swallow-and-retry-in-next-loop as we did for non-global
> > > >> stores.
> > > >>> Since global stores are per task not per thread, we would not
> process
> > > the
> > > >>> task after the global store is bootstrapped fully.
> > > >>>
> > > >>>
> > > >>> Guozhang
> > > >>>
> > > >>> On Thu, Nov 9, 2017 at 7:08 AM, Bill Bejeck 
> > wrote:
> > > >>>
> > >  Thanks for the KIP Matthias, +1 from me.
> > > 
> > > 
> > >  -Bill
> > > 
> > >  On Thu, Nov 9, 2017 at 8:40 AM, Ted Yu 
> wrote:
> > > 
> > > > lgtm
> > > >
> > > > bq. pass both parameter
> > > >
> > > > parameter should be in plural.
> > > > Same with 'two new configuration parameter'
> > > >
> > > > Cheers
> > > >
> > > > On Thu, Nov 9, 2017 at 4:20 AM, Damian Guy  >
> > > >>> wrote:
> > > >
> > > >> Thanks Matthias, LGTM
> > > >>
> > > >> On Thu, 9 Nov 2017 at 11:13 Matthias J. Sax <
> > matth...@confluent.io
> > > >>>
> > > > wrote:
> > > >>
> > > >>> Hi,
> > > >>>
> > > >>> I want to propose a new KIP to make Streams API more resilient
> to
> > > > broker
> > > >>> disconnections.
> > > >>>
> > > >>>
> > > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> 224%3A+Add+configuration+parameters+%60retries%60+and+%
> > > >> 60retry.backoff.ms%60+to+Streams+API
> > > >>>
> > > >>>
> > > >>> -Matthias
> > > >>>
> > > >>>
> > > >>
> > > >
> > > 
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> -- Guozhang
> > > >>>
> > > >>
> > > >
> > > >
> > > >
> > >
> > >
> >
>



-- 
-- Guozhang


Re: [VOTE] 0.11.0.2 RC0

2017-11-10 Thread Rajini Sivaram
Resending to include kafka-clients.

On Sat, Nov 11, 2017 at 12:37 AM, Rajini Sivaram 
wrote:

> Hello Kafka users, developers and client-developers,
>
>
> This is the first candidate for release of Apache Kafka 0.11.0.2.
>
>
> This is a bug fix release and it includes fixes and improvements from 16 
> JIRAs,
> including a few critical bugs.
>
>
> Release notes for the 0.11.0.2 release:
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by Wednesday the 15th of November, 8PM
> PT
>
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
>
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/
>
>
> * Maven artifacts to be voted upon:
>
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/javadoc/
>
>
> * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 25639822d6e23803c599cba35ad3dc1a2817b404
>
>
>
> * Documentation:
>
> Note the documentation can't be pushed live due to changes that will not
> go live until the release. You can manually verify by downloading
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/
> kafka_2.11-0.11.0.2-site-docs.tgz
>
>
>
> * Protocol:
>
> http://kafka.apache.org/0110/protocol.html
>
>
> * Successful Jenkins builds for the 0.11.0 branch:
>
> Unit/integration tests: https://builds.apache.
> org/job/kafka-0.11.0-jdk7/333/
>
>
>
>
> Thanks,
>
>
> Rajini
>
>


[VOTE] 0.11.0.2 RC0

2017-11-10 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the first candidate for release of Apache Kafka 0.11.0.2.


This is a bug fix release and it includes fixes and improvements from 16 JIRAs,
including a few critical bugs.


Release notes for the 0.11.0.2 release:

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/RELEASE_NOTES.html


*** Please download, test and vote by Wednesday the 15th of November, 8PM PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/javadoc/


* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.2 tag:

https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=25639822d6e23803c599cba35ad3dc1a2817b404



* Documentation:

Note the documentation can't be pushed live due to changes that will
not go live
until the release. You can manually verify by downloading

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/kafka_2.11-0.11.0.2-site-docs.tgz



* Protocol:

http://kafka.apache.org/0110/protocol.html


* Successful Jenkins builds for the 0.11.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-0.11.0-jdk7/333/




Thanks,


Rajini


Re: [DISCUSS] KIP-147: Add missing type parameters to StateStoreSupplier factories and KGroupedStream/Table methods

2017-11-10 Thread Matthias J. Sax
I marked this KIP as discarded.

On 10/13/17 10:34 AM, Matthias J. Sax wrote:
> Hi,
> 
> with KIP-182 being implemented in 1.0, which will be released shortly, I
> wanted to follow up on this KIP. It seems that we don't need it any
> longer, as KIP-182 resolves those issues.
> 
> If you agree, please update the KIP wiki page accordingly and move the
> KIP to "discarded" table.
> 
> 
> Thanks a lot!
> 
> 
> -Matthias
> 
> On 6/24/17 9:53 AM, Michal Borowiecki wrote:
>> I think the discussion on Streams DSL refactoring will render this KIP
>> obsolete.
>>
>> I'll leave it as under discussion until something is agreed and then
>> move it to discarded.
>>
>> Cheers,
>>
>> Michał
>>
>>
>> On 03/06/17 10:02, Michal Borowiecki wrote:
>>>
>>> I agree maintaining backwards-compatibility here adds a lot of overhead.
>>>
>>> I haven't so far found a way to reconcile these elegantly.
>>>
>>> Whichever way we go it's better to take the pain sooner rather than
>>> later. Kafka 0.11.0.0 (through KAFKA-5045
>>> /KIP-114) increased
>>> the surface affected by the lack of fully type-parametrised suppliers
>>> noticeably.
>>>
>>> Cheers,
>>>
>>> Michał
>>>
>>>
>>> On 03/06/17 09:43, Damian Guy wrote:
 Hmm, i guess this won't work due to adding the additional  to
 the StateStoreSupplier params on reduce, count, aggregate etc.

 On Sat, 3 Jun 2017 at 09:06 Damian Guy > wrote:

 Hi Michal,

 Thanks for the KIP - is there a way we can do this without having
 to introduce the new Typed.. Interfaces, overloaded methods etc?
 Is it possible that we just need to provide a couple of new
 methods on PersistentKeyValueFactory for windowed and
 sessionWindowed to return interfaces like you've introduced in
 TypedStores?
 I admit i haven't looked in much detail if that would work.

 My concern is that this is duplicating a bunch of code and
 increasing the surface area for what is minimal benefit. It is
 one of those cases where i'd love to not have to maintain
 backward compatibility.

 Thanks,
 Damian

 On Fri, 2 Jun 2017 at 08:20 Michal Borowiecki
 > wrote:

 Thanks Matthias,

 I appreciate people are busy now preparing the 0.11 release.

 One thing I would also appreciate input on is perhaps a
 better name for the new TypedStores class, I just picked it
 quickly but don't really like it.

 Perhaps StateStores would make for a better name?

 Cheers,
 Michal


 On 02/06/17 07:18, Matthias J. Sax wrote:
> Thanks for the update Michal.
>
> I did skip over the PR. Looks good to me, as far as I can tell. 
> Maybe
> Damian, Xavier, or Ismael can comment on this. Would be good to 
> get
> confirmation that the change is backward compatible.
>
>
> -Matthias
>
>
> On 5/27/17 11:11 AM, Michal Borowiecki wrote:
>> Hi all,
>>
>> I've updated the KIP to reflect the proposed 
>> backwards-compatible approach:
>>
>> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69408481
>>
>>
>> Given the vast area of APIs affected, I think the PR is easier 
>> to read
>> than the code excerpts in the KIP itself:
>> https://github.com/apache/kafka/pull/2992/files
>>
>> Thanks,
>> Michał
>>
>> On 07/05/17 10:16, Eno Thereska wrote:
>>> I like this KIP in general and I agree it’s needed. Perhaps 
>>> Damian can comment on the session store issue?
>>>
>>> Thanks
>>> Eno
 On May 6, 2017, at 10:32 PM, Michal Borowiecki 
 
  wrote:

 Hi Matthias,

 Agreed. I tried your proposal and indeed it would work.

 However, I think to maintain full backward compatibility we 
 would also need to deprecate Stores.create() and leave it unchanged, 
 while providing a new method that returns the more strongly typed 
 Factories.

 ( This is because PersistentWindowFactory and 
 PersistentSessionFactory cannot extend the existing 
 PersistentKeyValueFactory interface, since their build() methods will 
 be returning TypedStateStoreSupplier> and 
 TypedStateStoreSupplier> respectively, which are 
 NOT 

Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-11-10 Thread Matthias J. Sax
I just reviewed the PR, and there is one thing we should discuss.

There are different types of exceptions that could occur. Some apply to
all records (like an authorization exception) while others are for single
records only (like record too large).

For the first category, it does not seem to make sense to call the handler;
Streams should always fail. If we follow this design, the KIP should
list all exceptions for which the handler is not called.

WDYT?


-Matthias
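
For illustration, a sketch of a handler along the lines discussed in KIP-210
that only continues for per-record errors; the class and enum here are
stand-ins for the KIP's ProductionExceptionHandler shape, not the final API:

    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.RecordTooLargeException;

    public class SkipTooLargeRecordsHandler {

        enum Response { CONTINUE, FAIL }   // stand-in for the KIP's response enum

        public Response handle(ProducerRecord<byte[], byte[]> record, Exception exception) {
            if (exception instanceof RecordTooLargeException) {
                return Response.CONTINUE;  // per-record problem: drop this record and keep going
            }
            return Response.FAIL;          // e.g. authorization errors affect all records: fail fast
        }
    }
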


On 11/10/17 2:56 PM, Matthias J. Sax wrote:
> Just catching up on this KIP.
> 
> One tiny comment: I would prefer to remove the "Always" from the handler
> implementation names -- it sounds "cleaner" to me without it.
> 
> 
> -Matthias
> 
> On 11/5/17 12:57 PM, Matt Farmer wrote:
>> It is agreed, then. I've updated the pull request. I'm trying to also
>> update the KIP accordingly, but cwiki is being slow and dropping
>> connections. I'll try again a bit later but please consider the KIP
>> updated for all intents and purposes. heh.
>>
>> On Sun, Nov 5, 2017 at 3:45 PM Guozhang Wang  wrote:
>>
>>> That makes sense.
>>>
>>>
>>> Guozhang
>>>
>>> On Sun, Nov 5, 2017 at 12:33 PM, Matt Farmer  wrote:
>>>
 Interesting. I'm not sure I agree. I've been bitten many times by
 unintentionally shipping code that fails to properly implement logging. I
 always discover this at the exact *worst* moment, too. (Normally at 3 AM
 during an on-call shift. Hah.) However, if others feel the same way I
>>> could
 probably be convinced to remove it.

 We could also meet halfway and say that when a customized
 ProductionExceptionHandler instructs Streams to CONTINUE, we log at DEBUG
 level instead of WARN level. Would that alternative be appealing to you?

 On Sun, Nov 5, 2017 at 12:32 PM Guozhang Wang 
>>> wrote:

> Thanks for the updates. I made a pass over the wiki again and it looks
> good.
>
> About whether record collector should still internally log the error in
> addition to what the customized ProductionExceptionHandler does. I
> personally would prefer only to log if the returned value is FAIL to
> indicate that this thread is going to shutdown and trigger the
>>> exception
> handler.
>
>
> Guozhang
>
>
> On Sun, Nov 5, 2017 at 6:09 AM, Matt Farmer  wrote:
>
>> Hello, a bit later than I'd anticipated, but I've updated this KIP as
>> outlined above. The updated KIP is now ready for review again!
>>
>> On Sat, Nov 4, 2017 at 1:03 PM Matt Farmer  wrote:
>>
>>> Ah. I actually created both of those in the PR and forgot to
>>> mention
> them
>>> by name in the KIP! Thanks for pointing out the oversight.
>>>
>>> I’ll revise the KIP this afternoon accordingly.
>>>
>>> The logging is actually provided for in the record collector.
 Whenever
> a
>>> handler continues it’ll log a warning to ensure that it’s
 *impossible*
> to
>>> write a handler that totally suppresses production exceptions from
 the
>> log.
>>> As such, the default continue handler just returns the continue
 value.
> I
>>> can add details about those semantics to the KIP as well.
>>> On Sat, Nov 4, 2017 at 12:46 PM Matthias J. Sax <
 matth...@confluent.io
>>
>>> wrote:
>>>
 One more comment.

 You mention a default implementation for the handler that fails. I
 think this should be part of the public API and thus should have a
 properly defined name that is mentioned in the KIP.

 We could also add a second implementation for the log-and-move-on
 strategy, as both are the two most common cases. This class should
> also
 be part of public API (so users can just set in the config) with a
 proper name.


 Otherwise, I like the KIP a lot! Thanks.


 -Matthias


 On 11/1/17 12:23 AM, Matt Farmer wrote:
> Thanks for the heads up. Yes, I think my changes are compatible
 with
 that
> PR, but there will be a merge conflict that happens whenever one
 of
>> the
 PRs
> is merged. Happy to reconcile the changes in my PR if 4148 goes
>>> in
 first. :)
>
> On Tue, Oct 31, 2017 at 6:44 PM Guozhang Wang <
>>> wangg...@gmail.com
>
 wrote:
>
>> That sounds reasonable, thanks Matt.
>>
>> As for the implementation, please note that there is another
> ongoing
>> PR
>> that may touch the same classes that you are working on:
>> https://github.com/apache/kafka/pull/4148
>>
>> So it may help if you can also take a look at that PR and see
>>> if
 it
>> is
>> compatible with your changes.

Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-11-10 Thread Matthias J. Sax
Just catching up on this KIP.

One tiny comment: I would prefer to remove the "Always" from the handler
implementation names -- it sounds "cleaner" to me without it.


-Matthias

On 11/5/17 12:57 PM, Matt Farmer wrote:
> It is agreed, then. I've updated the pull request. I'm trying to also
> update the KIP accordingly, but cwiki is being slow and dropping
> connections. I'll try again a bit later but please consider the KIP
> updated for all intents and purposes. heh.
> 
> On Sun, Nov 5, 2017 at 3:45 PM Guozhang Wang  wrote:
> 
>> That makes sense.
>>
>>
>> Guozhang
>>
>> On Sun, Nov 5, 2017 at 12:33 PM, Matt Farmer  wrote:
>>
>>> Interesting. I'm not sure I agree. I've been bitten many times by
>>> unintentionally shipping code that fails to properly implement logging. I
>>> always discover this at the exact *worst* moment, too. (Normally at 3 AM
>>> during an on-call shift. Hah.) However, if others feel the same way I
>> could
>>> probably be convinced to remove it.
>>>
>>> We could also meet halfway and say that when a customized
>>> ProductionExceptionHandler instructs Streams to CONTINUE, we log at DEBUG
>>> level instead of WARN level. Would that alternative be appealing to you?
>>>
>>> On Sun, Nov 5, 2017 at 12:32 PM Guozhang Wang 
>> wrote:
>>>
 Thanks for the updates. I made a pass over the wiki again and it looks
 good.

 About whether record collector should still internally log the error in
 addition to what the customized ProductionExceptionHandler does. I
 personally would prefer only to log if the returned value is FAIL to
 indicate that this thread is going to shutdown and trigger the
>> exception
 handler.


 Guozhang


 On Sun, Nov 5, 2017 at 6:09 AM, Matt Farmer  wrote:

> Hello, a bit later than I'd anticipated, but I've updated this KIP as
> outlined above. The updated KIP is now ready for review again!
>
> On Sat, Nov 4, 2017 at 1:03 PM Matt Farmer  wrote:
>
>> Ah. I actually created both of those in the PR and forgot to
>> mention
 them
>> by name in the KIP! Thanks for pointing out the oversight.
>>
>> I’ll revise the KIP this afternoon accordingly.
>>
>> The logging is actually provided for in the record collector.
>>> Whenever
 a
>> handler continues it’ll log a warning to ensure that it’s
>>> *impossible*
 to
>> write a handler that totally suppresses production exceptions from
>>> the
> log.
>> As such, the default continue handler just returns the continue
>>> value.
 I
>> can add details about those semantics to the KIP as well.
>> On Sat, Nov 4, 2017 at 12:46 PM Matthias J. Sax <
>>> matth...@confluent.io
>
>> wrote:
>>
>>> One more comment.
>>>
>>> You mention a default implementation for the handler that fails. I
>>> think this should be part of the public API and thus should have a
>>> properly defined name that is mentioned in the KIP.
>>>
>>> We could also add a second implementation for the log-and-move-on
>>> strategy, as both are the two most common cases. This class should
 also
>>> be part of public API (so users can just set in the config) with a
>>> proper name.
>>>
>>>
>>> Otherwise, I like the KIP a lot! Thanks.
>>>
>>>
>>> -Matthias
>>>
>>>
>>> On 11/1/17 12:23 AM, Matt Farmer wrote:
 Thanks for the heads up. Yes, I think my changes are compatible
>>> with
>>> that
 PR, but there will be a merge conflict that happens whenever one
>>> of
> the
>>> PRs
 is merged. Happy to reconcile the changes in my PR if 4148 goes
>> in
>>> first. :)

 On Tue, Oct 31, 2017 at 6:44 PM Guozhang Wang <
>> wangg...@gmail.com

>>> wrote:

> That sounds reasonable, thanks Matt.
>
> As for the implementation, please note that there is another
 ongoing
> PR
> that may touch the same classes that you are working on:
> https://github.com/apache/kafka/pull/4148
>
> So it may help if you can also take a look at that PR and see
>> if
>>> it
> is
> compatible with your changes.
>
>
>
> Guozhang
>
>
> On Tue, Oct 31, 2017 at 10:59 AM, Matt Farmer 
 wrote:
>
>> I've opened this pull request to implement the KIP as
>> currently
>>> written:
>> https://github.com/apache/kafka/pull/4165. It still needs
>> some
> tests
>> added,
>> but largely represents the shape I was going for.
>>
>> If there are more points that folks would like to discuss,
>>> please
> let
>>> me
>> know. If I don't hear anything by tomorrow afternoon I'll
>>> probably
>>> start
> a
>> 

Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Onur Karaman
Hey everyone. Regarding the status of KIP-125, just a heads up: I have an
implementation of KIP-125 (KAFKA-4513) here:
https://github.com/onurkaraman/kafka/commit/3b5448006ab70ba2b0b5e177853d191d0f777452

The code might need to be rebased. The steps described in the KIP are a bit
involved. Other than that, the implementation might have a bug with respect
to converting arbitrary blacklist regexes to whitelist regexes since the
new consumer only accepts whitelists.
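
For illustration, a sketch of the kind of blacklist-to-whitelist conversion
mentioned above, using a negative lookahead; this is exactly the sort of
transformation that is not guaranteed to be correct for arbitrary regexes
(anchoring, partial matches, etc.), hence the possible bug:

    import java.util.regex.Pattern;

    public class BlacklistToWhitelist {
        // Match every topic name EXCEPT those fully matched by the blacklist regex.
        static Pattern toWhitelist(String blacklistRegex) {
            return Pattern.compile("^(?!(?:" + blacklistRegex + ")$).*$");
        }

        public static void main(String[] args) {
            Pattern whitelist = toWhitelist("internal\\..*");
            System.out.println(whitelist.matcher("internal.metrics").matches()); // false
            System.out.println(whitelist.matcher("orders").matches());           // true
        }
    }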

On Fri, Nov 10, 2017 at 11:36 AM, Jeff Widman  wrote:

> Re: migrating offsets for old Scala consumers.
>
> I work in the python world, so haven't directly used the old high level
> consumer, but from what I understand the underlying problem remains the
> migration of zookeeper offsets to the __consumer_offsets topic.
>
> We've used a slightly modified version of Grant Henke's script for
> migrating offsets here: https://github.com/apache/kafka/pull/2615
> It doesn't support rolling upgrades, but other than that it's great... I've
> used it for multiple migrations, and very thankful for the time Grant put
> into it.
>
> I don't know that it's worth pulling this into core, it might be, it might
> not be. But it probably is worth documenting the procedure at least
> somewhere.
>
> Personally, I suspect that those who absolutely need a rolling migration
> and cannot handle a short period of downtime while doing a migration
> probably have in-house experts on Kafka who are familiar with the issues
> and willing to figure out a solution. The rest of the world can generally
> handle a short maintenance window.
>
>
>
>
> On Fri, Nov 10, 2017 at 10:46 AM, Ismael Juma  wrote:
>
> > Hi Gwen,
> >
> > A KIP has been proposed, but it is stalled:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-125%3A+
> > ZookeeperConsumerConnector+to+KafkaConsumer+Migration+and+Rollback
> >
> > Unless the interested parties pick that up, we would drop support
> without a
> > rolling upgrade path. Users would be able to use the old consumers from
> > 1.1.x for a long time. The old Scala clients don't support the message
> > format introduced in 0.11.0, so the feature set is pretty much frozen and
> > there's little benefit in upgrading. But there is a cost in keeping them
> in
> > the codebase.
> >
> > Ismael
> >
> > On Fri, Nov 10, 2017 at 6:02 PM, Gwen Shapira  wrote:
> >
> > > Last time we tried deprecating the Scala consumer, there were concerns
> > > about a lack of upgrade path. There is no rolling upgrade, and
> migrating
> > > offsets is not trivial (and not documented).
> > >
> > > Did anything change in that regard? Or are we planning on dropping
> > support
> > > without an upgrade path?
> > >
> > >
> > > On Fri, Nov 10, 2017 at 5:37 PM Guozhang Wang 
> > wrote:
> > >
> > > > Thanks Ismael, the proposal looks good to me.
> > > >
> > > > A side note regarding: https://issues.apache.org/
> > jira/browse/KAFKA-5637,
> > > > could we resolve this ticket sooner than later to make clear about
> the
> > > code
> > > > deprecation and support duration when moving from 1.0.x to 2.0.x?
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Fri, Nov 10, 2017 at 3:44 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Features for 2.0.0 will be known after 1.1.0 is released in
> February
> > > > 2018.
> > > > > We are still doing the usual time-based release process[1].
> > > > >
> > > > > I am raising this well ahead of time because of the potential
> impact
> > of
> > > > > removing the old Scala clients (particularly the old high-level
> > > consumer)
> > > > > and dropping support for Java 7. Hopefully users can then plan
> > > > accordingly.
> > > > > We would do these changes in trunk soon after 1.1.0 is released
> > (around
> > > > > February).
> > > > >
> > > > > I think it makes sense to complete some of the work that was not
> > ready
> > > in
> > > > > time for 1.0.0 (Controller improvements and JBOD are two that come
> to
> > > > mind)
> > > > > in 1.1.0 (January 2018) and combined with the desire to give
> advance
> > > > > notice, June 2018 was the logical choice.
> > > > >
> > > > > There is no plan to support a particular release for longer. 1.x
> > versus
> > > > 2.x
> > > > > is no different than 0.10.x versus 0.11.x from the perspective of
> > > > > supporting older releases.
> > > > >
> > > > > [1] https://cwiki.apache.org/confluence/display/KAFKA/Time+
> > > > > Based+Release+Plan
> > > > >
> > > > > On Fri, Nov 10, 2017 at 11:21 AM, Jaikiran Pai <
> > > jai.forums2...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi Ismael,
> > > > > >
> > > > > > Are there any new features other than the language specific
> changes
> > > > that
> > > > > > are being planned for 2.0.0? Also, when 2.x gets released, will
> the
> > > 1.x
> > > > > > series see continued bug fixes and releases in the community or
> is
> > > the
> > > > > plan
> > > > > 

Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-10 Thread Matthias J. Sax
I was thinking about the source stream/table idea once more and it seems
it would not be too hard to implement:

We add two new classes

  SourceKStream<K, V> extends KStream<K, V>

and

  SourceKTable<K, V> extends KTable<K, V>

and return both from StreamsBuilder#stream and StreamsBuilder#table

As both are sub-classes, this change is backward compatible. We change
the return type of any single-record transform to these new types, too,
and use KStream/KTable as the return type for any multi-record operation.

The new RecordContext API is added to both new classes. For old classes,
we only implement KIP-149 to get access to the key.


WDYT?


-Matthias

On 11/9/17 9:13 PM, Jan Filipiak wrote:
> Okay,
> 
> looks like it would _at least work_ for Cached KTableSources.
> But we make it harder for the user to avoid mistakes by putting
> features into places where they don't make sense and don't
> help anyone.
> 
> I once again think that my suggestion is easier to implement and
> more correct. I will use this email to express my disagreement with the
> proposed KIP (-1, non-binding of course) and state that I am open to any
> questions regarding this. I will also do the usual thing and point out that
> the friends over at Hive got it correct as well.
> One cannot use their
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VirtualColumns
> 
> in any place where it's not read from the Sources.
> 
> With KSQL in mind, it makes me sad how this is evolving here.
> 
> Best Jan
> 
> 
> 
> 
> 
> On 10.11.2017 01:06, Guozhang Wang wrote:
>> Hello Jan,
>>
>> Regarding your question about caching: today we keep the record context
>> with the cached entry already so when we flush the cache which may
>> generate
>> new records forwarding we will set the record context appropriately; and
>> then after the flush is completed we will reset the context to the record
>> before the flush happens. But I think when Jeyhun did the PR it is a good
>> time to double check on such stages to make sure we are not
>> introducing any
>> regressions.
>>
>>
>> Guozhang
>>
>>
>> On Mon, Nov 6, 2017 at 8:54 PM, Jan Filipiak 
>> wrote:
>>
>>> I agree completely.
>>>
>>> Exposing this information in a place where it has no _natural_ belonging
>>> might really be a bad blocker in the long run.
>>>
>>> Concerning your first point: I would argue it's not too hard to have a
>>> user
>>> keep track of these. If we still don't want the user
>>> to keep track of these I would argue that all > projection only <
>>> transformations on a Source-backed KTable/KStream
>>> could also return a KTable/KStream instance of the type we return
>>> from the
>>> topology builder.
>>> Only after any operation that exceeds projection or filter one would
>>> return a KTable not granting access to this any longer.
>>>
>>> Even then it's difficult already: I never ran a topology with caching,
>>> but I
>>> am not even 100% sure what the record context means behind
>>> a materialized KTable with caching. Topic and partition probably come
>>> with
>>> some reasoning, but the offset is probably only the offset causing the flush?
>>> So one might as well think to drop offsets from this RecordContext.
>>>
>>> Best Jan
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 07.11.2017 03:18, Guozhang Wang wrote:
>>>
 Regarding the API design (the proposed set of overloads v.s. one
 overload
 on #map to enrich the record), I think what we have represents a good
 trade-off between API succinctness and user convenience: on one hand we
 definitely want to keep as few overloaded functions as possible.
 But on
 the other hand if we only do that in, say, the #map() function then
 this
 enrichment could be an overkill: think of a topology that has 7
 operators
 in a chain, where users want to access the record context on
 operator #2
 and #6 only, with the "enrichment" manner they need to do the
 enrichment
 on
 operator #2 and keep it that way until #6. In addition, the
 RecordContext
 fields (topic, offset, etc) are really orthogonal to the key-value
 payloads
 themselves, so I think separating them into this object is a cleaner
 way.

 Regarding the RecordContext inheritance, this is actually a good point
 that
 has not been discussed thoroughly before. Here are my two cents:
 one
 natural way would be to inherit the record context from the
 "triggering"
 record, for example in a join operator, if the record from stream A
 triggers the join then the record context is inherited from that
 record. This is also aligned with the lower-level PAPI interface. A
 counter
 argument, though, would be that this is sort of leaking the internal
 implementations of the DSL, so that moving forward if we did some
 refactoring to our join implementations so that the triggering
 record can
 change, the RecordContext would also be different. I do not know how

Re: [DISCUSS] KIP-224: Add configuration parameters `retries` and `retry.backoff.ms` to Streams API

2017-11-10 Thread Bill Bejeck
Overall I'd agree with re-using the parameter for state lock retries.

Would there ever be a case where you'd need to have them be different
values?

On Fri, Nov 10, 2017 at 4:17 PM, Ted Yu  wrote:

> bq. it would be worth to reuse both parameters for those
>
> I agree.
>
> On Fri, Nov 10, 2017 at 1:14 PM, Matthias J. Sax 
> wrote:
>
> > Thanks for the feedback. Typos fixed.
> >
> > Damian explained already why we need the new strategy.
> >
> > @Kamal: many users don't want to retry but want to fail the Kafka Stream
> > instance in case of an error. All default parameters are chosen to
> > follow this pattern (similar to consumer/producer/broker defaults). The
> > KIP aims to allow users to reconfigure Kafka Streams to be resilient
> > against errors. It's a users choice to change configs to get better
> > resilience.
> >
> >
> > Update:
> >
> > While I was working on the PR, I realized that parameter
> > "retry.backoff.ms" is already available in StreamsConfig. I updated the
> > KIP accordingly.
> >
> > I also discovered, that we have a hard coded number of retries for state
> > locks -- I think, it would be worth to reuse both parameters for those,
> > too. WDYT?
> >
> > Here is the current PR: https://github.com/apache/kafka/pull/4206
> >
> >
> > -Matthias
> >
> >
> >
> > On 11/9/17 2:29 PM, Guozhang Wang wrote:
> > > Damian,
> > >
> > > You are right! I was dreaming at the wrong class :)
> > >
> > > Guozhang
> > >
> > >
> > > On Thu, Nov 9, 2017 at 11:27 AM, Damian Guy 
> > wrote:
> > >
> > >> Guozhang, i'm not sure i follow... Global stores aren't per task, they
> > are
> > >> per application instance and should be fully restored before the
> stream
> > >> threads start processing. They don't go through a rebalance as it is
> > manual
> > >> assignment of all partitions in the topic.
> > >>
> > >> On Thu, 9 Nov 2017 at 17:43 Guozhang Wang  wrote:
> > >>
> > >>> Instead of restoring the global store during registration, could we
> > also
> > >> do
> > >>> this after the rebalance callback as in the main loop? By doing this
> we
> > >> can
> > >>> effectively swallow-and-retry-in-next-loop as we did for non-global
> > >> stores.
> > >>> Since global stores are per task not per thread, we would not process
> > the
> > >>> task after the global store is bootstrapped fully.
> > >>>
> > >>>
> > >>> Guozhang
> > >>>
> > >>> On Thu, Nov 9, 2017 at 7:08 AM, Bill Bejeck 
> wrote:
> > >>>
> >  Thanks for the KIP Matthias, +1 from me.
> > 
> > 
> >  -Bill
> > 
> >  On Thu, Nov 9, 2017 at 8:40 AM, Ted Yu  wrote:
> > 
> > > lgtm
> > >
> > > bq. pass both parameter
> > >
> > > parameter should be in plural.
> > > Same with 'two new configuration parameter'
> > >
> > > Cheers
> > >
> > > On Thu, Nov 9, 2017 at 4:20 AM, Damian Guy 
> > >>> wrote:
> > >
> > >> Thanks Matthias, LGTM
> > >>
> > >> On Thu, 9 Nov 2017 at 11:13 Matthias J. Sax <
> matth...@confluent.io
> > >>>
> > > wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> I want to propose a new KIP to make Streams API more resilient to
> > > broker
> > >>> disconnections.
> > >>>
> > >>>
> > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 224%3A+Add+configuration+parameters+%60retries%60+and+%
> > >> 60retry.backoff.ms%60+to+Streams+API
> > >>>
> > >>>
> > >>> -Matthias
> > >>>
> > >>>
> > >>
> > >
> > 
> > >>>
> > >>>
> > >>>
> > >>> --
> > >>> -- Guozhang
> > >>>
> > >>
> > >
> > >
> > >
> >
> >
>


Re: [DISCUSS] KIP-224: Add configuration parameters `retries` and `retry.backoff.ms` to Streams API

2017-11-10 Thread Matthias J. Sax
One more change: as the parameter is called "retries", the default value should
be zero (instead of one).

-Matthias

On 11/10/17 1:14 PM, Matthias J. Sax wrote:
> Thanks for the feedback. Typos fixed.
> 
> Damian explained already why we need the new strategy.
> 
> @Kamal: many users don't want to retry but want to fail the Kafka Stream
> instance in case of an error. All default parameters are chosen to
> follow this pattern (similar to consumer/producer/broker defaults). The
> KIP aims to allow users to reconfigure Kafka Streams to be resilient
> against errors. It's a users choice to change configs to get better
> resilience.
> 
> 
> Update:
> 
> While I was working on the PR, I realized that parameter
> "retry.backoff.ms" is already available in StreamsConfig. I updated the
> KIP accordingly.
> 
> I also discovered, that we have a hard coded number of retries for state
> locks -- I think, it would be worth to reuse both parameters for those,
> too. WDYT?
> 
> Here is the current PR: https://github.com/apache/kafka/pull/4206
> 
> 
> -Matthias
> 
> 
> 
> On 11/9/17 2:29 PM, Guozhang Wang wrote:
>> Damian,
>>
>> You are right! I was dreaming at the wrong class :)
>>
>> Guozhang
>>
>>
>> On Thu, Nov 9, 2017 at 11:27 AM, Damian Guy  wrote:
>>
>>> Guozhang, i'm not sure i follow... Global stores aren't per task, they are
>>> per application instance and should be fully restored before the stream
>>> threads start processing. They don't go through a rebalance as it is manual
>>> assignment of all partitions in the topic.
>>>
>>> On Thu, 9 Nov 2017 at 17:43 Guozhang Wang  wrote:
>>>
 Instead of restoring the global store during registration, could we also
>>> do
 this after the rebalance callback as in the main loop? By doing this we
>>> can
 effectively swallow-and-retry-in-next-loop as we did for non-global
>>> stores.
 Since global stores are per task not per thread, we would not process the
 task after the global store is bootstrapped fully.


 Guozhang

 On Thu, Nov 9, 2017 at 7:08 AM, Bill Bejeck  wrote:

> Thanks for the KIP Matthias, +1 from me.
>
>
> -Bill
>
> On Thu, Nov 9, 2017 at 8:40 AM, Ted Yu  wrote:
>
>> lgtm
>>
>> bq. pass both parameter
>>
>> parameter should be in plural.
>> Same with 'two new configuration parameter'
>>
>> Cheers
>>
>> On Thu, Nov 9, 2017 at 4:20 AM, Damian Guy 
 wrote:
>>
>>> Thanks Matthias, LGTM
>>>
>>> On Thu, 9 Nov 2017 at 11:13 Matthias J. Sax > wrote:
>>>
 Hi,

 I want to propose a new KIP to make Streams API more resilient to
>> broker
 disconnections.


 https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> 224%3A+Add+configuration+parameters+%60retries%60+and+%
>>> 60retry.backoff.ms%60+to+Streams+API


 -Matthias


>>>
>>
>



 --
 -- Guozhang

>>>
>>
>>
>>
> 





Re: [DISCUSS] KIP-224: Add configuration parameters `retries` and `retry.backoff.ms` to Streams API

2017-11-10 Thread Ted Yu
bq. it would be worth to reuse both parameters for those

I agree.

On Fri, Nov 10, 2017 at 1:14 PM, Matthias J. Sax 
wrote:

> Thanks for the feedback. Typos fixed.
>
> Damian explained already why we need the new strategy.
>
> @Kamal: many users don't want to retry but want to fail the Kafka Stream
> instance in case of an error. All default parameters are chosen to
> follow this pattern (similar to consumer/producer/broker defaults). The
> KIP aims to allow users to reconfigure Kafka Streams to be resilient
> against errors. It's a users choice to change configs to get better
> resilience.
>
>
> Update:
>
> While I was working on the PR, I realized that parameter
> "retry.backoff.ms" is already available in StreamsConfig. I updated the
> KIP accordingly.
>
> I also discovered, that we have a hard coded number of retries for state
> locks -- I think, it would be worth to reuse both parameters for those,
> too. WDYT?
>
> Here is the current PR: https://github.com/apache/kafka/pull/4206
>
>
> -Matthias
>
>
>
> On 11/9/17 2:29 PM, Guozhang Wang wrote:
> > Damian,
> >
> > You are right! I was dreaming at the wrong class :)
> >
> > Guozhang
> >
> >
> > On Thu, Nov 9, 2017 at 11:27 AM, Damian Guy 
> wrote:
> >
> >> Guozhang, i'm not sure i follow... Global stores aren't per task, they
> are
> >> per application instance and should be fully restored before the stream
> >> threads start processing. They don't go through a rebalance as it is
> manual
> >> assignment of all partitions in the topic.
> >>
> >> On Thu, 9 Nov 2017 at 17:43 Guozhang Wang  wrote:
> >>
> >>> Instead of restoring the global store during registration, could we
> also
> >> do
> >>> this after the rebalance callback as in the main loop? By doing this we
> >> can
> >>> effectively swallow-and-retry-in-next-loop as we did for non-global
> >> stores.
> >>> Since global stores are per task not per thread, we would not process
> the
> >>> task after the global store is bootstrapped fully.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>> On Thu, Nov 9, 2017 at 7:08 AM, Bill Bejeck  wrote:
> >>>
>  Thanks for the KIP Matthias, +1 from me.
> 
> 
>  -Bill
> 
>  On Thu, Nov 9, 2017 at 8:40 AM, Ted Yu  wrote:
> 
> > lgtm
> >
> > bq. pass both parameter
> >
> > parameter should be in plural.
> > Same with 'two new configuration parameter'
> >
> > Cheers
> >
> > On Thu, Nov 9, 2017 at 4:20 AM, Damian Guy 
> >>> wrote:
> >
> >> Thanks Matthias, LGTM
> >>
> >> On Thu, 9 Nov 2017 at 11:13 Matthias J. Sax  >>>
> > wrote:
> >>
> >>> Hi,
> >>>
> >>> I want to propose a new KIP to make Streams API more resilient to
> > broker
> >>> disconnections.
> >>>
> >>>
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 224%3A+Add+configuration+parameters+%60retries%60+and+%
> >> 60retry.backoff.ms%60+to+Streams+API
> >>>
> >>>
> >>> -Matthias
> >>>
> >>>
> >>
> >
> 
> >>>
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>>
> >>
> >
> >
> >
>
>


Re: [DISCUSS] KIP-224: Add configuration parameters `retries` and `retry.backoff.ms` to Streams API

2017-11-10 Thread Matthias J. Sax
Thanks for the feedback. Typos fixed.

Damian explained already why we need the new strategy.

@Kamal: many users don't want to retry but want to fail the Kafka Streams
instance in case of an error. All default parameters are chosen to
follow this pattern (similar to consumer/producer/broker defaults). The
KIP aims to allow users to reconfigure Kafka Streams to be resilient
against errors. It's a user's choice to change configs to get better
resilience.


Update:

While I was working on the PR, I realized that the parameter
"retry.backoff.ms" is already available in StreamsConfig. I updated the
KIP accordingly.

I also discovered that we have a hard-coded number of retries for state
locks -- I think it would be worthwhile to reuse both parameters for those,
too. WDYT?

Here is the current PR: https://github.com/apache/kafka/pull/4206


-Matthias
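
For illustration, a minimal sketch of what a user would set once the KIP
lands; the values below are examples, not the defaults under discussion:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class ResilientStreamsConfig {
        public static Properties build() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");          // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            props.put("retries", 5);                // retry instead of failing immediately
            props.put("retry.backoff.ms", 1000L);   // back-off between attempts
            return props;
        }
    }
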



On 11/9/17 2:29 PM, Guozhang Wang wrote:
> Damian,
> 
> You are right! I was dreaming at the wrong class :)
> 
> Guozhang
> 
> 
> On Thu, Nov 9, 2017 at 11:27 AM, Damian Guy  wrote:
> 
>> Guozhang, i'm not sure i follow... Global stores aren't per task, they are
>> per application instance and should be fully restored before the stream
>> threads start processing. They don't go through a rebalance as it is manual
>> assignment of all partitions in the topic.
>>
>> On Thu, 9 Nov 2017 at 17:43 Guozhang Wang  wrote:
>>
>>> Instead of restoring the global store during registration, could we also
>> do
>>> this after the rebalance callback as in the main loop? By doing this we
>> can
>>> effectively swallow-and-retry-in-next-loop as we did for non-global
>> stores.
>>> Since global stores are per task not per thread, we would not process the
>>> task after the global store is bootstrapped fully.
>>>
>>>
>>> Guozhang
>>>
>>> On Thu, Nov 9, 2017 at 7:08 AM, Bill Bejeck  wrote:
>>>
 Thanks for the KIP Matthias, +1 from me.


 -Bill

 On Thu, Nov 9, 2017 at 8:40 AM, Ted Yu  wrote:

> lgtm
>
> bq. pass both parameter
>
> parameter should be in plural.
> Same with 'two new configuration parameter'
>
> Cheers
>
> On Thu, Nov 9, 2017 at 4:20 AM, Damian Guy 
>>> wrote:
>
>> Thanks Matthias, LGTM
>>
>> On Thu, 9 Nov 2017 at 11:13 Matthias J. Sax >>
> wrote:
>>
>>> Hi,
>>>
>>> I want to propose a new KIP to make Streams API more resilient to
> broker
>>> disconnections.
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 224%3A+Add+configuration+parameters+%60retries%60+and+%
>> 60retry.backoff.ms%60+to+Streams+API
>>>
>>>
>>> -Matthias
>>>
>>>
>>
>

>>>
>>>
>>>
>>> --
>>> -- Guozhang
>>>
>>
> 
> 
> 





[GitHub] kafka pull request #4206: KAFKA-6122: Global Consumer should handle TimeoutE...

2017-11-10 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4206

KAFKA-6122: Global Consumer should handle TimeoutException

Implements KIP-224:
- adds the new StreamsConfig `retries`
- uses `retries` and `retry.backoff.ms` to handle TimeoutException in 
GlobalStateManager
- adds two new tests to trigger TimeoutException in the global consumer
- uses `retries` and `retry.backoff.ms` for state directory locking
- some minor code cleanup to reduce the number of parameters we need to pass 
around

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-6122-global-consumer-timeout-exception

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4206.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4206


commit 8ed6197ad96d359dace78f60b0fbbf080eca5e09
Author: Matthias J. Sax 
Date:   2017-11-10T21:02:46Z

KAFKA-6122: Global Consumer should handle TimeoutException




---


Build failed in Jenkins: kafka-0.11.0-jdk7 #334

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Remove note in docs that is no longer required

--
[...truncated 977.69 KB...]
kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > 

Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Colin McCabe
+1 for returning a named object rather than Long.

C.
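
For illustration, a sketch of how the API being voted on might end up being
used, based on the names mentioned in this thread (deleteRecords,
RecordsToDelete.beforeOffset, a result exposing the low watermark); the final
signatures may differ:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.DeleteRecordsResult;
    import org.apache.kafka.clients.admin.RecordsToDelete;
    import org.apache.kafka.common.TopicPartition;

    public class DeleteRecordsExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0);

                // delete everything before offset 42 in partition 0
                Map<TopicPartition, RecordsToDelete> recordsToDelete =
                        Collections.singletonMap(tp, RecordsToDelete.beforeOffset(42L));

                DeleteRecordsResult result = admin.deleteRecords(recordsToDelete);

                // a named result object (rather than a bare Long) leaves room for more fields later
                long lowWatermark = result.lowWatermarks().get(tp).get().lowWatermark();
                System.out.println("new low watermark: " + lowWatermark);
            }
        }
    }
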


On Fri, Nov 10, 2017, at 10:07, Guozhang Wang wrote:
> Sounds good to me for returning an object than a Long type.
>
>
> Guozhang
>
> On Fri, Nov 10, 2017 at 9:26 AM, Paolo Patierno 
> wrote:
>
> > Ismael,
> >
> >
> > Yes, it makes sense. Taking a look at the other methods in the Admin
> > Client, there is no use case returning a "simple" type: most of them are
> > Void or a complex result represented by classes.
> >
> > In order to support future extensibility I like your idea.
> >
> > Let's see what the others' opinions are; otherwise I'll start to
> > implement it in such a way.
> >
> >
> > I have updated the KIP and the PR using the "recordsToDelete"
> > parameter as well.
> >
> >
> > Thanks
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: isma...@gmail.com  on behalf of Ismael
> > Juma 
> > Sent: Friday, November 10, 2017 1:15 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to
> > the new Admin Client API
> >
> > Paolo,
> >
> > You might return the timestamp if the user did a delete by
> > timestamp for example. But let's maybe hear what others think before we
> > change the KIP.
> >
> > Ismael
> >
> > On Fri, Nov 10, 2017 at 12:48 PM, Paolo Patierno
> >  wrote:
> >
> > > Hi Ismael,
> > >
> > >
> > >   1.  yes it sounds better. Agree.
> > >   2.  We are using a wrapper class like RecordsToDelete in order to allow
> > > future operations that won't be based only on specifying an offset. At same
> > > time with this wrapper class and static methods (i.e. beforeOffset) the API
> > > is really clear to understand. About moving to use a class for wrapping the
> > > lowWatermark and not just a Long, do you think that in the future we could
> > > have some other information to bring as part of the delete operation ? or
> > > just for being clearer in terms of API (so not just with a comment) ?
> > >
> > > Thanks,
> > > Paolo.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: isma...@gmail.com  on behalf of Ismael
> > > Juma 
> > > Sent: Friday, November 10, 2017 12:33 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to
> > > the new Admin Client API
> > >
> > > Hi Paolo,
> > >
> > > The KIP looks good, I have a couple of comments:
> > >
> > > 1. partitionsAndOffsets could perhaps be `recordsToDelete`.
> > > 2. It seems a bit inconsistent that the argument is `RecordsToDelete`, but
> > > the result is just a `Long`. Should the result be `DeleteRecords` or
> > > something like that? It could then have a field logStartOffset or
> > > lowWatermark instead of having to document it via a comment only.
> > >
> > > Ismael
> > >
> > > On Tue, Oct 3, 2017 at 6:51 AM, Paolo Patierno
> > >  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I didn't see any further discussion around this KIP, so I'd like
> > > > to start
> > > > the vote for it.
> > > >
> > > > Just for reference : https://cwiki.apache.org/confl
> > > > uence/display/KAFKA/KIP-204+%3A+adding+records+deletion+
> > > > operation+to+the+new+Admin+Client+API
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Paolo Patierno
> > > > Senior Software Engineer (IoT) @ Red Hat
> > > > Microsoft MVP on Azure & IoT
> > > > Microsoft Azure Advisor
> > > >
> > > > Twitter : @ppatierno
> > > > Linkedin : paolopatierno
> > > > Blog : DevExperience
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang



Re: [DISCUSS] KIP-220: Add AdminClient into Kafka Streams' ClientSupplier

2017-11-10 Thread Bill Bejeck
I'm leaning towards option 3, although option 2 is a reasonable tradeoff
between the two.

Overall I'm leaning towards option 3 because:

   1. As Guozhang has said we are failing "fast enough" with an Exception
   from the first rebalance.
   2. Less complexity/maintenance cost by not having a transient network
   client
   3. Ideally, this check should be on the AdminClient itself but adding
   such a check creates "scope creep" for this KIP.

IMHO the combination of these reasons makes option 3 my preferred approach.

Thanks,
Bill

On Tue, Nov 7, 2017 at 12:48 PM, Guozhang Wang  wrote:

> Here is what I'm thinking about this trade-off: we want to fail fast when
> brokers do not yet support the requested API version, with the cost that we
> need to do this one-time thing with an expensed NetworkClient plus a bit
> longer starting up latency. Here are a few different options:
>
> 1) we ask all the brokers: this is one extreme of the trade-off, we still
> need to handle UnsupportedApiVersionsException since brokers can
> downgrade.
> 2) we ask a random broker: this is what we did, and a bit weaker than 1)
> but saves on latency.
> 3) we do not ask anyone: this is the other extreme of the trade-off.
>
> To me I think 1) is an overkill, so I did not include that in my proposals.
> Between 2) and 3) I'm slightly preferring 3) since even under this case we
> are sort of failing fast because we will throw the exception at the first
> rebalance, but I can still see the value of option 2). Ideally we can have
> some APIs from AdminClient to check API versions but this does not exist
> today and I do not want to drag too long with growing scope on this KIP, so
> the current proposal's implementation uses expendable Network which is a
> bit fuzzy.
>
>
> Guozhang
>
>
> On Tue, Nov 7, 2017 at 2:07 AM, Matthias J. Sax 
> wrote:
>
> > I would prefer to keep the current check. We could even improve it, and
> > do the check to more than one brokers (even all provided
> > bootstrap.servers) or some random servers after we got all meta data
> > about the cluster.
> >
> >
> > -Matthias
> >
> > On 11/7/17 1:01 AM, Guozhang Wang wrote:
> > > Hello folks,
> > >
> > > One update I'd like to propose regarding "compatibility checking":
> > > currently we create a single StreamsKafkaClient at the beginning to
> issue
> > > an ApiVersionsRequest to a random broker and then check on its
> versions,
> > > and fail if it does not satisfy the version (0.10.1+ without EOS,
> 0.11.0
> > > with EOS); after this check we throw this client away. My original plan
> > is
> > > to replace this logic with the NetworkClient's own apiVersions, but
> after
> > > some digging while working on the PR I found that exposing this
> > apiVersions
> > > variable from NetworkClient through AdminClient is not very straight
> > > forward, plus it would need an API change on the AdminClient itself as
> > well
> > > to expose the versions information.
> > >
> > > On the other hand, this one-time compatibility checking's benefit may
> be
> > > not significant: 1) it asks a random broker upon starting up, and hence
> > > does not guarantee all broker's support the corresponding API versions
> at
> > > that time; 2) brokers can still be upgraded / downgraded after the
> > streams
> > > app is up and running, and hence we still need to handle
> > > UnsupportedVersionExceptions thrown from the producer / consumer /
> admin
> > > client during the runtime anyways.
> > >
> > > So I'd like to propose two options in this KIP:
> > >
> > > 1) we remove this one-time compatibility check on Streams starting up
> in
> > > this KIP, and solely rely on handling producer / consumer / admin
> > client's
> > > API UnsupportedVersionException throwable exceptions. Please share your
> > > thoughts about this.
> > >
> > > 2) we create a one-time NetworkClient upon starting up, send the
> > > ApiVersionsRequest and get the response and do the checking; after that
> > > throw this client away.
> > >
> > > Please let me know what do you think. Thanks!
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Mon, Nov 6, 2017 at 7:56 AM, Matthias J. Sax  >
> > > wrote:
> > >
> > >> Thanks for the update and clarification.
> > >>
> > >> Sounds good to me :)
> > >>
> > >>
> > >> -Matthias
> > >>
> > >>
> > >>
> > >> On 11/6/17 12:16 AM, Guozhang Wang wrote:
> > >>> Thanks Matthias,
> > >>>
> > >>> 1) Updated the KIP page to include KAFKA-6126.
> > >>> 2) For passing configs, I agree, will make a pass over the existing
> > >> configs
> > >>> passed to StreamsKafkaClient and update the wiki page accordingly, to
> > >>> capture all changes that would happen for the replacement in this
> > single
> > >>> KIP.
> > >>> 3) For internal topic purging, I'm not sure if we need to include
> this
> > >> as a
> > >>> public change since internal topics are meant for abstracted away
> from
> > >> the
> > >>> Streams users; they should not 
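
To make option 1) above concrete, here is a minimal sketch of what relying on runtime
errors (rather than an up-front ApiVersionsRequest check) can look like for a client of
the new AdminClient. The topic name and settings are made up for illustration; only the
UnsupportedVersionException handling is the point.

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.UnsupportedVersionException;

public class BrokerCompatibilitySketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            try {
                // Any ordinary request can fail if the broker it reaches does not
                // support the required API version, e.g. creating an internal topic.
                admin.createTopics(Collections.singleton(
                        new NewTopic("my-app-repartition", 4, (short) 3))).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof UnsupportedVersionException) {
                    // "Fail fast enough": surface a clear error at the first request
                    // instead of performing a one-time ApiVersionsRequest on startup.
                    throw new IllegalStateException(
                            "Brokers do not support the API versions required by this application",
                            e.getCause());
                }
                throw e;
            }
        }
    }
}
{code}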

Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Jeff Widman
Re: migrating offsets for old Scala consumers.

I work in the python world, so haven't directly used the old high level
consumer, but from what I understand the underlying problem remains the
migration of zookeeper offsets to the __consumer_offsets topic.

We've used a slightly modified version of Grant Henke's script for
migrating offsets here: https://github.com/apache/kafka/pull/2615
It doesn't support rolling upgrades, but other than that it's great... I've
used it for multiple migrations, and very thankful for the time Grant put
into it.

I don't know that it's worth pulling this into core, it might be, it might
not be. But it probably is worth documenting the procedure at least
somewhere.

Personally, I suspect that those who absolutely need a rolling migration
and cannot handle a short period of downtime while doing a migration
probably have in-house experts on Kafka who are familiar with the issues
and willing to figure out a solution. The rest of the world can generally
handle a short maintenance window.




On Fri, Nov 10, 2017 at 10:46 AM, Ismael Juma  wrote:

> Hi Gwen,
>
> A KIP has been proposed, but it is stalled:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-125%3A+
> ZookeeperConsumerConnector+to+KafkaConsumer+Migration+and+Rollback
>
> Unless the interested parties pick that up, we would drop support without a
> rolling upgrade path. Users would be able to use the old consumers from
> 1.1.x for a long time. The old Scala clients don't support the message
> format introduced in 0.11.0, so the feature set is pretty much frozen and
> there's little benefit in upgrading. But there is a cost in keeping them in
> the codebase.
>
> Ismael
>
> On Fri, Nov 10, 2017 at 6:02 PM, Gwen Shapira  wrote:
>
> > Last time we tried deprecating the Scala consumer, there were concerns
> > about a lack of upgrade path. There is no rolling upgrade, and migrating
> > offsets is not trivial (and not documented).
> >
> > Did anything change in that regard? Or are we planning on dropping
> support
> > without an upgrade path?
> >
> >
> > On Fri, Nov 10, 2017 at 5:37 PM Guozhang Wang 
> wrote:
> >
> > > Thanks Ismael, the proposal looks good to me.
> > >
> > > A side note regarding: https://issues.apache.org/
> jira/browse/KAFKA-5637,
> > > could we resolve this ticket sooner than later to make clear about the
> > code
> > > deprecation and support duration when moving from 1.0.x to 2.0.x?
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Fri, Nov 10, 2017 at 3:44 AM, Ismael Juma 
> wrote:
> > >
> > > > Features for 2.0.0 will be known after 1.1.0 is released in February
> > > 2018.
> > > > We are still doing the usual time-based release process[1].
> > > >
> > > > I am raising this well ahead of time because of the potential impact
> of
> > > > removing the old Scala clients (particularly the old high-level
> > consumer)
> > > > and dropping support for Java 7. Hopefully users can then plan
> > > accordingly.
> > > > We would do these changes in trunk soon after 1.1.0 is released
> (around
> > > > February).
> > > >
> > > > I think it makes sense to complete some of the work that was not
> ready
> > in
> > > > time for 1.0.0 (Controller improvements and JBOD are two that come to
> > > mind)
> > > > in 1.1.0 (January 2018) and combined with the desire to give advance
> > > > notice, June 2018 was the logical choice.
> > > >
> > > > There is no plan to support a particular release for longer. 1.x
> versus
> > > 2.x
> > > > is no different than 0.10.x versus 0.11.x from the perspective of
> > > > supporting older releases.
> > > >
> > > > [1] https://cwiki.apache.org/confluence/display/KAFKA/Time+
> > > > Based+Release+Plan
> > > >
> > > > On Fri, Nov 10, 2017 at 11:21 AM, Jaikiran Pai <
> > jai.forums2...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi Ismael,
> > > > >
> > > > > Are there any new features other than the language specific changes
> > > that
> > > > > are being planned for 2.0.0? Also, when 2.x gets released, will the
> > 1.x
> > > > > series see continued bug fixes and releases in the community or is
> > the
> > > > plan
> > > > > to have one single main version that gets continuous updates and
> > > > releases?
> > > > >
> > > > > By the way, why June 2018? :)
> > > > >
> > > > > -Jaikiran
> > > > >
> > > > >
> > > > >
> > > > > On 09/11/17 3:14 PM, Ismael Juma wrote:
> > > > >
> > > > >> Hi all,
> > > > >>
> > > > >> I'm starting this discussion early because of the potential
> impact.
> > > > >>
> > > > >> Kafka 1.0.0 was just released and the focus was on achieving the
> > > > original
> > > > >> project vision in terms of features provided while maintaining
> > > > >> compatibility for the most part (i.e. we did not remove deprecated
> > > > >> components like the Scala clients).
> > > > >>
> > > > >> This was the right decision, in my opinion, but it's time to start
> > > > >> thinking
> > > > >> about 

[jira] [Created] (KAFKA-6202) Classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall be used by kafka tools, but in the last releases lost visibility

2017-11-10 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6202:
-

 Summary: Classes OffsetsMessageFormatter and 
GroupMetadataMessageFormatter shall be used by kafka tools, but in the last 
releases lost visibility
 Key: KAFKA-6202
 URL: https://issues.apache.org/jira/browse/KAFKA-6202
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0, 0.11.0.0, 0.11.0.2, 0.11.0.3
Reporter: Seweryn Habdank-Wojewodzki


Classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall be 
visible.

They shall/might also be used by external tools like the console-consumer,
as is documented in the code itself.

But in the last releases those two classes lost their visibility.

Currently they have no access modifier, which for external callers is effectively
private.
The proposal is to follow the comments and the use cases driven by the console-consumer
and change their visibility from none/default to public.

Issue was found during discussion in SO: 
https://stackoverflow.com/questions/47218277/what-may-cause-huge-load-in-kafka-consumer-offsets-topic




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6201) Provide more context in Kafka Connect REST error messages

2017-11-10 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-6201:


 Summary: Provide more context in Kafka Connect REST error messages
 Key: KAFKA-6201
 URL: https://issues.apache.org/jira/browse/KAFKA-6201
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.0.0
Reporter: Randall Hauch


It is possible that connectors with some characters in their names cause 
problems, and these are being addressed in KAFKA-4827 and KAFKA-4930.

{code}
$ curl connect1:8083/connectors
[]
$ cat /usr/local/etc/connector-source-jdbc-properties.json
{
  "name" : "JDBC Source Connector",
  "config" : {
"connector.class" : "io.confluent.connect.jdbc.JdbcSourceConnector",
"tasks.max" : "1",
"connection.url" : "jdbc:sqlite:/usr/local/lib/test.db",
"mode" : "incrementing",
"incrementing.column.name" : "id",
"topic.prefix" : "jdbc-"
  }
}
$ curl -X POST -H "Content-Type: application/json" \
  --data @/usr/local/etc/connector-source-jdbc-properties.json \
  http://connect1:8083/connectors

Error 500

HTTP ERROR: 500
Problem accessing /connectors. Reason:
Request failed.
Powered by Jetty://

$ curl connect1:8083/connectors
["JDBC Source Connector"]
{code}

This JIRA is about the broader point of error reporting in general. The only
information returned from the REST call above was {{Error 500}} with a poor message,
whereas what users really need is an error message that guides them toward
understanding where the problem actually is. A much more usable message needs to
be provided in the response of the REST call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Ismael Juma
Hi Gwen,

A KIP has been proposed, but it is stalled:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-125%3A+
ZookeeperConsumerConnector+to+KafkaConsumer+Migration+and+Rollback

Unless the interested parties pick that up, we would drop support without a
rolling upgrade path. Users would be able to use the old consumers from
1.1.x for a long time. The old Scala clients don't support the message
format introduced in 0.11.0, so the feature set is pretty much frozen and
there's little benefit in upgrading. But there is a cost in keeping them in
the codebase.

Ismael

On Fri, Nov 10, 2017 at 6:02 PM, Gwen Shapira  wrote:

> Last time we tried deprecating the Scala consumer, there were concerns
> about a lack of upgrade path. There is no rolling upgrade, and migrating
> offsets is not trivial (and not documented).
>
> Did anything change in that regard? Or are we planning on dropping support
> without an upgrade path?
>
>
> On Fri, Nov 10, 2017 at 5:37 PM Guozhang Wang  wrote:
>
> > Thanks Ismael, the proposal looks good to me.
> >
> > A side note regarding: https://issues.apache.org/jira/browse/KAFKA-5637,
> > could we resolve this ticket sooner than later to make clear about the
> code
> > deprecation and support duration when moving from 1.0.x to 2.0.x?
> >
> >
> > Guozhang
> >
> >
> > On Fri, Nov 10, 2017 at 3:44 AM, Ismael Juma  wrote:
> >
> > > Features for 2.0.0 will be known after 1.1.0 is released in February
> > 2018.
> > > We are still doing the usual time-based release process[1].
> > >
> > > I am raising this well ahead of time because of the potential impact of
> > > removing the old Scala clients (particularly the old high-level
> consumer)
> > > and dropping support for Java 7. Hopefully users can then plan
> > accordingly.
> > > We would do these changes in trunk soon after 1.1.0 is released (around
> > > February).
> > >
> > > I think it makes sense to complete some of the work that was not ready
> in
> > > time for 1.0.0 (Controller improvements and JBOD are two that come to
> > mind)
> > > in 1.1.0 (January 2018) and combined with the desire to give advance
> > > notice, June 2018 was the logical choice.
> > >
> > > There is no plan to support a particular release for longer. 1.x versus
> > 2.x
> > > is no different than 0.10.x versus 0.11.x from the perspective of
> > > supporting older releases.
> > >
> > > [1] https://cwiki.apache.org/confluence/display/KAFKA/Time+
> > > Based+Release+Plan
> > >
> > > On Fri, Nov 10, 2017 at 11:21 AM, Jaikiran Pai <
> jai.forums2...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Ismael,
> > > >
> > > > Are there any new features other than the language specific changes
> > that
> > > > are being planned for 2.0.0? Also, when 2.x gets released, will the
> 1.x
> > > > series see continued bug fixes and releases in the community or is
> the
> > > plan
> > > > to have one single main version that gets continuous updates and
> > > releases?
> > > >
> > > > By the way, why June 2018? :)
> > > >
> > > > -Jaikiran
> > > >
> > > >
> > > >
> > > > On 09/11/17 3:14 PM, Ismael Juma wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> I'm starting this discussion early because of the potential impact.
> > > >>
> > > >> Kafka 1.0.0 was just released and the focus was on achieving the
> > > original
> > > >> project vision in terms of features provided while maintaining
> > > >> compatibility for the most part (i.e. we did not remove deprecated
> > > >> components like the Scala clients).
> > > >>
> > > >> This was the right decision, in my opinion, but it's time to start
> > > >> thinking
> > > >> about 2.0.0, which is an opportunity for us to remove major
> deprecated
> > > >> components and to benefit from Java 8 language enhancements (so that
> > we
> > > >> can
> > > >> move faster). So, I propose the following for Kafka 2.0.0:
> > > >>
> > > >> 1. It should be released in June 2018
> > > >> 2. The Scala clients (Consumer, SimpleConsumer, Producer,
> > SyncProducer)
> > > >> will be removed
> > > >> 3. Java 8 or higher will be required, i.e. support for Java 7 will
> be
> > > >> dropped.
> > > >>
> > > >> Thoughts?
> > > >>
> > > >> Ismael
> > > >>
> > > >>
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: Add to contributors list

2017-11-10 Thread Jun Rao
Hi, Arjun,

Thanks for your interest. Added you to the contributor list.

Jun

On Fri, Nov 10, 2017 at 10:33 AM, Arjun Satish 
wrote:

> Hi,
>
> Could you please add me to the contributors list for Apache Kafka?
>
> My Apache username is wicknicks, and the profile page is located here (
> https://issues.apache.org/jira/secure/ViewProfile.jspa?name=wicknicks).
>
> Thanks very much,
> Arjun
>


[jira] [Resolved] (KAFKA-6175) AbstractIndex should cache index file to avoid unnecessary disk access during resize()

2017-11-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-6175.

Resolution: Fixed

> AbstractIndex should cache index file to avoid unnecessary disk access during 
> resize()
> --
>
> Key: KAFKA-6175
> URL: https://issues.apache.org/jira/browse/KAFKA-6175
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.1
>
>
> Currently when we shutdown a broker, we will call AbstractIndex.resize() for 
> all segments on the broker, regardless of whether the log segment is active 
> or not. AbstractIndex.resize() incurs raf.setLength(), which is expensive 
> because it accesses disks. If we do a threaddump during either 
> LogManger.shutdown() or LogManager.loadLogs(), most threads are in RUNNABLE 
> state at java.io.RandomAccessFile.setLength().
> This patch intends to speed up broker startup and shutdown time by skipping 
> AbstractIndex.resize() for inactive log segments.
> Here is the time of LogManager.shutdown() in various settings. In all these 
> tests, broker has roughly 6k partitions and 19k segments.
> - If broker does not have this patch and KAFKA-6172, LogManager.shutdown() 
> takes 69 seconds
> - If broker has KAFKA-6172 but not this patch, LogManager.shutdown() takes 21 
> seconds.
> - If broker has KAFKA-6172 and this patch, LogManager.shutdown() takes 1.6 
> seconds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Add to contributors list

2017-11-10 Thread Arjun Satish
Hi,

Could you please add me to the contributors list for Apache Kafka?

My Apache username is wicknicks, and the profile page is located here (
https://issues.apache.org/jira/secure/ViewProfile.jspa?name=wicknicks).

Thanks very much,
Arjun


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Guozhang Wang
Sounds good to me for returning an object than a Long type.


Guozhang

On Fri, Nov 10, 2017 at 9:26 AM, Paolo Patierno  wrote:

> Ismael,
>
>
> yes it makes sense. Taking a look to the other methods in the Admin
> Client, there is no use case returning a "simple" type : most of them are
> Void or complex result represented by classes.
>
> In order to support future extensibility I like your idea.
>
> Let's see what's the others opinions otherwise I'll start to implement in
> such way.
>
>
> I have updated the KIP and the PR using "recordsToDelete" parameter as
> well.
>
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: isma...@gmail.com  on behalf of Ismael Juma <
> ism...@juma.me.uk>
> Sent: Friday, November 10, 2017 1:15 PM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new
> Admin Client API
>
> Paolo,
>
> You might return the timestamp if the user did a delete by timestamp for
> example. But let's maybe hear what others think before we change the KIP.
>
> Ismael
>
> On Fri, Nov 10, 2017 at 12:48 PM, Paolo Patierno 
> wrote:
>
> > Hi Ismael,
> >
> >
> >   1.  yes it sounds better. Agree.
> >   2.  We are using a wrapper class like RecordsToDelete in order to allow
> > future operations that won't be based only on specifying an offset. At
> same
> > time with this wrapper class and static methods (i.e. beforeOffset) the
> API
> > is really clear to understand. About moving to use a class for wrapping
> the
> > lowWatermark and not just a Long, do you think that in the future we
> could
> > have some other information to bring as part of the delete operation ? or
> > just for being clearer in terms of API (so not just with a comment) ?
> >
> > Thanks,
> > Paolo.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: isma...@gmail.com  on behalf of Ismael Juma <
> > ism...@juma.me.uk>
> > Sent: Friday, November 10, 2017 12:33 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the
> new
> > Admin Client API
> >
> > Hi Paolo,
> >
> > The KIP looks good, I have a couple of comments:
> >
> > 1. partitionsAndOffsets could perhaps be `recordsToDelete`.
> > 2. It seems a bit inconsistent that the argument is `RecordsToDelete`,
> but
> > the result is just a `Long`. Should the result be `DeleteRecords` or
> > something like that? It could then have a field logStartOffset or
> > lowWatermark instead of having to document it via a comment only.
> >
> > Ismael
> >
> > On Tue, Oct 3, 2017 at 6:51 AM, Paolo Patierno 
> wrote:
> >
> > > Hi all,
> > >
> > > I didn't see any further discussion around this KIP, so I'd like to
> start
> > > the vote for it.
> > >
> > > Just for reference : https://cwiki.apache.org/confl
> > > uence/display/KAFKA/KIP-204+%3A+adding+records+deletion+
> > > operation+to+the+new+Admin+Client+API
> > >
> > >
> > > Thanks,
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> >
>



-- 
-- Guozhang
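
For reference, a rough sketch of what deleting records through the proposed Admin Client
API could look like from the caller's side. It follows the direction discussed in this
thread (RecordsToDelete.beforeOffset(), a named result object exposing the low watermark);
the exact names DeleteRecordsResult, lowWatermarks() and DeletedRecords, and their package,
are assumptions, not the final API.

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
// The two types below are the ones proposed in KIP-204; the package is assumed here.
import org.apache.kafka.clients.admin.DeleteRecordsResult;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);

            // Delete every record before offset 100 on my-topic-0.
            // RecordsToDelete.beforeOffset(...) is the factory method from the KIP;
            // other factories could be added later (e.g. delete by timestamp).
            Map<TopicPartition, RecordsToDelete> recordsToDelete =
                    Collections.singletonMap(tp, RecordsToDelete.beforeOffset(100L));

            // The result is a named object rather than a bare Long, so it can grow
            // extra fields later; here it is assumed to expose one future per partition.
            DeleteRecordsResult result = admin.deleteRecords(recordsToDelete);
            long lowWatermark = result.lowWatermarks().get(tp).get().lowWatermark();
            System.out.println("New low watermark for " + tp + ": " + lowWatermark);
        }
    }
}
{code}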


[GitHub] kafka pull request #4205: KAFKA-4827: Correctly encode special chars while c...

2017-11-10 Thread wicknicks
GitHub user wicknicks opened a pull request:

https://github.com/apache/kafka/pull/4205

KAFKA-4827: Correctly encode special chars while creating URI objects

Signed-off-by: Arjun Satish 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wicknicks/kafka KAFKA-4827

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4205.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4205


commit 64d40edafb61033a80609f75a09fed987dd9fdfa
Author: Arjun Satish 
Date:   2017-11-10T17:59:42Z

Correctly encode special chars while creating URI objects

Signed-off-by: Arjun Satish 




---
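
The general technique behind a fix like this (not the actual patch, which lives in the PR
above) is to percent-encode the connector name before it is used as a URI path segment, so
that names containing spaces or other special characters no longer break URI construction:

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.net.URLEncoder;

public class ConnectorUriSketch {

    // Hypothetical helper, shown only to illustrate the approach: encode the
    // connector name as a path segment before building the URI.
    static URI connectorUri(String baseUrl, String connectorName) throws UnsupportedEncodingException {
        String encoded = URLEncoder.encode(connectorName, "UTF-8").replace("+", "%20");
        return URI.create(baseUrl + "/connectors/" + encoded);
    }

    public static void main(String[] args) throws Exception {
        // A name with spaces, like the one in the KAFKA-6201 example, now yields a valid URI:
        // http://connect1:8083/connectors/JDBC%20Source%20Connector
        System.out.println(connectorUri("http://connect1:8083", "JDBC Source Connector"));
    }
}
{code}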


Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Gwen Shapira
Last time we tried deprecating the Scala consumer, there were concerns
about a lack of upgrade path. There is no rolling upgrade, and migrating
offsets is not trivial (and not documented).

Did anything change in that regard? Or are we planning on dropping support
without an upgrade path?


On Fri, Nov 10, 2017 at 5:37 PM Guozhang Wang  wrote:

> Thanks Ismael, the proposal looks good to me.
>
> A side note regarding: https://issues.apache.org/jira/browse/KAFKA-5637,
> could we resolve this ticket sooner than later to make clear about the code
> deprecation and support duration when moving from 1.0.x to 2.0.x?
>
>
> Guozhang
>
>
> On Fri, Nov 10, 2017 at 3:44 AM, Ismael Juma  wrote:
>
> > Features for 2.0.0 will be known after 1.1.0 is released in February
> 2018.
> > We are still doing the usual time-based release process[1].
> >
> > I am raising this well ahead of time because of the potential impact of
> > removing the old Scala clients (particularly the old high-level consumer)
> > and dropping support for Java 7. Hopefully users can then plan
> accordingly.
> > We would do these changes in trunk soon after 1.1.0 is released (around
> > February).
> >
> > I think it makes sense to complete some of the work that was not ready in
> > time for 1.0.0 (Controller improvements and JBOD are two that come to
> mind)
> > in 1.1.0 (January 2018) and combined with the desire to give advance
> > notice, June 2018 was the logical choice.
> >
> > There is no plan to support a particular release for longer. 1.x versus
> 2.x
> > is no different than 0.10.x versus 0.11.x from the perspective of
> > supporting older releases.
> >
> > [1] https://cwiki.apache.org/confluence/display/KAFKA/Time+
> > Based+Release+Plan
> >
> > On Fri, Nov 10, 2017 at 11:21 AM, Jaikiran Pai  >
> > wrote:
> >
> > > Hi Ismael,
> > >
> > > Are there any new features other than the language specific changes
> that
> > > are being planned for 2.0.0? Also, when 2.x gets released, will the 1.x
> > > series see continued bug fixes and releases in the community or is the
> > plan
> > > to have one single main version that gets continuous updates and
> > releases?
> > >
> > > By the way, why June 2018? :)
> > >
> > > -Jaikiran
> > >
> > >
> > >
> > > On 09/11/17 3:14 PM, Ismael Juma wrote:
> > >
> > >> Hi all,
> > >>
> > >> I'm starting this discussion early because of the potential impact.
> > >>
> > >> Kafka 1.0.0 was just released and the focus was on achieving the
> > original
> > >> project vision in terms of features provided while maintaining
> > >> compatibility for the most part (i.e. we did not remove deprecated
> > >> components like the Scala clients).
> > >>
> > >> This was the right decision, in my opinion, but it's time to start
> > >> thinking
> > >> about 2.0.0, which is an opportunity for us to remove major deprecated
> > >> components and to benefit from Java 8 language enhancements (so that
> we
> > >> can
> > >> move faster). So, I propose the following for Kafka 2.0.0:
> > >>
> > >> 1. It should be released in June 2018
> > >> 2. The Scala clients (Consumer, SimpleConsumer, Producer,
> SyncProducer)
> > >> will be removed
> > >> 3. Java 8 or higher will be required, i.e. support for Java 7 will be
> > >> dropped.
> > >>
> > >> Thoughts?
> > >>
> > >> Ismael
> > >>
> > >>
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Guozhang Wang
Thanks Ismael, the proposal looks good to me.

A side note regarding: https://issues.apache.org/jira/browse/KAFKA-5637,
could we resolve this ticket sooner than later to make clear about the code
deprecation and support duration when moving from 1.0.x to 2.0.x?


Guozhang


On Fri, Nov 10, 2017 at 3:44 AM, Ismael Juma  wrote:

> Features for 2.0.0 will be known after 1.1.0 is released in February 2018.
> We are still doing the usual time-based release process[1].
>
> I am raising this well ahead of time because of the potential impact of
> removing the old Scala clients (particularly the old high-level consumer)
> and dropping support for Java 7. Hopefully users can then plan accordingly.
> We would do these changes in trunk soon after 1.1.0 is released (around
> February).
>
> I think it makes sense to complete some of the work that was not ready in
> time for 1.0.0 (Controller improvements and JBOD are two that come to mind)
> in 1.1.0 (January 2018) and combined with the desire to give advance
> notice, June 2018 was the logical choice.
>
> There is no plan to support a particular release for longer. 1.x versus 2.x
> is no different than 0.10.x versus 0.11.x from the perspective of
> supporting older releases.
>
> [1] https://cwiki.apache.org/confluence/display/KAFKA/Time+
> Based+Release+Plan
>
> On Fri, Nov 10, 2017 at 11:21 AM, Jaikiran Pai 
> wrote:
>
> > Hi Ismael,
> >
> > Are there any new features other than the language specific changes that
> > are being planned for 2.0.0? Also, when 2.x gets released, will the 1.x
> > series see continued bug fixes and releases in the community or is the
> plan
> > to have one single main version that gets continuous updates and
> releases?
> >
> > By the way, why June 2018? :)
> >
> > -Jaikiran
> >
> >
> >
> > On 09/11/17 3:14 PM, Ismael Juma wrote:
> >
> >> Hi all,
> >>
> >> I'm starting this discussion early because of the potential impact.
> >>
> >> Kafka 1.0.0 was just released and the focus was on achieving the
> original
> >> project vision in terms of features provided while maintaining
> >> compatibility for the most part (i.e. we did not remove deprecated
> >> components like the Scala clients).
> >>
> >> This was the right decision, in my opinion, but it's time to start
> >> thinking
> >> about 2.0.0, which is an opportunity for us to remove major deprecated
> >> components and to benefit from Java 8 language enhancements (so that we
> >> can
> >> move faster). So, I propose the following for Kafka 2.0.0:
> >>
> >> 1. It should be released in June 2018
> >> 2. The Scala clients (Consumer, SimpleConsumer, Producer, SyncProducer)
> >> will be removed
> >> 3. Java 8 or higher will be required, i.e. support for Java 7 will be
> >> dropped.
> >>
> >> Thoughts?
> >>
> >> Ismael
> >>
> >>
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #4204: KAFKA-5238: BrokerTopicMetrics can be recreated af...

2017-11-10 Thread edoardocomar
GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/4204

KAFKA-5238: BrokerTopicMetrics can be recreated after topic is deleted

Avoiding a DelayedFetch recreate the metrics when a topic has been
deleted

developed with @mimaison

added unit test borrowed from @ijuma JIRA

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-5238

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4204.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4204


commit 4e0cee4c81d8bc85ca35d59b927d797330234aab
Author: Edoardo Comar 
Date:   2017-11-10T17:09:18Z

KAFKA-5238: BrokerTopicMetrics can be recreated after topic is deleted

Avoiding a DelayedFetch recreate the metrics when a topic has been
deleted

developed with @mimaison




---


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Paolo Patierno
Ismael,


yes it makes sense. Taking a look to the other methods in the Admin Client, 
there is no use case returning a "simple" type : most of them are Void or 
complex result represented by classes.

In order to support future extensibility I like your idea.

Let's see what's the others opinions otherwise I'll start to implement in such 
way.


I have updated the KIP and the PR using "recordsToDelete" parameter as well.


Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: isma...@gmail.com  on behalf of Ismael Juma 

Sent: Friday, November 10, 2017 1:15 PM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new 
Admin Client API

Paolo,

You might return the timestamp if the user did a delete by timestamp for
example. But let's maybe hear what others think before we change the KIP.

Ismael

On Fri, Nov 10, 2017 at 12:48 PM, Paolo Patierno  wrote:

> Hi Ismael,
>
>
>   1.  yes it sounds better. Agree.
>   2.  We are using a wrapper class like RecordsToDelete in order to allow
> future operations that won't be based only on specifying an offset. At same
> time with this wrapper class and static methods (i.e. beforeOffset) the API
> is really clear to understand. About moving to use a class for wrapping the
> lowWatermark and not just a Long, do you think that in the future we could
> have some other information to bring as part of the delete operation ? or
> just for being clearer in terms of API (so not just with a comment) ?
>
> Thanks,
> Paolo.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: isma...@gmail.com  on behalf of Ismael Juma <
> ism...@juma.me.uk>
> Sent: Friday, November 10, 2017 12:33 PM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new
> Admin Client API
>
> Hi Paolo,
>
> The KIP looks good, I have a couple of comments:
>
> 1. partitionsAndOffsets could perhaps be `recordsToDelete`.
> 2. It seems a bit inconsistent that the argument is `RecordsToDelete`, but
> the result is just a `Long`. Should the result be `DeleteRecords` or
> something like that? It could then have a field logStartOffset or
> lowWatermark instead of having to document it via a comment only.
>
> Ismael
>
> On Tue, Oct 3, 2017 at 6:51 AM, Paolo Patierno  wrote:
>
> > Hi all,
> >
> > I didn't see any further discussion around this KIP, so I'd like to start
> > the vote for it.
> >
> > Just for reference : https://cwiki.apache.org/confl
> > uence/display/KAFKA/KIP-204+%3A+adding+records+deletion+
> > operation+to+the+new+Admin+Client+API
> >
> >
> > Thanks,
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #2208

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Exclude PULL_REQUEST_TEMPLATE.md from rat checks

[ismael] MINOR: Exclude Committer Checklist section from commit message

--
[...truncated 1.80 MB...]
org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable 
STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable 
PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 

[GitHub] kafka pull request #4203: MINOR: Update version in docs for 0.11.0.2 release

2017-11-10 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4203

MINOR: Update version in docs for 0.11.0.2 release



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka version-update-01102

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4203.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4203


commit b6e467a2455d38c930bb167e227f626ab4506a2e
Author: Rajini Sivaram 
Date:   2017-11-10T16:20:33Z

MINOR: Update version in docs for 0.11.0.2 release




---


Jenkins build is back to normal : kafka-trunk-jdk7 #2968

2017-11-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk9 #191

2017-11-10 Thread Apache Jenkins Server
See 




[GitHub] kafka-site pull request #108: Add Rajini's keys

2017-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/108


---


[GitHub] kafka-site pull request #108: Add Rajini's keys

2017-11-10 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka-site/pull/108

Add Rajini's keys



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka-site add-key

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #108


commit 240ddda6e00b8c7a4c71a214b010330199c14651
Author: Rajini Sivaram 
Date:   2017-11-10T14:52:49Z

Add Rajini's keys




---


[jira] [Created] (KAFKA-6200) 00000000000000000015.timeindex: The process cannot access the file because it is being used by another process.

2017-11-10 Thread yassine bsf (JIRA)
yassine bsf created KAFKA-6200:
--

 Summary: 0015.timeindex:  The process cannot 
access the file because it is being used by another process.
 Key: KAFKA-6200
 URL: https://issues.apache.org/jira/browse/KAFKA-6200
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: yassine bsf
Priority: Critical


Hello,

I have a problem installing Kafka on Windows.
I installed a Kafka cluster of 3 instances on 3 different servers (each server
contains one Kafka and one ZooKeeper).
All works well, but when a Kafka instance stops (or fails) and I try to
restart it, I get this error message:

{code:java}
[2017-11-10 15:17:53,999] INFO [ThrottledRequestReaper-Produce]: Starting 
(kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-11-10 15:17:53,999] INFO [ThrottledRequestReaper-Request]: Starting 
(kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-11-10 15:17:54,109] INFO Loading logs. (kafka.log.LogManager)
[2017-11-10 15:17:54,171] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\0015.index)
 has non-zero size but the last offset is 15 which is no larger than the base 
offset 15.}. deleting 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\0015.timeindex,
 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\0015.index,
 and 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\0015.txnindex
 and rebuilding index... (kafka.log.Log)
[2017-11-10 15:17:54,171] ERROR There was an error in one of the threads during 
logs loading: java.nio.file.FileSystemException: 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\0015.timeindex:
 The process cannot access the file because it is being used by another process.
 (kafka.log.LogManager)
[2017-11-10 15:17:54,171] FATAL [Kafka Server 1], Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.FileSystemException: 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.ArchiveQueue-0\0015.timeindex:
 The process cannot access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:318)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:279)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:279)
at kafka.log.Log.loadSegments(Log.scala:383)
at kafka.log.Log.(Log.scala:186)
at kafka.log.Log$.apply(Log.scala:1609)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2017-11-10 15:17:54,187] INFO [Kafka Server 1], shutting down 
(kafka.server.KafkaServer)
[2017-11-10 15:17:54,187] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.JoinInfoQueue-0\.index)
 has non-zero size but the last offset is 0 which is no larger than the base 
offset 0.}. deleting 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.JoinInfoQueue-0\.timeindex,
 
C:\Tools\Kafka\kafka-logs\hubone.dataCollect.orbiwise.JoinInfoQueue-0\.index,
 and 

[jira] [Created] (KAFKA-6199) Single broker with fast growing heap usage

2017-11-10 Thread Robin Tweedie (JIRA)
Robin Tweedie created KAFKA-6199:


 Summary: Single broker with fast growing heap usage
 Key: KAFKA-6199
 URL: https://issues.apache.org/jira/browse/KAFKA-6199
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.1
 Environment: Amazon Linux
Reporter: Robin Tweedie
 Attachments: Screen Shot 2017-11-10 at 1.55.33 PM.png, Screen Shot 
2017-11-10 at 11.59.06 AM.png

We have a single broker in our cluster of 25 with fast growing heap usage which 
necessitates us restarting it every 12 hours. If we don't restart the broker, 
it becomes very slow from long GC pauses and eventually has {{OutOfMemory}} 
errors.

Here's a graph of heap usage percentage. A "normal" broker in the same cluster 
stays below 50% (averaged) over the same time period.

!Screen Shot 2017-11-10 at 11.59.06 AM.png|thumbnail!

We have taken heap dumps when the broker's heap usage is getting dangerously 
high, and there are a lot of retained {{NetworkSend}} objects referencing byte 
buffers.

We also noticed that the single affected broker logs a lot more of this kind of 
warning than any other broker:
{noformat}
WARN Attempting to send response via channel for which there is no open 
connection, connection id 13 (kafka.network.Processor)
{noformat}

Here are counts of that WARN message visualized across all the brokers (to show 
that it happens occasionally on other brokers, but not nearly as much as it does on the 
affected broker):
!Screen Shot 2017-11-10 at 1.55.33 PM.png|thumbnail!

I can't make the heap dumps public, but would appreciate advice on how to pin 
down the problem better. We're currently trying to narrow it down to a 
particular client, but without much success so far.

Let me know what else I could investigate or share to track down the source of 
this leak.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: 0.11 git tags

2017-11-10 Thread Tom Bentley
That was it, thanks! But I still don't understand how I had tags for 1.0.0,
but not 0.11.0.0.

On 10 November 2017 at 13:10, Kamal  wrote:

> Try, `git fetch upstream --tags`
>
> On Fri, Nov 10, 2017 at 6:32 PM, Mickael Maison 
> wrote:
>
> > Have you tried:
> > git fetch --tags ?
> >
> > On Fri, Nov 10, 2017 at 12:54 PM, Tom Bentley 
> > wrote:
> > > Not when I run `git tag` locally though.
> > >
> > > On 10 November 2017 at 12:44, Ismael Juma  wrote:
> > >
> > >> They seem to be there:
> > >>
> > >> https://github.com/apache/kafka/tree/0.11.0.0
> > >> https://github.com/apache/kafka/tree/0.11.0.1
> > >>
> > >> Ismael
> > >>
> > >> On Fri, Nov 10, 2017 at 12:41 PM, Tom Bentley 
> > >> wrote:
> > >>
> > >> > I just noticed that although every other release is tagged in git
> > there
> > >> are
> > >> > no tags for 0.11.0.0 and 0.11.0.1. Did something go wrong? Please
> can
> > >> they
> > >> > be tagged retrospectively?
> > >> >
> > >> > Cheers,
> > >> >
> > >> > Tom
> > >> >
> > >>
> >
>


Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-11-10 Thread Steven Aerts
Colin, Ben,

Thanks for the input.

I will work out this proposal, so I get an idea of the impact.

Do you think it is a good idea to line up the new method names with those
of CompletableFuture?

Thanks,


   Steven

Op vr 10 nov. 2017 om 12:12 schreef Ben Stopford :

> Sounds like a good middle ground to me. What do you think Steven?
>
> On Mon, Nov 6, 2017 at 8:18 PM Colin McCabe  wrote:
>
> > It would definitely be nice to use the jdk8 CompletableFuture.  I think
> > that's a bit of a separate discussion, though, since it has such heavy
> > compatibility implications.
> >
> > How about making KIP-218 backwards compatible?  As a starting point, you
> > can change KafkaFuture#BiConsumer to an interface with no compatibility
> > implications, since there are currently no public functions exposed that
> > use it.  That leaves KafkaFuture#Function, which is publicly used now.
> >
> > For the purposes of KIP-218, how about adding a new interface
> > FunctionInterface?  Then you can add a function like this:
> >
> > >  public abstract <R> KafkaFuture<R> thenApply(FunctionInterface<T, R> function);
> >
> > And mark the older declaration as deprecated:
> >
> > >  @deprecated
> > >  public abstract <R> KafkaFuture<R> thenApply(Function<T, R> function);
> >
> > This is a 100% compatible way to make things nicer for java 8.
> >
> > cheers,
> > Colin
> >
> >
> > On Thu, Nov 2, 2017, at 10:38, Steven Aerts wrote:
> > > Hi Tom,
> > >
> > > Nice observation.
> > > I changed "Rejected Alternatives" section to "Other Alternatives", as
> > > I see myself as too much of an outsider to the kafka community to be
> > > able to decide without this discussion.
> > >
> > > I see two major factors to decide:
> > >  - how soon will KIP-118 (drop support of java 7) be implemented?
> > >  - for which reasons do we drop backwards compatibility for public
> > > interfaces marked as Evolving
> > >
> > > If KIP-118 which is scheduled for version 2.0.0 is going to be
> > > implemented soon, I agree with you that replacing KafkaFuture with
> > > CompletableFuture (or CompletionStage) is a preferable option.
> > > But as I am not familiar with the roadmap it is difficult to tell for
> me.
> > >
> > >
> > > Thanks,
> > >
> > >
> > >Steven
> > >
> > >
> > > 2017-11-02 11:27 GMT+01:00 Tom Bentley :
> > > > Hi Steven,
> > > >
> > > > I notice you've renamed the template's "Rejected Alternatives"
> section
> > to
> > > > "Other Alternatives", suggesting they're not rejected yet (or, if you
> > have
> > > > rejected them, I think you should give your reasons).
> > > >
> > > > Personally, I'd like to understand the arguments against simply
> > replacing
> > > > KafkaFuture with CompletableFuture in Kafka 2.0. In other words, if
> we
> > were
> > > > starting without needing to support Java 7 what would be the
> arguments
> > for
> > > > having our own KafkaFuture?
> > > >
> > > > Thanks,
> > > >
> > > > Tom
> > > >
> > > > On 1 November 2017 at 16:01, Ted Yu  wrote:
> > > >
> > > >> KAFKA-4423 is still open.
> > > >> When would Java 7 be dropped ?
> > > >>
> > > >> Thanks
> > > >>
> > > >> On Wed, Nov 1, 2017 at 8:56 AM, Ismael Juma 
> > wrote:
> > > >>
> > > >> > On Wed, Nov 1, 2017 at 3:51 PM, Ted Yu 
> wrote:
> > > >> >
> > > >> > > bq. Wait for a kafka release which will not support java 7
> anymore
> > > >> > >
> > > >> > > Do you want to raise a separate thread for the above ?
> > > >> > >
> > > >> >
> > > >> > There is already a KIP for this so a separate thread is not
> needed.
> > > >> >
> > > >> > Ismael
> > > >> >
> > > >>
> >
>
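
As a concrete illustration of the backwards compatible approach suggested in this thread,
here is a stripped-down model of the change. Names and shapes are only illustrative; this
is not the real org.apache.kafka.common.KafkaFuture.

{code:java}
// Sketch only: models how a new functional interface can sit next to the existing
// abstract class so that callers can pass Java 8 lambdas without breaking old code.
abstract class SketchKafkaFuture<T> {

    // Existing style: an abstract class, which a lambda cannot implement.
    public static abstract class Function<A, B> {
        public abstract B apply(A a);
    }

    // Proposed style: a functional interface, so thenApply(v -> ...) compiles.
    @FunctionalInterface
    public interface FunctionInterface<A, B> {
        B apply(A a);
    }

    // New overload taking the functional interface.
    public abstract <R> SketchKafkaFuture<R> thenApply(FunctionInterface<T, R> function);

    // Old overload kept for source and binary compatibility, marked deprecated.
    @Deprecated
    public abstract <R> SketchKafkaFuture<R> thenApply(Function<T, R> function);
}
{code}

Existing code that subclasses Function keeps compiling, while new code can simply write
future.thenApply(v -> ...), which resolves to the FunctionInterface overload because
lambdas only target functional interfaces.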


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Ismael Juma
Paolo,

You might return the timestamp if the user did a delete by timestamp for
example. But let's maybe hear what others think before we change the KIP.

Ismael

On Fri, Nov 10, 2017 at 12:48 PM, Paolo Patierno  wrote:

> Hi Ismael,
>
>
>   1.  Yes, it sounds better. Agree.
>   2.  We are using a wrapper class like RecordsToDelete in order to allow
> future operations that won't be based only on specifying an offset. At the same
> time, with this wrapper class and static methods (e.g. beforeOffset) the API
> is really clear to understand. About moving to a class for wrapping the
> lowWatermark instead of just a Long, do you think that in the future we could
> have some other information to return as part of the delete operation, or is it
> just for being clearer in terms of API (so not just with a comment)?
>
> Thanks,
> Paolo.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: isma...@gmail.com  on behalf of Ismael Juma <
> ism...@juma.me.uk>
> Sent: Friday, November 10, 2017 12:33 PM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new
> Admin Client API
>
> Hi Paolo,
>
> The KIP looks good, I have a couple of comments:
>
> 1. partitionsAndOffsets could perhaps be `recordsToDelete`.
> 2. It seems a bit inconsistent that the argument is `RecordsToDelete`, but
> the result is just a `Long`. Should the result be `DeleteRecords` or
> something like that? It could then have a field logStartOffset or
> lowWatermark instead of having to document it via a comment only.
>
> Ismael
>
> On Tue, Oct 3, 2017 at 6:51 AM, Paolo Patierno  wrote:
>
> > Hi all,
> >
> > I didn't see any further discussion around this KIP, so I'd like to start
> > the vote for it.
> >
> > Just for reference : https://cwiki.apache.org/confl
> > uence/display/KAFKA/KIP-204+%3A+adding+records+deletion+
> > operation+to+the+new+Admin+Client+API
> >
> >
> > Thanks,
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
>


[jira] [Resolved] (KAFKA-6198) kerberos login fails

2017-11-10 Thread Ronald van de Kuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald van de Kuil resolved KAFKA-6198.
---
Resolution: Fixed

> kerberos login fails
> 
>
> Key: KAFKA-6198
> URL: https://issues.apache.org/jira/browse/KAFKA-6198
> Project: Kafka
>  Issue Type: Test
>  Components: clients
>Affects Versions: 0.11.0.1
> Environment: raspberrypi
>Reporter: Ronald van de Kuil
>Priority: Minor
>
> I got very far with setting up kerberos on the raspberry pi as part of self 
> study. 
> I believe that the kafka server is happy with kerberos:
> [2017-11-10 12:17:51,659] INFO Successfully authenticated client: 
> authenticationID=kafka/pi99.dev.ibm@dev.ibm.com; 
> authorizationID=kafka/pi99.dev.ibm@dev.ibm.com. 
> (org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)
> [2017-11-10 12:17:51,661] INFO Setting authorizedID: kafka 
> (org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)
> I have setup the kafka.security.auth.SimpleAclAuthorizer
> And granted the following access:
> Current ACLs for resource `Topic:kerberos-topic`: 
>   User:producer has Allow permission for operations: Describe from hosts: 
> *
>   User:producer has Allow permission for operations: Write from hosts: *
>   User:produ...@dev.ibm.com has Allow permission for operations: Describe 
> from hosts: *
>   User:produ...@dev.ibm.com has Allow permission for operations: Write 
> from hosts: * 
> When I start the client, then I see it getting the kerberos ticket:
> [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - 
> Successfully logged in.
> [kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
> org.apache.kafka.common.security.kerberos.KerberosLogin - 
> [Principal=produ...@dev.ibm.com]: TGT refresh thread started.
> [kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
> org.apache.kafka.common.security.kerberos.KerberosLogin - 
> [Principal=produ...@dev.ibm.com]: TGT valid starting at: Fri Nov 10 12:50:11 
> CET 2017
> [kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
> org.apache.kafka.common.security.kerberos.KerberosLogin - 
> [Principal=produ...@dev.ibm.com]: TGT expires: Fri Nov 10 22:50:11 CET 2017
> [kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
> org.apache.kafka.common.security.kerberos.KerberosLogin - 
> [Principal=produ...@dev.ibm.com]: TGT refresh sleeping until: Fri Nov 10 
> 21:13:37 CET 2017
> But the client fails to login:
> [kafka-producer-network-thread | producer-1] WARN 
> org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
> during authentication. This may indicate that authentication failed due to 
> invalid credentials.
> I do not see any warnings in the logs, so I do not have much to go on.
> What can I do to get to the bottom of this issue?
> Thank you,
> Ronald - the NOOB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: 0.11 git tags

2017-11-10 Thread Kamal
Try `git fetch upstream --tags`.

On Fri, Nov 10, 2017 at 6:32 PM, Mickael Maison 
wrote:

> Have you tried:
> git fetch --tags ?
>
> On Fri, Nov 10, 2017 at 12:54 PM, Tom Bentley 
> wrote:
> > Not when I run `git tag` locally though.
> >
> > On 10 November 2017 at 12:44, Ismael Juma  wrote:
> >
> >> They seem to be there:
> >>
> >> https://github.com/apache/kafka/tree/0.11.0.0
> >> https://github.com/apache/kafka/tree/0.11.0.1
> >>
> >> Ismael
> >>
> >> On Fri, Nov 10, 2017 at 12:41 PM, Tom Bentley 
> >> wrote:
> >>
> >> > I just noticed that although every other release is tagged in git
> there
> >> are
> >> > no tags for 0.11.0.0 and 0.11.0.1. Did something go wrong? Please can
> >> they
> >> > be tagged retrospectively?
> >> >
> >> > Cheers,
> >> >
> >> > Tom
> >> >
> >>
>


Re: 0.11 git tags

2017-11-10 Thread Mickael Maison
Have you tried:
git fetch --tags ?

On Fri, Nov 10, 2017 at 12:54 PM, Tom Bentley  wrote:
> Not when I run `git tag` locally though.
>
> On 10 November 2017 at 12:44, Ismael Juma  wrote:
>
>> They seem to be there:
>>
>> https://github.com/apache/kafka/tree/0.11.0.0
>> https://github.com/apache/kafka/tree/0.11.0.1
>>
>> Ismael
>>
>> On Fri, Nov 10, 2017 at 12:41 PM, Tom Bentley 
>> wrote:
>>
>> > I just noticed that although every other release is tagged in git there
>> are
>> > no tags for 0.11.0.0 and 0.11.0.1. Did something go wrong? Please can
>> they
>> > be tagged retrospectively?
>> >
>> > Cheers,
>> >
>> > Tom
>> >
>>


Re: 0.11 git tags

2017-11-10 Thread Tom Bentley
Not when I run `git tag` locally though.

On 10 November 2017 at 12:44, Ismael Juma  wrote:

> They seem to be there:
>
> https://github.com/apache/kafka/tree/0.11.0.0
> https://github.com/apache/kafka/tree/0.11.0.1
>
> Ismael
>
> On Fri, Nov 10, 2017 at 12:41 PM, Tom Bentley 
> wrote:
>
> > I just noticed that although every other release is tagged in git there
> are
> > no tags for 0.11.0.0 and 0.11.0.1. Did something go wrong? Please can
> they
> > be tagged retrospectively?
> >
> > Cheers,
> >
> > Tom
> >
>


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Paolo Patierno
Hi Ismael,


  1.  Yes, it sounds better. Agree.
  2.  We are using a wrapper class like RecordsToDelete in order to allow
future operations that won't be based only on specifying an offset. At the same
time, with this wrapper class and static methods (e.g. beforeOffset) the API is
really clear to understand. About moving to a class for wrapping the
lowWatermark instead of just a Long, do you think that in the future we could have
some other information to return as part of the delete operation, or is it just for
being clearer in terms of API (so not just with a comment)?

Thanks,
Paolo.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: isma...@gmail.com  on behalf of Ismael Juma 

Sent: Friday, November 10, 2017 12:33 PM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Paolo,

The KIP looks good, I have a couple of comments:

1. partitionsAndOffsets could perhaps be `recordsToDelete`.
2. It seems a bit inconsistent that the argument is `RecordsToDelete`, but
the result is just a `Long`. Should the result be `DeleteRecords` or
something like that? It could then have a field logStartOffset or
lowWatermark instead of having to document it via a comment only.

Ismael

On Tue, Oct 3, 2017 at 6:51 AM, Paolo Patierno  wrote:

> Hi all,
>
> I didn't see any further discussion around this KIP, so I'd like to start
> the vote for it.
>
> Just for reference : https://cwiki.apache.org/confl
> uence/display/KAFKA/KIP-204+%3A+adding+records+deletion+
> operation+to+the+new+Admin+Client+API
>
>
> Thanks,
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: 0.11 git tags

2017-11-10 Thread Ismael Juma
They seem to be there:

https://github.com/apache/kafka/tree/0.11.0.0
https://github.com/apache/kafka/tree/0.11.0.1

Ismael

On Fri, Nov 10, 2017 at 12:41 PM, Tom Bentley  wrote:

> I just noticed that although every other release is tagged in git there are
> no tags for 0.11.0.0 and 0.11.0.1. Did something go wrong? Please can they
> be tagged retrospectively?
>
> Cheers,
>
> Tom
>


[GitHub] kafka pull request #4202: MINOR: Exclude Committer Checklist section from co...

2017-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4202


---


0.11 git tags

2017-11-10 Thread Tom Bentley
I just noticed that although every other release is tagged in git there are
no tags for 0.11.0.0 and 0.11.0.1. Did something go wrong? Please can they
be tagged retrospectively?

Cheers,

Tom


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Ismael Juma
Hi Paolo,

The KIP looks good, I have a couple of comments:

1. partitionsAndOffsets could perhaps be `recordsToDelete`.
2. It seems a bit inconsistent that the argument is `RecordsToDelete`, but
the result is just a `Long`. Should the result be `DeleteRecords` or
something like that? It could then have a field logStartOffset or
lowWatermark instead of having to document it via a comment only.

Ismael

On Tue, Oct 3, 2017 at 6:51 AM, Paolo Patierno  wrote:

> Hi all,
>
> I didn't see any further discussion around this KIP, so I'd like to start
> the vote for it.
>
> Just for reference : https://cwiki.apache.org/confl
> uence/display/KAFKA/KIP-204+%3A+adding+records+deletion+
> operation+to+the+new+Admin+Client+API
>
>
> Thanks,
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
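
To make the proposal concrete, a sketch of what the operation might look like from the caller's side, assuming the names discussed in this thread (RecordsToDelete.beforeOffset for the argument); the result accessors are placeholders, since whether the result is a plain Long or a small wrapper object is exactly what is being debated above:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.DeleteRecordsResult;
    import org.apache.kafka.clients.admin.RecordsToDelete;
    import org.apache.kafka.common.TopicPartition;

    public class DeleteRecordsSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0);
                // Ask the brokers to delete all records before offset 42 in my-topic-0.
                Map<TopicPartition, RecordsToDelete> recordsToDelete =
                        Collections.singletonMap(tp, RecordsToDelete.beforeOffset(42L));
                DeleteRecordsResult result = admin.deleteRecords(recordsToDelete);
                // The per-partition future would carry the new low watermark (log start offset);
                // lowWatermarks()/lowWatermark() are placeholder names, not the settled API.
                long lowWatermark = result.lowWatermarks().get(tp).get().lowWatermark();
                System.out.println("New low watermark for " + tp + ": " + lowWatermark);
            }
        }
    }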


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-10 Thread Damian Guy
+1 (binding) - thanks

On Tue, 7 Nov 2017 at 15:14 Paolo Patierno  wrote:

> Because Guozhang and Colin are doing a great job reviewing the related
> PR and we are really close to having it in a decent/final shape, what do the
> other committers think about this KIP?
>
> We are stuck at 2 binding votes (and something like 5 non-binding). We
> need the last binding vote to get the KIP accepted so we can start
> thinking about merging the PR for a future 1.1.0 release.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Kamal 
> Sent: Monday, November 6, 2017 1:26 AM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new
> Admin Client API
>
> +1 (non-binding)
>
> On Thu, Nov 2, 2017 at 2:13 PM, Paolo Patierno  wrote:
>
> > Thanks Jason !
> >
> >
> > I have just updated the KIP with DeleteRecordsOptions definition.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Jason Gustafson 
> > Sent: Wednesday, November 1, 2017 10:09 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the
> new
> > Admin Client API
> >
> > +1 (binding). Just one nit: the KIP doesn't define the
> DeleteRecordsOptions
> > object. I see it's empty in the PR, but we may as well document the full
> > API in the KIP.
> >
> > -Jason
> >
> >
> > On Wed, Nov 1, 2017 at 2:54 PM, Guozhang Wang 
> wrote:
> >
> > > Made a pass over the PR and left some comments. I'm +1 on the wiki
> design
> > > page as well.
> > >
> > > On Tue, Oct 31, 2017 at 7:13 AM, Bill Bejeck 
> wrote:
> > >
> > > > +1
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Tue, Oct 31, 2017 at 4:36 AM, Paolo Patierno 
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > >
> > > > > because I don't see any further discussion around KIP-204 (
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 204+%3A+adding+records+deletion+operation+to+the+new+
> > Admin+Client+API)
> > > > > and I have already opened a PR with the implementation, can we
> > re-cover
> > > > the
> > > > > vote started on October 18 ?
> > > > >
> > > > > There are only "non binding" votes up to now.
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > > Paolo Patierno
> > > > > Senior Software Engineer (IoT) @ Red Hat
> > > > > Microsoft MVP on Azure & IoT
> > > > > Microsoft Azure Advisor
> > > > >
> > > > > Twitter : @ppatierno
> > > > > Linkedin : paolopatierno
> > > > > Blog : DevExperience
> > > > >
> > > > >
> > > > > 
> > > > > From: Viktor Somogyi 
> > > > > Sent: Wednesday, October 18, 2017 10:49 AM
> > > > > To: dev@kafka.apache.org
> > > > > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to
> > the
> > > > new
> > > > > Admin Client API
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > On Wed, Oct 18, 2017 at 8:23 AM, Manikumar <
> > manikumar.re...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > + (non-binding)
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Manikumar
> > > > > >
> > > > > > On Tue, Oct 17, 2017 at 7:42 AM, Dong Lin 
> > > wrote:
> > > > > >
> > > > > > > Thanks for the KIP. +1 (non-binding)
> > > > > > >
> > > > > > > On Wed, Oct 11, 2017 at 2:27 AM, Ted Yu 
> > > wrote:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno <
> > > > ppatie...@live.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > I didn't see any further discussion around this KIP, so I'd
> > > like
> > > > to
> > > > > > > start
> > > > > > > > > the vote for it.
> > > > > > > > >
> > > > > > > > > Just for reference : https://cwiki.apache.org/
> > > > > > > > > confluence/display/KAFKA/KIP-204+%3A+adding+records+
> > > > > > > > > deletion+operation+to+the+new+Admin+Client+API
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > >
> > > > > > > > > Paolo Patierno
> > > > > > > > > Senior Software Engineer (IoT) @ Red Hat

[jira] [Created] (KAFKA-6198) kerberos login fails

2017-11-10 Thread Ronald van de Kuil (JIRA)
Ronald van de Kuil created KAFKA-6198:
-

 Summary: kerberos login fails
 Key: KAFKA-6198
 URL: https://issues.apache.org/jira/browse/KAFKA-6198
 Project: Kafka
  Issue Type: Test
  Components: clients
Affects Versions: 0.11.0.1
 Environment: raspberrypi
Reporter: Ronald van de Kuil
Priority: Minor


I got very far with setting up kerberos on the raspberry pi as part of self 
study. 

I believe that the kafka server is happy with kerberos:

[2017-11-10 12:17:51,659] INFO Successfully authenticated client: 
authenticationID=kafka/pi99.dev.ibm@dev.ibm.com; 
authorizationID=kafka/pi99.dev.ibm@dev.ibm.com. 
(org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)
[2017-11-10 12:17:51,661] INFO Setting authorizedID: kafka 
(org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)

I have setup the kafka.security.auth.SimpleAclAuthorizer

And granted the following access:

Current ACLs for resource `Topic:kerberos-topic`: 
User:producer has Allow permission for operations: Describe from hosts: 
*
User:producer has Allow permission for operations: Write from hosts: *
User:produ...@dev.ibm.com has Allow permission for operations: Describe 
from hosts: *
User:produ...@dev.ibm.com has Allow permission for operations: Write 
from hosts: * 

When I start the client, then I see it getting the kerberos ticket:

[main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - 
Successfully logged in.
[kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
org.apache.kafka.common.security.kerberos.KerberosLogin - 
[Principal=produ...@dev.ibm.com]: TGT refresh thread started.
[kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
org.apache.kafka.common.security.kerberos.KerberosLogin - 
[Principal=produ...@dev.ibm.com]: TGT valid starting at: Fri Nov 10 12:50:11 
CET 2017
[kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
org.apache.kafka.common.security.kerberos.KerberosLogin - 
[Principal=produ...@dev.ibm.com]: TGT expires: Fri Nov 10 22:50:11 CET 2017
[kafka-kerberos-refresh-thread-produ...@dev.ibm.com] INFO 
org.apache.kafka.common.security.kerberos.KerberosLogin - 
[Principal=produ...@dev.ibm.com]: TGT refresh sleeping until: Fri Nov 10 
21:13:37 CET 2017

But the client fails to login:

[kafka-producer-network-thread | producer-1] WARN 
org.apache.kafka.clients.NetworkClient - Connection to node -1 terminated 
during authentication. This may indicate that authentication failed due to 
invalid credentials.

I do not see any warnings in the logs, so I do not have much to go on.

What can I do to get to the bottom of this issue?

Thank you,

Ronald - the NOOB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
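
Not part of the original report, but a sketch of the client-side settings that are usually worth double-checking when the broker accepts the Kerberos login and the client still fails during authentication. The property names are the standard Kafka SASL/GSSAPI client configs; the broker host/port and file paths are assumptions based on the principals above:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class KerberosProducerSketch {
        public static void main(String[] args) {
            // Run with the usual JVM switches to see what the handshake is doing, e.g.:
            //   -Djava.security.auth.login.config=/etc/kafka/producer_jaas.conf   (assumed path)
            //   -Dsun.security.krb5.debug=true
            //   -Djava.security.debug=logincontext
            Properties props = new Properties();
            props.put("bootstrap.servers", "pi99.dev.ibm.com:9093");   // assumed SASL listener address
            props.put("security.protocol", "SASL_PLAINTEXT");          // or SASL_SSL if TLS is enabled
            props.put("sasl.mechanism", "GSSAPI");
            props.put("sasl.kerberos.service.name", "kafka");          // must match the broker principal's primary
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("kerberos-topic", "key", "value"));
                producer.flush();
            }
        }
    }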


[GitHub] kafka pull request #4202: MINOR: Exclude Committer Checklist section from co...

2017-11-10 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4202

MINOR: Exclude Committer Checklist section from commit message

### Committer Checklist
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
exclude-committer-checklist-when-merging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4202.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4202


commit 23605d55643d1e517944bebe0aaaf42e6b602322
Author: Ismael Juma 
Date:   2017-11-10T11:50:48Z

MINOR: Exclude Committer Checklist section from commit message




---


Build failed in Jenkins: kafka-trunk-jdk8 #2207

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use Scala Future in CoreUtils test

--
[...truncated 381.89 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Ismael Juma
Features for 2.0.0 will be known after 1.1.0 is released in February 2018.
We are still doing the usual time-based release process[1].

I am raising this well ahead of time because of the potential impact of
removing the old Scala clients (particularly the old high-level consumer)
and dropping support for Java 7. Hopefully users can then plan accordingly.
We would do these changes in trunk soon after 1.1.0 is released (around
February).

I think it makes sense to complete some of the work that was not ready in
time for 1.0.0 (Controller improvements and JBOD are two that come to mind)
in 1.1.0 (January 2018) and combined with the desire to give advance
notice, June 2018 was the logical choice.

There is no plan to support a particular release for longer. 1.x versus 2.x
is no different than 0.10.x versus 0.11.x from the perspective of
supporting older releases.

[1] https://cwiki.apache.org/confluence/display/KAFKA/Time+
Based+Release+Plan

On Fri, Nov 10, 2017 at 11:21 AM, Jaikiran Pai 
wrote:

> Hi Ismael,
>
> Are there any new features other than the language specific changes that
> are being planned for 2.0.0? Also, when 2.x gets released, will the 1.x
> series see continued bug fixes and releases in the community or is the plan
> to have one single main version that gets continuous updates and releases?
>
> By the way, why June 2018? :)
>
> -Jaikiran
>
>
>
> On 09/11/17 3:14 PM, Ismael Juma wrote:
>
>> Hi all,
>>
>> I'm starting this discussion early because of the potential impact.
>>
>> Kafka 1.0.0 was just released and the focus was on achieving the original
>> project vision in terms of features provided while maintaining
>> compatibility for the most part (i.e. we did not remove deprecated
>> components like the Scala clients).
>>
>> This was the right decision, in my opinion, but it's time to start
>> thinking
>> about 2.0.0, which is an opportunity for us to remove major deprecated
>> components and to benefit from Java 8 language enhancements (so that we
>> can
>> move faster). So, I propose the following for Kafka 2.0.0:
>>
>> 1. It should be released in June 2018
>> 2. The Scala clients (Consumer, SimpleConsumer, Producer, SyncProducer)
>> will be removed
>> 3. Java 8 or higher will be required, i.e. support for Java 7 will be
>> dropped.
>>
>> Thoughts?
>>
>> Ismael
>>
>>
>
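
For context on point 2, removing the old Scala clients means any remaining code on them would move to the Java clients that have shipped since 0.9/0.10. A minimal sketch of the replacement consumer loop (topic, group id and broker address are illustrative):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class NewConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "my-group");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    // poll(long) is the pre-2.0 signature; records are fetched in batches.
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("%s-%d@%d: %s%n", record.topic(), record.partition(),
                                record.offset(), record.value());
                    }
                }
            }
        }
    }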


Build failed in Jenkins: kafka-trunk-jdk7 #2967

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use Scala Future in CoreUtils test

--
[...truncated 1.84 MB...]
org.apache.kafka.streams.KafkaStreamsTest > testStateGlobalThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce 

Re: [DISCUSS] Kafka 2.0.0 in June 2018

2017-11-10 Thread Jaikiran Pai

Hi Ismael,

Are there any new features other than the language specific changes that 
are being planned for 2.0.0? Also, when 2.x gets released, will the 1.x 
series see continued bug fixes and releases in the community or is the 
plan to have one single main version that gets continuous updates and 
releases?


By the way, why June 2018? :)

-Jaikiran


On 09/11/17 3:14 PM, Ismael Juma wrote:

Hi all,

I'm starting this discussion early because of the potential impact.

Kafka 1.0.0 was just released and the focus was on achieving the original
project vision in terms of features provided while maintaining
compatibility for the most part (i.e. we did not remove deprecated
components like the Scala clients).

This was the right decision, in my opinion, but it's time to start thinking
about 2.0.0, which is an opportunity for us to remove major deprecated
components and to benefit from Java 8 language enhancements (so that we can
move faster). So, I propose the following for Kafka 2.0.0:

1. It should be released in June 2018
2. The Scala clients (Consumer, SimpleConsumer, Producer, SyncProducer)
will be removed
3. Java 8 or higher will be required, i.e. support for Java 7 will be
dropped.

Thoughts?

Ismael





Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-11-10 Thread Ben Stopford
Sounds like a good middle ground to me. What do you think Steven?

On Mon, Nov 6, 2017 at 8:18 PM Colin McCabe  wrote:

> It would definitely be nice to use the jdk8 CompletableFuture.  I think
> that's a bit of a separate discussion, though, since it has such heavy
> compatibility implications.
>
> How about making KIP-218 backwards compatible?  As a starting point, you
> can change KafkaFuture#BiConsumer to an interface with no compatibility
> implications, since there are currently no public functions exposed that
> use it.  That leaves KafkaFuture#Function, which is publicly used now.
>
> For the purposes of KIP-218, how about adding a new interface
> FunctionInterface?  Then you can add a function like this:
>
> >  public abstract <R> KafkaFuture<R> thenApply(FunctionInterface<T, R> function);
>
> And mark the older declaration as deprecated:
>
> >  @deprecated
> >  public abstract <R> KafkaFuture<R> thenApply(Function<T, R> function);
>
> This is a 100% compatible way to make things nicer for java 8.
>
> cheers,
> Colin
>
>
> On Thu, Nov 2, 2017, at 10:38, Steven Aerts wrote:
> > Hi Tom,
> >
> > Nice observation.
> > I changed "Rejected Alternatives" section to "Other Alternatives", as
> > I see myself as too much of an outsider to the kafka community to be
> > able to decide without this discussion.
> >
> > I see two major factors to decide:
> >  - how soon will KIP-118 (drop support of java 7) be implemented?
> >  - for which reasons do we drop backwards compatibility for public
> > interfaces marked as Evolving
> >
> > If KIP-118 which is scheduled for version 2.0.0 is going to be
> > implemented soon, I agree with you that replacing KafkaFuture with
> > CompletableFuture (or CompletionStage) is a preferable option.
> > But as I am not familiar with the roadmap it is difficult to tell for me.
> >
> >
> > Thanks,
> >
> >
> >Steven
> >
> >
> > 2017-11-02 11:27 GMT+01:00 Tom Bentley :
> > > Hi Steven,
> > >
> > > I notice you've renamed the template's "Rejected Alternatives" section
> to
> > > "Other Alternatives", suggesting they're not rejected yet (or, if you
> have
> > > rejected them, I think you should give your reasons).
> > >
> > > Personally, I'd like to understand the arguments against simply
> replacing
> > > KafkaFuture with CompletableFuture in Kafka 2.0. In other words, if we
> were
> > > starting without needing to support Java 7 what would be the arguments
> for
> > > having our own KafkaFuture?
> > >
> > > Thanks,
> > >
> > > Tom
> > >
> > > On 1 November 2017 at 16:01, Ted Yu  wrote:
> > >
> > >> KAFKA-4423 is still open.
> > >> When would Java 7 be dropped ?
> > >>
> > >> Thanks
> > >>
> > >> On Wed, Nov 1, 2017 at 8:56 AM, Ismael Juma 
> wrote:
> > >>
> > >> > On Wed, Nov 1, 2017 at 3:51 PM, Ted Yu  wrote:
> > >> >
> > >> > > bq. Wait for a kafka release which will not support java 7 anymore
> > >> > >
> > >> > > Do you want to raise a separate thread for the above ?
> > >> > >
> > >> >
> > >> > There is already a KIP for this so a separate thread is not needed.
> > >> >
> > >> > Ismael
> > >> >
> > >>
>


[GitHub] kafka pull request #4201: MINOR: Exclude PULL_REQUEST_TEMPLATE.md from rat c...

2017-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4201


---


Build failed in Jenkins: kafka-trunk-jdk8 #2206

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Add pull request template

--
[...truncated 383.91 KB...]

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > 

[GitHub] kafka pull request #4201: MINOR: Exclude PULL_REQUEST_TEMPLATE.md from rat c...

2017-11-10 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4201

MINOR: Exclude PULL_REQUEST_TEMPLATE.md from rat checks

### Committer Checklist
- [ ] Verify design and implementation 
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
exclude-pull-request-template-from-rat

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4201.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4201


commit 55da17602c3fdd9198c509652c6d709528e2182d
Author: Ismael Juma 
Date:   2017-11-10T10:23:35Z

MINOR: Exclude PULL_REQUEST_TEMPLATE.md from rat checks




---


Build failed in Jenkins: kafka-trunk-jdk9 #190

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use Scala Future in CoreUtils test

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision b9039bbd4b3d8eb231c9f2d1d40042706f8f4f19 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b9039bbd4b3d8eb231c9f2d1d40042706f8f4f19
Commit message: "MINOR: Use Scala Future in CoreUtils test"
 > git rev-list 14e3ed048c96bc441f5fc5c893833f705fd53241 # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins4394690524172699163.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.gradle.internal.reflect.JavaMethod 
(file:/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/lib/gradle-base-services-3.4.jar)
 to method java.lang.ClassLoader.getPackages()
WARNING: Please consider reporting this to the maintainers of 
org.gradle.internal.reflect.JavaMethod
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.4-rc-2/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.586 secs
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins6864656176440027719.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean test -PscalaVersion=2.12
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.2.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.12.4
:clean
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:jmh-benchmarks:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:connect:transforms:clean
:streams:examples:clean
:compileJava NO-SOURCE
:processResources NO-SOURCE
:classes UP-TO-DATE
:rat
Unknown license: 

:rat FAILED

FAILURE: Build failed with an exception.

* Where:
Script ' 
line: 63

* What went wrong:
Execution failed for task ':rat'.
> Found 1 files with unknown licenses.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

* Get more help at https://help.gradle.org

BUILD FAILED in 18s
16 actionable tasks: 15 executed, 1 up-to-date
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2

Re: [DISCUSS] KIP-212: Enforce set of legal characters for connector names

2017-11-10 Thread Sönke Liebau
Hi Randall,

I have set aside some time to work on this next week. The fix itself is
quite simple, but I've yet to write tests that properly catch this, which
turns out to be a bit more complex, as it needs a running REST server, which
is mocked in the tests I've looked at so far.

Should I withdraw the KIP or update it to reflect the documentation changes
and the enforced rules around trimming and zero-length connector names? This is
a change to existing behavior, even if it is quite small and probably won't
even be noticed by many people.

best regards,
Sönke

On Thu, Nov 9, 2017 at 9:10 PM, Randall Hauch  wrote:

> Any progress on updating the PR and withdrawing KIP-212?
>
> On Fri, Oct 27, 2017 at 5:19 PM, Randall Hauch  wrote:
>
> > Yes, connector names should not be blank or contain just whitespace. In
> > fact, I might recommend that we trim whitespace at the front and rear of
> > new connector names and then disallowing any zero-length name. Existing
> > connectors would remain valid, and this would not break backward
> > compatibility. That might require a small kip simply to update the
> > documentation and specify what names are valid.
> >
> > WDYT?
> >
> > Randall
> >
> > On Fri, Oct 27, 2017 at 1:08 PM, Colin McCabe 
> wrote:
> >
> >> On Wed, Oct 25, 2017, at 01:07, Sönke Liebau wrote:
> >> > I've spent some time looking at this and testing various characters
> and
> >> > it
> >> > would appear that Randall's suspicion was spot on. I think we can
> >> support
> >> > a
> >> > fairly large set of characters with very minor changes.
> >> >
> >> > I was put of by the exceptions that were thrown when creating
> connectors
> >> > with certain characters and suspected a larger underlying problem when
> >> in
> >> > fact the only issue is, that the URL in the rest request used to
> >> retrieve
> >> > the response for the create connector request needs to be percent
> >> encoded
> >> > [1].
> >> >
> >> > I've fixed this and done some local testing which worked out quite
> >> > nicely,
> >> > apart from two special cases, I've not been able to find characters
> that
> >> > created issues, even space and slash work.
> >> > The mentioned special cases are:
> >> >   \  - if the name contains a backslash that is not the beginning of a
> >> > valid escape sequence the request fails before we ever get it in
> >> > ConnectorsResource, so a backslash would need to be escaped: \\
> >> >   "  - Quotation marks need to be escaped as well to keep the json
> body
> >> >   of
> >> > the request legal: \"
> >> > In both cases the escape character will be part of the connector name
> >> and
> >> > need to be specified in the url to retrieve the connector as well,
> even
> >> > though we could URL encode it in a legal way without escaping here. So
> >> > they
> >> > work, not sure if I'd recommend using those characters, but no real
> >> > reason
> >> > to prohibit people from using them that I can see either.
> >>
> >> Good research, Sönke.
> >>
> >> >
> >> >
> >> > What I'd do going forward is:
> >> > - withdraw the KIP, as I don't see a real need for one, since this is
> >> not
> >> > changing anything, just fixing things.
> >> > - add a section to the documentation around legal characters, specify
> >> the
> >> > ones I tested explicitly (url encoded %20 - %7F) and mention that most
> >> > other characters should work as well but no guarantees are given
> >> > - update the pull request for KAFKA-4930 to allow all characters but
> >> > still
> >> > prohibit creating a connector with an empty name. I'd propose to keep
> >> the
> >> > validator though as it'll give us a central location to do any
> checking
> >> > that might turn out to be necessary later on.
> >>
> >> Are empty names currently allowed?  That's unfortunate.
> >>
> >> > - add some integration tests to check connectors with special
> characters
> >> > in
> >> > their names work
> >> > - fix the url encoding line in ConnectorsResource
> >> >
> >> > Does that sound fair to everybody?
> >>
> >> It sounds good to me, but I will let someone more knowledgeable about
> >> connect chime in.
> >>
> >> best,
> >> Colin
> >>
> >> >
> >> > Kind regards,
> >> > Sönke
> >> >
> >> > [1]
> >> > https://github.com/apache/kafka/blob/trunk/connect/runtime/
> >> src/main/java/org/apache/kafka/connect/runtime/rest/
> >> resources/ConnectorsResource.java#L102
> >> >
> >> > On Tue, Oct 24, 2017 at 8:40 PM, Colin McCabe 
> >> wrote:
> >> >
> >> > > On Tue, Oct 24, 2017, at 11:28, Sönke Liebau wrote:
> >> > > > Hi,
> >> > > >
> >> > > > after reading your messages I'll grant that I might have picked a
> >> > > > somewhat
> >> > > > draconic option to solve these issues.
> >> > > >
> >> > > > In general I believe that properly encoding the URLs after having
> >> created
> >> > > > the connectors should solve a lot of the issues already. For some
> >> > > > characters the rest api returns an error on creating 
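
The fix Sönke describes is essentially percent-encoding the connector name before it is put back into the follow-up request URL. A small plain-JDK illustration of the idea (this is not the actual ConnectorsResource patch, just the encoding step):

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class ConnectorNameEncodingSketch {
        public static void main(String[] args) throws Exception {
            String connectorName = "my connector/v1";   // contains a space and a slash
            // Without encoding, this produces an invalid request URI for the REST call.
            String naive = "/connectors/" + connectorName;
            // URLEncoder uses form encoding ("+" for spaces), so swap in "%20" for a path segment.
            String encoded = "/connectors/"
                    + URLEncoder.encode(connectorName, StandardCharsets.UTF_8.name()).replace("+", "%20");
            System.out.println(naive);    // /connectors/my connector/v1
            System.out.println(encoded);  // /connectors/my%20connector%2Fv1
        }
    }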

Re: [DISCUSS] 0.11.0.2 bug fix release

2017-11-10 Thread Rajini Sivaram
Update on 0.11.0.2 release:

I moved some of the JIRAs out of this release. If anyone thinks they should
be included, please let me know.

There are still a couple left (and a 3rd one which is a duplicate). They
are not blockers, so I will move them out of 0.11.0.2 if there are no
objections.

   - https://issues.apache.org/jira/browse/KAFKA-4827
   - https://issues.apache.org/jira/browse/KAFKA-5936

The plan is to build the first RC later today. So please let me know if you
think there are any blockers.

Thank you!

Rajini





On Mon, Nov 6, 2017 at 9:47 PM, Rajini Sivaram 
wrote:

> There is currently no one assigned to https://issues.apache.org/
> jira/browse/KAFKA-4972 and https://issues.apache.org/
> jira/browse/KAFKA-3955. I am not sure either of these has an easy fix. So
> unless some one picks them up soon, they may need to get moved to the next
> release.
>
> On Mon, Nov 6, 2017 at 9:29 PM, Rajini Sivaram 
> wrote:
>
>> Thanks, Guozhang. I will move  KAFKA-1923 out of 0.11.0.2.
>>
>> On Mon, Nov 6, 2017 at 5:36 PM, Guozhang Wang  wrote:
>>
>>> Thanks Rajini, +1 on releasing a 0.11.0.2.
>>>
>>> I made a pass over the outstanding issues, and added another related
>>> issue (
>>> https://issues.apache.org/jira/browse/KAFKA-4767) to KAFKA-5936 to the
>>> list.
>>>
>>> For the other outstanding ones that do not yet have assigned to anyone
>>> yet,
>>> we'd need to start working on them asap if we want to include their fixes
>>> by Friday. Among them, only KAFKA-1923 seems not straight forward, so my
>>> suggestion is to move that issue out of the scope while driving to finish
>>> others sooner.
>>>
>>> Guozhang
>>>
>>>
>>> On Mon, Nov 6, 2017 at 2:47 AM, Rajini Sivaram 
>>> wrote:
>>>
>>> > Hi all,
>>> >
>>> >
>>> > Since we have fixed some critical issues since 0.11.0.1, it must be
>>> time
>>> > for a 0.11.0.2 release.
>>> >
>>> >
>>> > Since the 0.11.0.1 release we've fixed 14 JIRAs that are targeted for
>>> > 0.11.0.2:
>>> >
>>> > https://issues.apache.org/jira/browse/KAFKA-6134?jql=
>>> > project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%
>>> > 20Closed)%20AND%20fixVersion%20%3D%200.11.0.2
>>> >
>>> >
>>> >
>>> > We have 11 outstanding issues that are targeted for 0.11.0.2:
>>> >
>>> > https://issues.apache.org/jira/browse/KAFKA-6007?jql=
>>> > project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%
>>> > 22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%
>>> > 20fixVersion%20%3D%200.11.0.2
>>> >
>>> >
>>> > Can the owners of these issues please resolve them soon or move them
>>> to a
>>> > future release?
>>> >
>>> >
>>> > I will aim to create the first RC for 0.11.0.2 this Friday.
>>> >
>>> > Thank you!
>>> >
>>> > Regards,
>>> >
>>> > Rajini
>>> >
>>>
>>>
>>>
>>> --
>>> -- Guozhang
>>>
>>
>>
>


[GitHub] kafka pull request #4142: MINOR: Use Scala Future in CoreUtils test

2017-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4142


---


[GitHub] kafka pull request #551: KAFKA-2858; Introduce `SimplePrincipal` and use it ...

2017-11-10 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/551


---


Build failed in Jenkins: kafka-trunk-jdk7 #2966

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Add pull request template

--
[...truncated 382.50 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #2205

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: GlobalStateManagerImpl in trunk has renamed the consumer 
field

--
[...truncated 3.32 MB...]
kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED


Jenkins build is back to normal : kafka-trunk-jdk7 #2965

2017-11-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk9 #189

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Add pull request template

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H18 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 14e3ed048c96bc441f5fc5c893833f705fd53241 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 14e3ed048c96bc441f5fc5c893833f705fd53241
Commit message: "MINOR: Add pull request template"
 > git rev-list 84ddff6792a2082a1940664d42f5b03edc617c31 # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins8809419930288820225.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.gradle.internal.reflect.JavaMethod 
(file:/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/lib/gradle-base-services-3.4.jar)
 to method java.lang.ClassLoader.getPackages()
WARNING: Please consider reporting this to the maintainers of 
org.gradle.internal.reflect.JavaMethod
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.4-rc-2/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
Cleaned up directory 
'
Cleaned up directory 
'
:downloadWrapper

BUILD SUCCESSFUL

Total time: 10.336 secs
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins1976768694206036862.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean test -PscalaVersion=2.12
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.2.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.12.4
:clean
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:compileJava NO-SOURCE
:processResources NO-SOURCE
:classes UP-TO-DATE
:rat
Unknown license: 

:rat FAILED

FAILURE: Build failed with an exception.

* Where:
Script ' 
line: 63

* What went wrong:
Execution failed for task ':rat'.
> Found 1 files with unknown licenses.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

* Get more help at https://help.gradle.org

BUILD FAILED in 12s
16 actionable tasks: 4 executed, 12 up-to-date
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user 
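
The ':rat' failure earlier in this log comes from Apache Rat flagging a newly added 
file with no recognized license. As a rough illustration only (not necessarily the 
fix applied here), a Java or Scala source file passes the Rat check when it carries 
the standard ASF license header, e.g.:

    /*
     * Licensed to the Apache Software Foundation (ASF) under one or more
     * contributor license agreements. See the NOTICE file distributed with
     * this work for additional information regarding copyright ownership.
     * The ASF licenses this file to You under the Apache License, Version 2.0
     * (the "License"); you may not use this file except in compliance with
     * the License. You may obtain a copy of the License at
     *
     *    http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */

Files that cannot carry such a header (for example a markdown pull request template) 
would instead have to be excluded from the Rat check in the build configuration.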

[GitHub] kafka pull request #4174: MINOR: Add pull request template

2017-11-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4174


---


Build failed in Jenkins: kafka-trunk-jdk9 #188

2017-11-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: GlobalStateManagerImpl in trunk has renamed the consumer 
field

--
[...truncated 1.41 MB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED