Re: Kafka official docker image

2019-01-15 Thread Stevo Slavić
FYI
https://docs.confluent.io/current/installation/docker/docs/image-reference.html

On Wed, Jan 16, 2019 at 12:06 AM Олег Иванов  wrote:

> Hi,
>
> Could you please create an official docker image of kafka? There are a lot
> of custom images on Docker Hub, but our company's security policy allows
> only official images.
>
> Thanks!
>


Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-24 Thread Stevo Slavić
+1 (non-binding)

On Tue, Apr 24, 2018 at 5:29 PM, Matthias J. Sax 
wrote:

> +1
>
> On 4/24/18 4:21 PM, Jason Gustafson wrote:
> > +1 Thanks Ismael.
> >
> > On Tue, Apr 24, 2018 at 5:01 AM, Thomas Crayford <
> tcrayf...@salesforce.com>
> > wrote:
> >
> >> +1 (non-binding). Whilst 1.0 -> 2.0 visibly seems like a "big" jump in
> >> version numbers to have so soon after 1.0, it makes sense given the semver
> >> requirements and the deprecations it contains.
> >>
> >> Thanks
> >>
> >> Tom Crayford
> >> Heroku Kafka
> >>
> >> On Tue, Apr 24, 2018 at 11:20 AM, zhenya Sun  wrote:
> >>
> >>> no-binding。 +1
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> from my iphone!
> >>> On 04/24/2018 18:19, Sandor Murakozi wrote:
> >>> +1 (non-binding).
> >>> Thx Ismael
> >>>
> >>> On Thu, Apr 19, 2018 at 10:55 PM, Matt Farmer  wrote:
> >>>
>  +1 (non-binding). TY!
> 
>  On Thu, Apr 19, 2018 at 11:56 AM, tao xiao 
> >> wrote:
> 
> > +1 non-binding. thx Ismael
> >
> > On Thu, 19 Apr 2018 at 23:14 Vahid S Hashemian <
>  vahidhashem...@us.ibm.com>
> > wrote:
> >
> >> +1 (non-binding).
> >>
> >> Thanks Ismael.
> >>
> >> --Vahid
> >>
> >>
> >>
> >> From:   Jorge Esteban Quilcate Otoya 
> >> To: dev@kafka.apache.org
> >> Date:   04/19/2018 07:32 AM
> >> Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
> >>
> >>
> >>
> >> +1 (non binding), thanks Ismael!
> >>
> >> El jue., 19 abr. 2018 a las 13:01, Manikumar (<
>  manikumar.re...@gmail.com
> >> )
> >> escribió:
> >>
> >>> +1 (non-binding).
> >>>
> >>> Thanks.
> >>>
> >>> On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> >>> steph...@simplemachines.com.au> wrote:
> >>>
>  +1 (non binding). Thanks Ismael!
> 
>  On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira, <
> >> g...@confluent.io>
> >> wrote:
> 
> > +1 (binding)
> >
> > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma <
> >>> ism...@juma.me.uk
> >
> >>> wrote:
> >
> >> Hi all,
> >>
> >> I started a discussion last year about bumping the version
> >> of
>  the
> >>> June
> > 2018
> >> release to 2.0.0[1]. To reiterate the reasons in the
> >> original
> >> post:
> >>
> >> 1. Adopt KIP-118 (Drop Support for Java 7), which requires
> >> a
> > major
> > version
> >> bump due to semantic versioning.
> >>
> >> 2. Take the chance to remove deprecated code that was
>  deprecated
> >>> prior
>  to
> >> 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so
>  that
> >> we
> >>> can
> >> move faster.
> >>
> >> One concern that was raised is that we still do not have a
> > rolling
> > upgrade
> >> path for the old ZK-based consumers. Since the Scala
> >> clients
> >> haven't
>  been
> >> updated in a long time (they don't support security or the
>  latest
>  message
> >> format), users who need them can continue to use 1.1.0 with
> >>> no
> >> loss
> >>> of
> >> functionality.
> >>
> >> Since it's already mid-April and people seemed receptive
> >>> during
> >> the
> >> discussion last year, I'm going straight to a vote, but we
> >>> can
> >>> discuss
> > more
> >> if needed (of course).
> >>
> >> Ismael
> >>
> >> [1]
> >>
> >>
> >> https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c933281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> >>
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> > <http://www.confluent.io/blog>
> >>>
> >
> 
> >>>
> >>
> 

Re: [VOTE] KIP-187 - Add cumulative count metric for all Kafka rate metrics

2017-08-29 Thread Stevo Slavić
+1 (non-binding)

On Tue, Aug 29, 2017 at 11:09 AM, Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding) from me.
>
> Ismael
>
> On Thu, Aug 24, 2017 at 6:48 PM, Rajini Sivaram 
> wrote:
>
> > Hi all,
> >
> > I would like to start the vote on KIP-187 that adds a cumulative count
> > metric associated with each Kafka rate metric to improve downstream
> > processing of rate metrics. Details are here:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 187+-+Add+cumulative+count+metric+for+all+Kafka+rate+metrics
> >
> >
> > Thank you,
> >
> > Rajini
> >
>
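
A hedged sketch of the rate/count pairing the KIP describes, using the
org.apache.kafka.common.metrics API; Total is used here as a stand-in
cumulative stat (the exact stat the KIP adds is specified on the wiki page),
and the sensor, metric and group names are illustrative.

import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;
import org.apache.kafka.common.metrics.stats.Total;

public class RateAndTotalSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor sensor = metrics.sensor("bytes-sent");
        // the windowed rate metric that exists today
        sensor.add(metrics.metricName("bytes-sent-rate", "demo-group"), new Rate());
        // the cumulative companion the KIP proposes, letting downstream systems
        // compute rates over their own windows
        sensor.add(metrics.metricName("bytes-sent-total", "demo-group"), new Total());
        sensor.record(1024.0);
        metrics.close();
    }
}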


Re: [DISCUSS] KIP-186: Increase offsets retention default to 7 days

2017-08-16 Thread Stevo Slavić
+1 for making the default log and offsets retention times consistent.
I like Stephane's suggestion too: a log retention override should also
override offsets retention, if the latter is not explicitly configured.

Please consider additionally (a config sketch follows this list):
- introducing an offsets.retention.hours config property
- syncing log and offsets retention.check.interval.ms, if there's no real
reason for the two to differ
-- consider making the retention check interval by default (if not
explicitly configured) a fraction of the retention time
- naming all "offsets" configs with an "offsets" prefix (now it's a mix of
singular/"offset" and plural/"offsets")


On Fri, Aug 11, 2017 at 2:01 AM, Guozhang Wang  wrote:

> +1 from me
>
> On Wed, Aug 9, 2017 at 9:40 AM, Jason Gustafson 
> wrote:
>
> > +1 on the bump to 7 days. Wanted to mention one minor point. The
> > OffsetCommit RPC still provides the ability to set the retention time
> from
> > the client, but we do not use it in the consumer. Should we consider
> adding
> > a consumer config to set this? Given the problems people had with the old
> > default, such a config would probably have gotten a fair bit of use.
> Maybe
> > it's less necessary with the new default, but there may be situations
> where
> > you don't want to keep the offsets for too long. For example, the console
> > consumer commits offsets with a generated group id. We might want to set
> a
> > low retention time to keep it from filling the offset cache with garbage
> > from such groups.
> >
>
> I agree with Jason here, but maybe itself deserves a separate KIP
> discussion.
>
>
> >
> > -Jason
> >
> > On Wed, Aug 9, 2017 at 5:24 AM, Sönke Liebau <
> > soenke.lie...@opencore.com.invalid> wrote:
> >
> > > Just had this create issues at a customer as well, +1
> > >
> > > On Wed, Aug 9, 2017 at 11:46 AM, Mickael Maison <
> > mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Yes the current default is too short, +1
> > > >
> > > > On Wed, Aug 9, 2017 at 8:56 AM, Ismael Juma 
> wrote:
> > > > > Thanks for the KIP, +1 from me.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Wed, Aug 9, 2017 at 1:24 AM, Ewen Cheslack-Postava <
> > > e...@confluent.io
> > > > >
> > > > > wrote:
> > > > >
> > > > >> Hi all,
> > > > >>
> > > > >> I posted a simple new KIP for a problem we see with a lot of
> users:
> > > > >> KIP-186: Increase offsets retention default to 7 days
> > > > >>
> > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > >> 186%3A+Increase+offsets+retention+default+to+7+days
> > > > >>
> > > > >> Note that in addition to the KIP text itself, the linked JIRA
> > already
> > > > >> existed and has a bunch of discussion on the subject.
> > > > >>
> > > > >> -Ewen
> > > > >>
> > > >
> > >
> > >
> > >
> > > --
> > > Sönke Liebau
> > > Partner
> > > Tel. +49 179 7940878
> > > OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] 2017 October release planning and release version

2017-07-19 Thread Stevo Slavić
With a 0.x version of the project, I at least found lots of unexpected
painful things acceptable.
When graduating from 0.* to x.y.z semantic versioning, IMO it should be
clearly communicated to the community whether there is a change in meaning:
what the "SLO" is, what the commitments are, and what a change in each
version segment means.

E.g. (an annotation sketch follows this list):
- APIs not labeled, or labeled as stable:
-- a change in major version is the only one that can break backward
compatibility (client APIs or behavior)
-- a change in minor version can introduce new features, but must not break
backward compatibility
-- a change in patch version is for bug fixes only
- APIs labeled as evolving can be broken in a backward-incompatible way in
any release, but are assumed less likely to be broken than unstable APIs
- APIs labeled as unstable can be broken in a backward-incompatible way in
any release: major, minor or patch
- deprecated stable APIs are treated like any other stable APIs: they can be
removed only in a major release, and are not allowed to change in a
backward-incompatible way in either a patch or a minor version release
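
A hedged sketch of such stability labels, assuming Kafka's
org.apache.kafka.common.annotation.InterfaceStability annotations; the
example classes themselves are hypothetical, purely for illustration.

import org.apache.kafka.common.annotation.InterfaceStability;

@InterfaceStability.Stable // only a major release may break this contract
public class ExampleStableApi {
    @Deprecated // deprecated-but-stable: removable only in a major release
    public void oldOperation() { }
}

@InterfaceStability.Evolving // may break in any release, but less likely than Unstable
class ExampleEvolvingApi {
    public void experimentalOperation() { }
}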

This means one should be able to upgrade the server, and recompile/deploy
apps with their clients against a new minor.patch release, with the
dependency version change being the only change needed - and there would be
no drama.

Practices/"features" like the protocol version being a parameter that
defaults to latest - so that it gets auto-updated by a dependency update
which introduces a new protocol/behavior - should not be used in public
client APIs. To switch between backward-incompatible APIs (contracts and
behaviors), ideally the user should explicitly have to change code and not
only a dependency; but at the very least it should be clearly communicated
that breaking changes are to be expected even with just a dependency update,
e.g. by giving a major version release a clear meaning. If an app's
dependency on the Kafka client library is updated to a new minor.patch on
the same major, and there's a change in behavior or API requiring an app
code change - it's a bug.

A change introduced contrary to the SLO is OK to be reported as a bug.
Everything else is an improvement or feature request.

If this were the case, and 1.0.0 were released today with the APIs as they
are now, the Scala client APIs, even though deprecated, would not break and
require refactoring with every 1.* minor/patch release; they would only be
allowed to be broken or removed in a future major release, like 2.0.0.

It should also be clear how long each version is supported - e.g. if
minor.patch guarantees that there are no backward-incompatible changes, it's
OK to file a bug only against the current major.minor.patch; the previous
major and its last minor.patch can only have patches released for a limited
time, like 1 to 3 months.

If there are changes in release cadence with the new versioning, that should
be clear too.

Kind regards,
Stevo Slavic.

On Wed, Jul 19, 2017 at 1:21 AM, Ismael Juma  wrote:

> With regards to the annotations, I think we should expect that we'll always
> have some @Evolving APIs. Even though much of the platform is mature, we'll
> continue to improve and extend it. I'm generally not a fan of @Unstable
> (since there's rarely a reason to make breaking changes in bug fix release)
> and I would not mind if we removed them from the codebase for good.
>
> Ismael
>
> On Tue, Jul 18, 2017 at 4:07 PM, Guozhang Wang  wrote:
>
> > Currently the only @unstable annotations left are on Streams and one
> class
> > of Security modules, and I think we have a good chance of removing them
> all
> > in the next release.
> >
> > We also have a few @evolving annotations on the Admin, Streams, Security
> > modules etc. And I think we can try to also eliminate as many of them as
> > possible if people feel confident about these APIs but maybe a stretch
> goal
> > to get rid of all of them.
> >
> > Guozhang
> >
> > On Tue, Jul 18, 2017 at 3:49 PM, Gwen Shapira  wrote:
> >
> > > Also fine with the change in general.
> > >
> > > As you mentioned, 1.x indicates mature APIs, compatibility and
> stability.
> > > Are we going to remove the @unstable annotations in this release?
> > >
> > > Gwen
> > >
> > > On Tue, Jul 18, 2017 at 3:43 PM Ismael Juma  wrote:
> > >
> > > > Hi Guozhang,
> > > >
> > > > Thanks for volunteering to be the release manager for the next
> release!
> > > >
> > > > I am +1 on naming the next release 1.0.0. As you said, Kafka is
> mature
> > > > enough and this will make it easier for others to understand our
> > > versioning
> > > > policy.
> > > >
> > > > A couple of minor questions inline.
> > > >
> > > > On Tue, Jul 18, 2017 at 3:36 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > > > major.minor.bugfix[.release-candidate]
> > > > >
> > > >
> > > > I think you mean major.minor.bugfix-rc (i.e. we typically use a dash
> > > > instead of dot for the RCx qualifier).
> > > >
> > > > >
> > > > > How do people feel about 1.0.0.x as the next Kafka version?
> > > >
> > > >
> > > > Do you mean 1.0.0?
> > > >
> > > > Ismael
> 

Re: [VOTE] 0.10.2.1 RC2

2017-04-21 Thread Stevo Slavić
Please consider including https://issues.apache.org/jira/browse/KAFKA-4814
in the new RC.

Kind regards,
Stevo Slavic.

On Fri, Apr 21, 2017 at 12:43 AM, Gwen Shapira  wrote:

> This is what happens when you do a last-minute merge :(
>
> Will roll a new RC tomorrow morning. Please find all the bugs tonight so we
> wouldn't need another RC...
>
> On Thu, Apr 20, 2017 at 11:35 AM, Jason Gustafson 
> wrote:
>
> > Hey Gwen,
> >
> > Eno Thereska found a blocking issue here:
> > https://issues.apache.org/jira/browse/KAFKA-5097. Sorry to say, but we
> > probably need another RC.
> >
> > -Jason
> >
> > On Wed, Apr 19, 2017 at 8:04 AM, Swen Moczarski <
> swen.moczar...@gmail.com>
> > wrote:
> >
> > > Hi Gwen,
> > > thanks for the new release candidate. Did a quick test, used the RC2 in
> > my
> > > recent project on client side, integration test against server version
> > > 0.10.1.1 worked well.
> > >
> > > +1 (non-binding)
> > >
> > > Regards,
> > > Swen
> > >
> > > 2017-04-19 16:26 GMT+02:00 Gwen Shapira :
> > >
> > > > Oops, good catch. I think we mislabeled it. Since it is in the
> release
> > > > source/binaries, I'll track it down and just re-generate the release
> > > notes.
> > > >
> > > > Gwen
> > > >
> > > > On Tue, Apr 18, 2017 at 11:38 AM, Edoardo Comar 
> > > wrote:
> > > >
> > > > > Thanks Gwen
> > > > >  KAFKA-5075 is not included in the
> > > > > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/
> > RELEASE_NOTES.html
> > > > >
> > > > > --
> > > > > Edoardo Comar
> > > > > IBM MessageHub
> > > > > eco...@uk.ibm.com
> > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > >
> > > > > IBM United Kingdom Limited Registered in England and Wales with
> > number
> > > > > 741598 Registered office: PO Box 41, North Harbour, Portsmouth,
> > Hants.
> > > > PO6
> > > > > 3AU
> > > > >
> > > > >
> > > > >
> > > > > From:   Gwen Shapira 
> > > > > To: dev@kafka.apache.org, Users ,
> > > Alexander
> > > > > Ayars 
> > > > > Date:   18/04/2017 15:59
> > > > > Subject:[VOTE] 0.10.2.1 RC2
> > > > >
> > > > >
> > > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the third candidate for release of Apache Kafka 0.10.2.1.
> > > > >
> > > > > It is a bug fix release, so we have lots of bug fixes, some super
> > > > > important.
> > > > >
> > > > > Release notes for the 0.10.2.1 release:
> > > > > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/
> > RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by Friday, 8am PST. ***
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > http://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > > https://repository.apache.org/content/groups/staging/
> > > > >
> > > > > * Javadoc:
> > > > > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc2/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> > > > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > > > > dea3da5b31cc310974685a8bbccc34a2ec2ac5c8
> > > > >
> > > > >
> > > > >
> > > > > * Documentation:
> > > > > http://kafka.apache.org/0102/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > http://kafka.apache.org/0102/protocol.html
> > > > >
> > > > > /**
> > > > >
> > > > > Your help in validating this bugfix release is super valuable, so
> > > > > please take the time to test and vote!
> > > > >
> > > > > Suggested tests:
> > > > >  * Grab the source archive and make sure it compiles
> > > > >  * Grab one of the binary distros and run the quickstarts against
> > them
> > > > >  * Extract and verify one of the site docs jars
> > > > >  * Build a sample against jars in the staging repo
> > > > >  * Validate GPG signatures on at least one file
> > > > >  * Validate the javadocs look ok
> > > > >  * The 0.10.2 documentation was updated for this bugfix release
> > > > > (especially upgrade, streams and connect portions) - please make
> sure
> > > > > it looks ok: http://kafka.apache.org/documentation.html
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Gwen
> > > > >
> > > > >
> > > > >
> > > > > Unless stated otherwise above:
> > > > > IBM United Kingdom Limited - Registered in England and Wales with
> > > number
> > > > > 741598.
> > > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> > PO6
> > > > 3AU
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter  | blog
> > > > 

Re: Apache Kafka Docker official image

2017-02-16 Thread Stevo Slavić
For an official Apache Kafka Docker image, I'd expect it to be published by
the Apache Kafka project, from a Dockerfile and related resources all living
in Kafka's sources; like any official Kafka source/resource, those would be
subject to the Apache Kafka project's community processes and practices.

On Thu, Feb 16, 2017 at 9:24 PM, Gwen Shapira  wrote:

> I'm not sure what "official image" means here...
>
> An image that gets tested by the Apache Kafka system tests? An image
> that the PMC votes on as part of the release process? Wouldn't the
> Apache official image need to be hosted on the Apache repository?
>
> I'd love to know more on how other Apache projects manage an official
> docker image.
>
> Gwen
>
> On Thu, Feb 16, 2017 at 8:04 AM, Gianluca Privitera
>  wrote:
> > Hi,
> > I’m currently proposing an official image for Apache Kafka in the Docker
> library ( https://github.com/docker-library/official-images/pull/2627 <
> https://github.com/docker-library/official-images/pull/2627> ).
> > I wanted to know if someone from Kafka upstream is interested in taking
> over or you are ok with me being the maintainer of the image.
> >
> > Let me know so I can speed up the process of the image approval.
> >
> > Thanks
> >
> > Gianluca Privitera
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [DISCUSS] KIP-115: Enforce offsets.topic.replication.factor

2017-01-31 Thread Stevo Slavić
t; > > > attempts enforcement only upon topic creation and I think anything
> else
> > > > should be beyond the scope of this KIP.
> > > >
> > > > The long answer:
> > > > Mismatch between existing RF and the "offsets.topic.replication.
> > factor"
> > > > config happens with:
> > > > a. topic creation paths 3-5 as defined in the KIP if the size of the
> > > > replicas set resulting from AdminUtils != "offsets.topic.replication.
> > > > factor"
> > > > b. topic creation path 6
> > > > c. a config change to the broker's "offsets.topic.replication.
> factor"
> > > > d. partition reassignments that expand the RF
> > > >
> > > > For all of these scenarios, I believe it all boils down to the intent
> > of
> > > > the imperfectly named "offsets.topic.replication.factor" and
> > > > "default.replication.factor" configs. These configs really only
> > represent
> > > > the RF to be used upon auto topic creation by the broker and are
> never
> > > > referenced anywhere else, whether it's "offsets.topic.replication.
> > > factor"
> > > > for the __consumer_offsets topic or "default.replication.factor" for
> > any
> > > > other topic.
> > > >
> > > > I think any RF mismatch after topic creation is beyond the scope of
> > this
> > > > discussion since the configs anyways were not intended to enforce RF
> > > > anywhere other than upon auto topic creation by the broker.
> > > >
> > > > Regarding Stevo's comment:
> > > > > On Thu, Jan 26, 2017 at 4:26 AM, Stevo Slavić <ssla...@gmail.com>
> > > wrote:
> > > > > test cluster. If this problem of offsets.topic.replication.factor
> > not
> > > > being
> > > > > enforced others also observed only in their tests only, than I
> don't
> > > like
> > > > > the KIP proposed change, of setting offsets.topic.replication.
> factor
> > > to
> > > > 1
> > > > > by default. I understand backward compatibility goals of this, but
> I
> > > can
> > > > > imagine late discovered production issues as consequences of this
> > > change.
> > > >
> > > > It's worth noting that the KafkaConfig default for
> > > > "offsets.topic.replication.factor" is still 3. This KIP merely
> changes
> > > the
> > > > config/server.properties default to 1 so that the quickstart
> > > > "./bin/kafka-server-start.sh config/server.properties" continues to
> run
> > > > smoothly.
> > > >
> > > > On Thu, Jan 26, 2017 at 9:34 AM, Colin McCabe <cmcc...@apache.org>
> > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > +1 (non-binding) for KIP-115.
> > > > >
> > > > > On Thu, Jan 26, 2017, at 04:26, Stevo Slavić wrote:
> > > > > > If I understood well, this KIP is trying to solve for the problem
> > of
> > > > > > offsets.topic.replication.factor not being enforced,
> particularly
> > in
> > > > > > context of  "when you have clients or tooling running as the
> > cluster
> > > is
> > > > > > getting setup". Assuming that this problem was observed in
> > > production,
> > > > so
> > > > > > in non-testing only conditions, would it make sense to introduce
> > > > > > additional
> > > > > > property - min number of alive brokers before offsets topic is
> > > allowed
> > > > to
> > > > > > be created?
> > > > >
> > > > > It's an interesting idea, but... is there a user use-case for a
> > > property
> > > > > like this?  I'm having a hard time thinking of one, but maybe I
> > missed
> > > > > something.
> > > > >
> > > > > cheers,
> > > > > Colin
> > > > >
> > > > > >
> > > > > > Currently offsets.topic.replication.factor is used for that
> > purpose,
> > > > so
> > > > > > with offsets.topic.replication.factor set to 3 it's enough to
> have
> > > > just
> > > > > 3
> > > > > > brokers up for offsets topic to be created. Then all

Re: [DISCUSS] KIP-115: Enforce offsets.topic.replication.factor

2017-01-26 Thread Stevo Slavić
If I understood well, this KIP is trying to solve the problem of
offsets.topic.replication.factor not being enforced, particularly in the
context of "when you have clients or tooling running as the cluster is
getting set up". Assuming that this problem was observed in production, so
in non-testing conditions, would it make sense to introduce an additional
property - the min number of alive brokers before the offsets topic is
allowed to be created?

Currently offsets.topic.replication.factor is used for that purpose, so with
offsets.topic.replication.factor set to 3 it's enough to have just 3 brokers
up for the offsets topic to be created. Then all replicas of all (by default
50) partitions of this topic would be spread out over just these 3 brokers,
while eventually the entire cluster might be much larger in size and would
benefit from a wider spread of consumer offsets topic partition leadership.

One can achieve a wider spread later, manually. But that would first have to
be detected, and then one would have to use the provided CLI/scripts to
change the replica assignment. IMO it would be better if it were possible to
configure the desired spread, even if just indirectly through configuring a
min number of alive brokers (see the sketch below). If not overridden in
server.properties, this new property can default to
offsets.topic.replication.factor.
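
A hedged sketch of that gating logic; the property name
"offsets.topic.min.alive.brokers" is hypothetical (only
"offsets.topic.replication.factor" is a real broker config), and the class
is illustrative rather than actual broker code.

import java.util.Properties;

public class OffsetsTopicGateSketch {
    // Allow creating the offsets topic only once enough brokers are alive.
    static boolean mayCreateOffsetsTopic(Properties brokerConfig, int aliveBrokers) {
        int rf = Integer.parseInt(
                brokerConfig.getProperty("offsets.topic.replication.factor", "3"));
        // proposed: default the new property to the replication factor if unset
        int minAlive = Integer.parseInt(
                brokerConfig.getProperty("offsets.topic.min.alive.brokers",
                        Integer.toString(rf)));
        return aliveBrokers >= Math.max(rf, minAlive);
    }
}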

I've been bitten by the problem of offsets.topic.replication.factor not
being enforced, but only in testing. In integration tests it was almost
unpredictable when the offsets topic was ready and the test cluster
initialized, so I would get lots of false failures and unstable tests;
eventually I got to predictable, deterministic test behavior and found ways
to fully initialize the test cluster. If others have observed this problem
of offsets.topic.replication.factor not being enforced only in their tests
as well, then I don't like the KIP's proposed change of setting
offsets.topic.replication.factor to 1 by default. I understand the backward
compatibility goals of this, but I can imagine late-discovered production
issues as consequences of this change. So I wouldn't like to trade off
production issue probability for testing convenience.

The current Kafka documentation has a nice note about
offsets.topic.replication.factor and the related behavior. A new note about
the new default would have to be a warning in bold and red in the docs, and
every broker should output a proper warning in its log if
offsets.topic.replication.factor is left at the newly proposed default of 1.

Kind regards,
Stevo Slavic.

On Thu, Jan 26, 2017 at 8:43 AM, James Cheng  wrote:

>
> > On Jan 25, 2017, at 9:26 PM, Joel Koshy  wrote:
> >
> > already voted, but one thing worth considering (since this KIP speaks of
> > *enforcement*) is desired behavior if the topic already exists and the
> > config != existing RF.
> >
>
> Yeah, I'm curious about this too.
>
> -James
>
> > On Wed, Jan 25, 2017 at 4:30 PM, Dong Lin  wrote:
> >
> >> +1
> >>
> >> On Wed, Jan 25, 2017 at 4:22 PM, Ismael Juma  wrote:
> >>
> >>> An important question is if this needs to wait for a major release or
> >> not.
> >>>
> >>> Ismael
> >>>
> >>> On Thu, Jan 26, 2017 at 12:19 AM, Ismael Juma 
> wrote:
> >>>
>  +1 from me too.
> 
>  Ismael
> 
>  On Thu, Jan 26, 2017 at 12:07 AM, Ewen Cheslack-Postava <
> >>> e...@confluent.io
> > wrote:
> 
> > +1
> >
> > Since this is an unusual one, I think it's worth pointing out that
> the
> >>> KIP
> > notes it is really a bug fix, but since it has compatibility
> >>> implications
> > the KIP was worth it. It was a sort of intentional bug, but confusing
> >>> and
> > dangerous.
> >
> > Seems important to fix this ASAP since people are hitting this in
> >>> practice
> > and would have to go out of their way to set up monitoring to catch
> >> the
> > issue.
> >
> > -Ewen
> >
> > On Wed, Jan 25, 2017 at 4:02 PM, Jason Gustafson  >
> > wrote:
> >
> >> +1 from me. The current behavior seems both surprising and
> >> dangerous.
> >>
> >> -Jason
> >>
> >> On Wed, Jan 25, 2017 at 3:58 PM, Onur Karaman <
> >> onurkaraman.apa...@gmail.com>
> >> wrote:
> >>
> >>> Hey everyone.
> >>>
> >>> I made a bug-fix KIP-115 to enforce offsets.topic.replication.
> >>> factor:
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> 115%3A+Enforce+offsets.topic.replication.factor
> >>>
> >>> Comments are welcome.
> >>>
> >>> - Onur
> >>>
> >>
> >
> 
> 
> >>>
> >>
>
>


Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-11 Thread Stevo Slavić
+1 (non-binding), and for deprecating it ASAP. It's already de facto
deprecated and unsupported - new features and bug fixes end up only in the
new clients API - so it would be fair to communicate clearly to users of the
old consumer API that it is deprecated, that further or new use is
discouraged, and that whoever continues to use it, or especially decides to
start using it, does so at their own risk. Deprecation is just a
recommendation.

I wish SimpleConsumer had never been part of the public API.
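
A hedged illustration of "deprecation is just a recommendation" - the
oldFetch method here is hypothetical, standing in for the old consumer API:

public class DeprecationDemo {

    @Deprecated // callers merely get a compile-time warning
    static void oldFetch() {
        System.out.println("still works");
    }

    public static void main(String[] args) {
        oldFetch(); // compiles and runs fine; javac only warns about it
    }
}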

On Thu, Jan 12, 2017 at 12:24 AM, Ismael Juma  wrote:

> Ewen,
>
> I think a policy of giving it a minimum of one year between deprecation and
> removal for this case seems reasonable.
>
> Ismael
>
> On Wed, Jan 11, 2017 at 5:45 AM, Ewen Cheslack-Postava 
> wrote:
>
> > Ismael,
> >
> > Is that regardless of whether it ends up being a major/minor version?
> i.e.
> > given the way we've phrased (and I think started to follow through on)
> > deprecations, if the next releases were 0.10.3.0 and then 0.11.0.0, the
> > deprecation period would only be one release. That would be a tiny window
> > for a huge deprecation. If the next release ended up 0.11.0.0, then we'd
> > wait (presumably multiple releases until) 0.12.0.0 which could be
> something
> > like a year.
> >
> > I think we should deprecate the APIs ASAP since they are effectively
> > unmaintained (or very minimally maintained at best). And I'd actually
> even
> > like to do so in 0.10.2.0.
> >
> > Perhaps we should consider a slightly customized policy instead? Major
> > deprecations like this might require something slightly different. For
> > example, I think a KIP + release notes that explain we're marking the
> > consumer as deprecated now but it will continue to exist for at least 1
> > year (regardless of release versions) and will be removed in the next
> major
> > release *after* 1 year would give users plenty of warning and not result
> in
> > any weirdness if a major version bump happens relatively soon.
> >
> > (Sorry to drag this into the VOTE thread... If we can agree on that
> > deprecation/removal schedule, I'd love to still get this in by feature
> > freeze, especially since the patch is presumably trivial.)
> >
> > -Ewen
> >
> > On Tue, Jan 10, 2017 at 11:58 AM, Gwen Shapira 
> wrote:
> >
> > > +1
> > >
> > > On Mon, Jan 9, 2017 at 8:58 AM, Vahid S Hashemian
> > >  wrote:
> > > > Happy Monday,
> > > >
> > > > I'd like to thank everyone who participated in the discussion around
> > this
> > > > KIP and shared their opinion.
> > > >
> > > > The only concern that was raised was not having a defined migration
> > plan
> > > > yet for existing users of the old consumer.
> > > > I hope that responses to this concern (on the discussion thread) have
> > > been
> > > > satisfactory.
> > > >
> > > > Given the short time we have until the 0.10.2.0 cut-off date I'd like
> > to
> > > > start voting on this KIP.
> > > >
> > > > KIP:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 109%3A+Old+Consumer+Deprecation
> > > > Discussion thread:
> > > > https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
> > > >
> > > > Thanks.
> > > > --Vahid
> > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Stevo Slavić
+1 (non-binding)
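
For reference, a hedged sketch of the default being flipped. Note the actual
broker property is spelled unclean.leader.election.enable (despite the
"enabled" spelling in the KIP title); the Java wrapper is purely
illustrative.

import java.util.Properties;

public class UncleanElectionDefaultSketch {
    public static void main(String[] args) {
        Properties broker = new Properties();
        // old default "true": prefer availability, risking loss of committed
        // messages when an out-of-sync replica is elected leader;
        // new default per KIP-106, "false": prefer consistency over availability
        broker.setProperty("unclean.leader.election.enable", "false");
        broker.list(System.out);
    }
}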

On Thu, Jan 12, 2017 at 12:11 AM, Guozhang Wang  wrote:

> +1
>
> On Wed, Jan 11, 2017 at 12:09 PM, Jeff Widman  wrote:
>
> > +1 nonbinding. We were bit by this in a production environment.
> >
> > On Wed, Jan 11, 2017 at 11:42 AM, Ian Wrigley  wrote:
> >
> > > +1 (non-binding)
> > >
> > > > On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> > > >
> > > > +1
> > > >
> > > > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford 
> > wrote:
> > > >
> > > >> Looks like there was a good consensus on the discuss thread for
> > KIP-106
> > > so
> > > >> lets move to a vote.
> > > >>
> > > >> Please chime in if you would like to change the default for
> > > >> unclean.leader.election.enabled from true to false.
> > > >>
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/%
> > > >> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> > > >> election.enabled+from+True+to+False
> > > >>
> > > >> B
> > > >>
> > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [RELEASE UPDATE] Postponing the next release candidates and canceling current vote

2016-04-04 Thread Stevo Slavić
Sad to see the release got postponed. I was looking forward to rack-aware
replica assignment, to have HA support out of the box. It is hard to follow
all the different discussions and see what actually caused the release to be
postponed. Was the rack-aware feature one of the controversial ones? If not,
would it be possible to get it, and other non-controversial new features and
fixes, shipped sooner?

Kind regards,
Stevo Slavic.

On Mon, Apr 4, 2016 at 5:07 AM, Ewen Cheslack-Postava 
wrote:

> Just wanted to throw it out there that still double committing when the
> committer remembers to do so is useful -- daily updates on unit tests (as
> flaky as they can be) and system tests are still useful to have. Better to
> catch any branch-specific issues as early as possible.
>
> -Ewen
>
> On Fri, Apr 1, 2016 at 1:06 PM, Guozhang Wang  wrote:
>
> > Sounds good.
> >
> > On Fri, Apr 1, 2016 at 12:01 PM, Gwen Shapira  wrote:
> >
> > > I like the alternative. I'll be happy to do the weekly merges.
> > >
> > > Would be happy to hear other opinions.
> > >
> > > Gwen
> > >
> > > On Fri, Apr 1, 2016 at 11:55 AM, Ismael Juma 
> wrote:
> > >
> > > > My concern is that this is error-prone and things can be missed (it
> > > > happened during the 0.9.0.0 release for example). It's a cost worth
> > > paying
> > > > when stabilising but not so clear when accepting major new features.
> > > >
> > > > One alternative would be to just commit to trunk and merge trunk to
> > > 0.10.0
> > > > weekly or something along those lines.
> > > >
> > > > Guozhang, we could delete the branch, but users could be relying on
> it
> > > and
> > > > hence I am not sure we should do that.
> > > >
> > > > Ismael
> > > > On 1 Apr 2016 19:44, "Gwen Shapira"  wrote:
> > > >
> > > > > I prefer keeping the current branch and double-committing for three
> > > > weeks.
> > > > > Not fun, but not end-of-world hard.
> > > > >
> > > > > Unless committers object?
> > > > >
> > > > > On Fri, Apr 1, 2016 at 11:40 AM, Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > > > Ismael,
> > > > > >
> > > > > > Shall we "delete" the 0.10.0 branch after going through its
> commits
> > > and
> > > > > > making sure all of them are already in trunk then? I think it is
> > > doable
> > > > > in
> > > > > > github?
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Fri, Apr 1, 2016 at 11:24 AM, Jason Gustafson <
> > ja...@confluent.io
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hey Gwen,
> > > > > > >
> > > > > > > KIP-52 would be nice to get in as well. It's a small feature,
> but
> > > > > really
> > > > > > > helpful for Connect users. A patch for the first half is
> already
> > > > > > available,
> > > > > > > though it may need adjustment depending on the discussion.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Jason
> > > > > > >
> > > > > > > On Fri, Apr 1, 2016 at 11:09 AM, Ismael Juma <
> ism...@juma.me.uk>
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi Gwen,
> > > > > > > >
> > > > > > > > What is the plan for the 0.10.0 branch? Double-committing
> > seems a
> > > > bit
> > > > > > > > wasteful given this change.
> > > > > > > >
> > > > > > > > Ismael
> > > > > > > >
> > > > > > > > On 1 Apr 2016 18:54, "Gwen Shapira" 
> wrote:
> > > > > > > >
> > > > > > > > > Hey Team Kafka,
> > > > > > > > >
> > > > > > > > > Per community discussion, I will not be rolling out a new
> > > > candidate
> > > > > > on
> > > > > > > > > Monday.
> > > > > > > > >
> > > > > > > > > I will roll out the next release candidate in three weeks:
> > > > Friday,
> > > > > > > April
> > > > > > > > > 22.
> > > > > > > > > We can spend Kafka Summit discussing the quality of the
> > release
> > > > :)
> > > > > > > > >
> > > > > > > > > The goal is to get it the following improvements:
> > > > > > > > > KIP-4-metadata-update
> > > > > > > > > KIP-35 (version protocol)
> > > > > > > > > KIP-33 (time-based indexes)
> > > > > > > > > KIP-43 (flexible SASL)
> > > > > > > > > KIP-50 (Tiny ACL API change)
> > > > > > > > > KIP-51 (small KafkaConnect API change)
> > > > > > > > >
> > > > > > > > > Committers and contributors: Please stay on top of reviews
> > and
> > > > > > > > discussions.
> > > > > > > > > Lets keep the awesome forward momentum we have going!
> > > > > > > > >
> > > > > > > > > Yours,
> > > > > > > > >
> > > > > > > > > Gwen Shapira
> > > > > > > > > Temporary Release Manager
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> Thanks,
> Ewen
>


Re: [DISCUSS] KIP-30 Allow for brokers to have plug-able consensus and meta data storage sub systems

2016-03-09 Thread Stevo Slavić
Imagine a small team with a very limited budget and operational capacity
that wants to use Kafka, and needs a coordination service for other things
too (e.g. service discovery). Restricting that choice to just ZooKeeper,
because Kafka supports ZooKeeper only, limits such teams too hard. It would
be better if Kafka's preferred choice were not the only choice. Maybe an
abstraction - Kafka's own, or through a 3rd party like
https://github.com/spring-cloud/spring-cloud-cluster - would give Kafka
users more options (a sketch of such an abstraction follows below).
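
A hedged sketch of the kind of coordination abstraction this would take; all
names are illustrative, not an actual Kafka or KIP-30 API.

public interface CoordinationService {

    /** Campaign for leadership of a resource; the callback fires on election. */
    void electLeader(String resourcePath, Runnable onElected);

    /** Store cluster metadata (e.g. topic configs, ISR state) under a path. */
    void putMetadata(String path, byte[] value);

    /** Read cluster metadata previously stored under a path. */
    byte[] getMetadata(String path);

    /** Watch a path so brokers learn about membership/metadata changes. */
    void watch(String path, Runnable onChange);
}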

Kind regards,
Stevo Slavic.

On Fri, Dec 4, 2015 at 3:16 AM, Todd Palino  wrote:

> Come on, Jay. Anyone can get up in the morning and run if they have the
> willpower :)
>
> Granted I do have some bias here, since we have tooling in place that makes
> deployments and monitoring easier. But even at that, I would not say
> Zookeeper is difficult to run or monitor. I’m not denying that there are
> complaints, but my experience has always been that complaints of that type
> are either related more to the specific way the dependency is implemented
> (that is, Kafka’s not using it correctly or is otherwise generating errors
> that say “Zookeeper” in them), or it’s related to a bias against the
> dependency (I don’t like Zookeeper, I have XYZ installed, etc.).
>
> The point that for a small installation Zookeeper can represent a large
> footprint is well made. I wonder how many of these people are being
> ill-served by recommendations from people like me that you should not run
> Kafka and Zookeeper on the same systems. Sure, we’d never do that at
> LinkedIn just because we are looking for high performance and a few more
> systems isn’t a big deal. But for a lower performance environment, it’s
> really not a problem to colocate the applications.
>
> As far as the controller goes, I’m perfectly willing to accept that my
> desire to get rid of it is from my bias against it because of how many
> problems we’ve run into with that code. We can probably both agree that the
> controller code needs an overhaul regardless. It’s stood up well, but as
> the clusters get larger it’s definitely shows cracks.
>
> -Todd
>
>
> On Thu, Dec 3, 2015 at 11:37 AM, Jay Kreps  wrote:
>
> > Hey Todd,
> >
> > I actually agree on both counts.
> >
> > I would summarize the first comment as "Zookeeper is not hard to
> > operationalize if you are Todd Palino"--also in that category of
> > things that are not hard: running 13 miles at 5:00 am. Basically I
> > totally agree that ZK is now a solved problem at LinkedIn. :-)
> >
> > Empirically, though, it is really hard for a lot of our users. It is
> > one of the largest sources of problems we see in people's clusters. We
> > could perhaps get part of the way by improving our zk usage and
> > documentation, and it is certainly the case that we could potentially
> > make things worse in trying to make them better, but I don't think
> > that is the same as saying there isn't a problem.
> >
> > I totally agree with your second comment. In some sense what I was
> > sketching out is just replacing ZK. But part of the design of Kafka
> > was because we already had ZK. So there might be a way to further
> > rationalize the metadata log and the data logs if you kind of went
> > back to first principles and thought about it. I don't have any idea
> > how, but I share that intuition.
> >
> > I do think having the controller, though, is quite useful. I think
> > this pattern of avoiding many rounds of consensus by just doing one
> > round to pick a leader is a good one. If you think about it Paxos =>
> > Multi-paxos is basically optimizing by lumping together consensus
> > rounds on a per message basis into a leader which then handles many
> > messages, and what Kafka does is kind of like Multi-multi-paxos in
> > that it lumps together many leadership elections into one central
> > controller election which then picks all the leaders. In some ways
> > having central decision makers seems inelegant (aren't we supposed to
> > be distributed?) but it does allow you to be both very very fast in
> > making lots of decisions (vs doing thousands of independent leadership
> > elections) and also to do things that require global knowledge (like
> > balancing leadership).
> >
> > Cheers,
> >
> > -Jay
> >
> >
> >
> > On Thu, Dec 3, 2015 at 10:05 AM, Todd Palino  wrote:
> > > This kind of discussion always puts me in mind of stories that start
> “So
> > I
> > > wrote my own encryption. How hard can it be?” :)
> > >
> > > Joking aside, I do have a few thoughts on this. First I have to echo
> > Joel’s
> > > perspective on Zookeeper. Honestly, it is one of the few applications
> we
> > > can forget about, so I have a hard time understanding pain around
> running
> > > it. You set it up, and unless you have a hardware failure to deal with,
> > > that’s it. Yes, there are ways to abusively use it, just like with any
> > > application, but Kafka is definitely not one of those use cases. I also

Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Stevo Slavić
+1 0.10.0.0

On Tue, Feb 9, 2016, 19:08 Becket Qin  wrote:

> Hi All,
>
> Next Kafka release will have several significant important new
> feature/changes such as Kafka Stream, Message Format Change, Client
> Interceptors and several new consumer API changes, etc. We feel it is
> better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
>
> We would like to see what do people think of making the next release
> 0.10.0.0.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>


Re: Replication Broken between Kafka 0.8.2.1 and 0.9 (trunk)

2015-11-06 Thread Stevo Slavić
The docs at https://kafka.apache.org/090/documentation.html#upgrade are
still mentioning 0.8.3 instead of 0.9.0. Is there a JIRA already to fix this?

On Thu, Nov 5, 2015 at 9:28 PM, Grant Henke  wrote:

> Hi Matthew,
>
> I have not read into the details of your issues but have done similar
> "rolling" upgrade testing myself. The reason replication breaks is due to
> some wire protocol changes.
>
> Just checking some preliminary things before digging in
>
>- Have you followed the upgrade steps outlined here?
>   - https://kafka.apache.org/090/documentation.html#upgrade
>- Does setting inter.broker.protocol.version=0.8.2.X resolve the issue?
>   - Note: you need to unset and restart again after all brokers are
>   upgraded.
>
> In the future KIP-35 may help alleviate the manual step of setting the
> inter.broker.protocol.version. You can read more about KIP-35 and
> participate in the discussion/design here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version
>
> Thanks,
> Grant
>
>
> On Thu, Nov 5, 2015 at 2:18 PM, Matthew Bruce 
> wrote:
>
> > Hello Kafka Devs,
> >
> > I've been testing the upgrade procedure between Kafka 0.8.2.1 and Kafka
> > 0.9.0.0 and have been having Replication issues between the two version,
> > and I was wondering if anyone was aware of this issue (I just searched
> and
> > this seems to be related to KAFKA-2750 raised yesterday ).
> >
> > I start with 3 brokers running 0.8.2.1 all that contain data (1 topic
> with
> > 10 partitions), then I shut down one of the brokers, upgrade it to 0.9.0
> > (making sure to set 'inter.broker.protocol.version=0.8.2.X' in
> > broker.properties).  Once the Broker is started I see errors like the
> > following:
> >
> > [2015-11-05 19:13:10,309] WARN [ReplicaFetcherThread-0-182050600], Error
> > in fetch kafka.server.ReplicaFetcherThread$FetchRequest@6cc18858. Possible
> cause:
> > org.apache.kafka.common.protocol.types.SchemaException: Error reading
> field
> > 'responses': Error reading field 'topic':
> java.nio.BufferUnderflowException
> > (kafka.server.ReplicaFetcherThread)
> > And
> > [2015-11-03 16:55:15,178] WARN [ReplicaFetcherThread-1-182050600], Error
> > in fetch kafka.server.ReplicaFetcherThread$FetchRequest@224388b2. Possible
> cause:
> > org.apache.kafka.common.protocol.types.SchemaException: Error reading
> field
> > 'responses': Error reading field 'partition_responses': Error reading
> field
> > 'record_set': java.lang.IllegalArgumentException
> > (kafka.server.ReplicaFetcherThread)
> >
> >
> > I've spent some time in the Kafka code, and packet captures/wireshark
> > trying to figure this out, and I believe there is an issue in
> > org.apache.kafka.clients.networkClient.java in the
> handleCompletedReceives
> > function:
> > When extracting the response body, this function is using
> > ProtoUtils.currentResponseSchema instead of ProtoUtils.ResponseSchema
> and
> > specifying the API version required by inter.broker.protocol.version.
> > Struct body = (Struct)
> > ProtoUtils.currentResponseSchema(apiKey).read(receive.payload());
> >
> > This results in errors when the newer version of a Schema
> > (FETCH_RESPONSE_V1 instead of FETCH_RESPONSE_V0) is applied against the
> > fetch response returned by the 0.8.2.1 broker
> >
> >
> > Thanks,
> > Matthew Bruce
> > mbr...@blackberry.com
> >
> >
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>
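
A hedged sketch of the version-aware decoding Matthew describes above,
assuming the 0.9-era client internals (ProtoUtils and Struct in
org.apache.kafka.common.protocol); the versioned responseSchema lookup is
the direction of the fix, not necessarily the committed patch.

import java.nio.ByteBuffer;
import org.apache.kafka.common.protocol.ProtoUtils;
import org.apache.kafka.common.protocol.types.Struct;

public class VersionedDecodeSketch {
    // Decode a broker response with the schema version that was negotiated,
    // instead of always assuming the latest (current) schema:
    static Struct parseResponse(int apiKey, int requestVersion, ByteBuffer payload) {
        // buggy form per the report above:
        //   (Struct) ProtoUtils.currentResponseSchema(apiKey).read(payload)
        return (Struct) ProtoUtils.responseSchema(apiKey, requestVersion).read(payload);
    }
}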


Re: 0.9.0.0 remaining jiras

2015-09-14 Thread Stevo Slavić
Jun,

Would be nice to have https://issues.apache.org/jira/browse/KAFKA-2106 (or,
if not that, then the related
https://issues.apache.org/jira/browse/KAFKA-1792 ) in the 0.9 release. Both
have a patch provided. If KAFKA-2106 is delivered, KAFKA-1792 may be
redundant and not needed.

For some reason KAFKA-2106 has "Affects Version/s" set to 0.9; maybe "Fix
Version/s" should be set to that value instead.

Kind regards,
Stevo Slavic.



On Mon, Sep 14, 2015 at 9:43 AM, Stevo Slavić <ssla...@gmail.com> wrote:

> Hello Jason,
>
> Maybe this answers your question:
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201509.mbox/%3CCAFc58G-UScVKrSF1kdsowQ8Y96OAaZEdiZsk40G8fwf7iToFaw%40mail.gmail.com%3E
>
> Kind regards,
> Stevo Slavic.
>
>
> On Mon, Sep 14, 2015 at 8:56 AM, Jason Rosenberg <j...@squareup.com> wrote:
>
>> Hi Jun,
>>
>> Can you clarify, will there not be a 0.8.3.0 (and instead we move straight
>> to 0.9.0.0)?
>>
>> Also, can you outline the man new features/updates for 0.9.0.0?
>>
>> Thanks,
>>
>> Jason
>>
>> On Sat, Sep 12, 2015 at 12:40 PM, Jun Rao <j...@confluent.io> wrote:
>>
>> > The following is a candidate list of jiras that we want to complete in
>> the
>> > upcoming release (0.9.0.0). Our goal is to finish at least all the
>> blockers
>> > and as many as the non-blockers possible in that list.
>> >
>> >
>> >
>> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%200.9.0.0
>> >
>> > Anything should be added/removed from this list?
>> >
>> > We are shooting to cut an 0.9.0.0 release branch in early October.
>> >
>> > Thanks,
>> >
>> > Jun
>> >
>>
>
>


Re: 0.9.0.0 remaining jiras

2015-09-14 Thread Stevo Slavić
Hello Jason,

Maybe this answers your question:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201509.mbox/%3CCAFc58G-UScVKrSF1kdsowQ8Y96OAaZEdiZsk40G8fwf7iToFaw%40mail.gmail.com%3E

Kind regards,
Stevo Slavic.


On Mon, Sep 14, 2015 at 8:56 AM, Jason Rosenberg  wrote:

> Hi Jun,
>
> Can you clarify, will there not be a 0.8.3.0 (and instead we move straight
> to 0.9.0.0)?
>
> Also, can you outline the man new features/updates for 0.9.0.0?
>
> Thanks,
>
> Jason
>
> On Sat, Sep 12, 2015 at 12:40 PM, Jun Rao  wrote:
>
> > The following is a candidate list of jiras that we want to complete in
> the
> > upcoming release (0.9.0.0). Our goal is to finish at least all the
> blockers
> > and as many as the non-blockers possible in that list.
> >
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%200.9.0.0
> >
> > Anything should be added/removed from this list?
> >
> > We are shooting to cut an 0.9.0.0 release branch in early October.
> >
> > Thanks,
> >
> > Jun
> >
>


Re: Maybe 0.8.3 should really be 0.9.0?

2015-09-09 Thread Stevo Slavić
+1 (non-binding) for 0.9

On Wed, Sep 9, 2015 at 6:41 AM, Jun Rao  wrote:

> +1 for 0.9.
>
> Thanks,
>
> Jun
>
> On Tue, Sep 8, 2015 at 3:04 PM, Ismael Juma  wrote:
>
> > +1 (non-binding) for 0.9.
> >
> > Ismael
> >
> > On Tue, Sep 8, 2015 at 10:19 AM, Gwen Shapira  wrote:
> >
> > > Hi Kafka Fans,
> > >
> > > What do you think of making the next release (the one with security,
> new
> > > consumer, quotas, etc) a 0.9.0 instead of 0.8.3?
> > >
> > > It has lots of new features, and new consumer was pretty much scoped
> for
> > > 0.9.0, so it matches our original roadmap. I feel that so many awesome
> > > features deserve a better release number.
> > >
> > > The downside is mainly some confusion (we refer to 0.8.3 in bunch of
> > > places), and noisy emails from JIRA while we change "fix version" field
> > > everywhere.
> > >
> > > Thoughts?
> > >
> >
>


Re: estimation of Kafka 0.8.3 release

2015-08-28 Thread Stevo Slavić
Response I got recently
http://mail-archives.apache.org/mod_mbox/kafka-users/201508.mbox/%3cCAFc58G8pvjH_8A0=41dzj5xpe80j+m_skpzadxnog9qxzf1...@mail.gmail.com%3e

On Fri, Aug 28, 2015, 16:46 Stefan Miklosovic mikloso...@gmail.com wrote:

 Hi,

 I am working on some project which is using Kafka heavily. Because of
 the latest API changes, I am forced to use 0.8.3-SNAPSHOT version and
 I am releasing this project on my own to my private Maven repository.

 I need to make that project public / official and I can not depend on
 a SNAPSHOT version.

 Could you please give me some time estimate as to when 0.8.3 will be
 out? It seems a very long time since 0.8.2 really.

 Thanks!

 --
 Stefan Miklosovic



Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-17 Thread Stevo Slavić
- KAFKA-2198 https://issues.apache.org/jira/browse/KAFKA-2198:
 kafka-topics.sh exits with 0 status on failures
 - KAFKA-2235 https://issues.apache.org/jira/browse/KAFKA-2235:
 LogCleaner offset map overflow
 - KAFKA-2241 https://issues.apache.org/jira/browse/KAFKA-2241:
 AbstractFetcherThread.shutdown() should not block on
 ReadableByteChannel.read(buffer)
 - KAFKA-2272 https://issues.apache.org/jira/browse/KAFKA-2272:
 listeners endpoint parsing fails if the hostname has capital letter
 - KAFKA-2345 https://issues.apache.org/jira/browse/KAFKA-2345:
 Attempt to delete a topic already marked for deletion throws
 ZkNodeExistsException
 - KAFKA-2353 https://issues.apache.org/jira/browse/KAFKA-2353:
 SocketServer.Processor should catch exception and close the socket
 properly
 in configureNewConnections.
 - KAFKA-1836 https://issues.apache.org/jira/browse/KAFKA-1836:
 metadata.fetch.timeout.ms set to zero blocks forever
 - KAFKA-2317 https://issues.apache.org/jira/browse/KAFKA-2317:
 De-register
 isrChangeNotificationListener on controller resignation
 
  Note: KAFKA-2120 https://issues.apache.org/jira/browse/KAFKA-2120 and
  KAFKA-2421 https://issues.apache.org/jira/browse/KAFKA-2421 were
  mentioned in previous emails, but are not in the list because they are
 not
  committed yet.
 
  Hope that helps the effort.
 
  Thanks,
  Grant
 
  On Mon, Aug 17, 2015 at 12:09 AM, Grant Henke ghe...@cloudera.com
 wrote:
 
  +1 to that suggestion. Though I suspect that requires a committer to do.
  Making it part of the standard commit process could work too.
  On Aug 16, 2015 11:01 PM, Gwen Shapira g...@confluent.io wrote:
 
  BTW. I think it will be great for Apache Kafka to have a 0.8.2 release
  manager whose role is to cherry-pick low-risk bug fixes into the 0.8.2
  branch and, once enough bug fixes have happened (or if sufficiently
  critical fixes happened), to roll out a new maintenance release (with
  every 3 months as a reasonable bugfix release target).
 
  This will add some predictability regarding how fast we release fixes
 for
  bugs.
 
  Gwen
 
  On Sun, Aug 16, 2015 at 8:09 PM, Jeff Holoman jholo...@cloudera.com
  wrote:
 
   +1 for the release and also including
  
   https://issues.apache.org/jira/browse/KAFKA-2114
  
   Thanks
  
   Jeff
  
   On Sun, Aug 16, 2015 at 2:51 PM, Stevo Slavić ssla...@gmail.com
  wrote:
  
+1 (non-binding) for 0.8.2.2 release
   
Would be nice to include in that release new producer resiliency
 bug
   fixes
https://issues.apache.org/jira/browse/KAFKA-1788 and
https://issues.apache.org/jira/browse/KAFKA-2120
   
On Fri, Aug 14, 2015 at 4:03 PM, Gwen Shapira g...@confluent.io
  wrote:
   
 Will be nice to include Kafka-2308 and fix two critical snappy
  issues
   in
 the maintenance release.

 Gwen
 On Aug 14, 2015 6:16 AM, Grant Henke ghe...@cloudera.com
  wrote:

  Just to clarify. Will KAFKA-2189 be the only patch in the
  release?
 
  On Fri, Aug 14, 2015 at 7:35 AM, Manikumar Reddy 
   ku...@nmsworks.co.in

  wrote:
 
   +1  for 0.8.2.2 release
  
   On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma 
  ism...@juma.me.uk
 wrote:
  
I think this is a good idea as the change is minimal on our
  side
and
 it
   has
been tested in production for some time by the reporter.
   
Best,
Ismael
   
On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao j...@confluent.io
 
   wrote:
   
 Hi, Everyone,

 Since the release of Kafka 0.8.2.1, a number of people
 have
 reported
  an
 issue with snappy compression (
 https://issues.apache.org/jira/browse/KAFKA-2189).
  Basically,
   if
  they
use
 snappy in 0.8.2.1, they will experience a 2-3X space
  increase.
The
   issue
 has since been fixed in trunk (just a snappy jar
 upgrade).
   Since
  0.8.3
   is
 still a few months away, it may make sense to do an
 0.8.2.2
release
   just
to
 fix this issue. Any objections?

 Thanks,

 Jun

   
  
 
 
 
  --
  Grant Henke
  Software Engineer | Cloudera
  gr...@cloudera.com | twitter.com/gchenke |
   linkedin.com/in/granthenke
 

   
  
  
  
   --
   Jeff Holoman
   Systems Engineer
  
 
 
 
 
  --
  Grant Henke
  Software Engineer | Cloudera
  gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
 



 --
 Grant Henke
 Software Engineer | Cloudera
 gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke



Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-16 Thread Stevo Slavić
+1 (non-binding) for 0.8.2.2 release

Would be nice to include in that release new producer resiliency bug fixes
https://issues.apache.org/jira/browse/KAFKA-1788 and
https://issues.apache.org/jira/browse/KAFKA-2120

On Fri, Aug 14, 2015 at 4:03 PM, Gwen Shapira g...@confluent.io wrote:

 Will be nice to include Kafka-2308 and fix two critical snappy issues in
 the maintenance release.

 Gwen
 On Aug 14, 2015 6:16 AM, Grant Henke ghe...@cloudera.com wrote:

  Just to clarify. Will KAFKA-2189 be the only patch in the release?
 
  On Fri, Aug 14, 2015 at 7:35 AM, Manikumar Reddy ku...@nmsworks.co.in
  wrote:
 
   +1  for 0.8.2.2 release
  
   On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma ism...@juma.me.uk
 wrote:
  
I think this is a good idea as the change is minimal on our side and
 it
   has
been tested in production for some time by the reporter.
   
Best,
Ismael
   
On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao j...@confluent.io wrote:
   
 Hi, Everyone,

 Since the release of Kafka 0.8.2.1, a number of people have
 reported
  an
 issue with snappy compression (
 https://issues.apache.org/jira/browse/KAFKA-2189). Basically, if
  they
use
 snappy in 0.8.2.1, they will experience a 2-3X space increase. The
   issue
 has since been fixed in trunk (just a snappy jar upgrade). Since
  0.8.3
   is
 still a few months away, it may make sense to do an 0.8.2.2 release
   just
to
 fix this issue. Any objections?

 Thanks,

 Jun

   
  
 
 
 
  --
  Grant Henke
  Software Engineer | Cloudera
  gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
 



Re: Official Kafka Gitter Room?

2015-07-22 Thread Stevo Slavić
On the Apache Mahout project we're using Slack as well, for release
coordination. We found that an extra Slack channel does not really fit
the Apache way - it was overused, with too many design discussions going
on there that the community at large had no access to, was not and could
not be involved in, and cannot even see the history of. This is not the
case with the user/dev mailing lists and their searchable archives.

Kind regards,
Stevo Slavic.

On Wed, Jul 22, 2015 at 11:07 AM, Ismael Juma ism...@juma.me.uk wrote:

 Hi Gwen,

 On Sun, Jul 19, 2015 at 2:26 AM, Gwen Shapira gshap...@cloudera.com
 wrote:

  So, as an experiment, I created:
  https://apachekafka.slack.com
 
  I figured we'll give it a whirl for a week or two for dev discussions,
  see how it goes and if we have activity we can add this to the website
  and announce on the lists.
 

 Are people using this? If so, please send me an invite.

 Ismael



Re: Should 0.8.3 consumers correctly function with 0.8.2 brokers?

2015-07-22 Thread Stevo Slavić
I'm getting an "Unknown api code 11" error even when both client and server
are on 0.8.3/trunk, when KafkaConsumer.subscribe(String... topics) is used.

Bug?

Kind regards,
Stevo Slavic.

On Fri, Apr 24, 2015 at 7:13 PM, Neha Narkhede n...@confluent.io wrote:

 Yes, I was clearly confused :-)

 On Fri, Apr 24, 2015 at 9:37 AM, Sean Lydon lydon.s...@gmail.com wrote:

  Thanks for the responses. Ewen is correct that I am referring to the
  *new* consumer (org.apache.kafka.clients.consumer.KafkaConsumer).
 
  I am extending the consumer to allow my applications more control over
  committed offsets.  I really want to get away from zookeeper (so using
  the offset storage), and rebalancing is something I haven't really
  needed to tackle in an automated/seamless way.  Either way, I'll hold
  off going further down this road until there is more interest.
 
  @Gwen
  I set up a single consumer without partition.assignment.strategy or
  rebalance.callback.class.  I was unable to subscribe to just a topic
  ("Unknown api code 11" on the broker), but I could subscribe to a
  TopicPartition.  This makes sense, as I would need to handle rebalancing
  outside the consumer.  Things functioned as expected (well, I have an
  additional minor fix to code from KAFKA-2121), and the only exceptions
  on the broker were due to closing consumers (which I have become
  accustomed to).  My tests are specific to my extended version of the
  consumer, but they basically do a little writing and reading with
  different serde classes with application-controlled commits (similar
  to onSuccess and onFailure after each record, but with tolerance for
  out-of-order acknowledgements).
 
  If you are interested, here is the patch of the hack against trunk.
 
  On Thu, Apr 23, 2015 at 10:27 PM, Ewen Cheslack-Postava
  e...@confluent.io wrote:
   @Neha I think you're mixing up the 0.8.1/0.8.2 updates and the 0.8.2/0.8.3
   that's being discussed here?
  
   I think the original question was about using the *new* consumer (clients
   consumer) with 0.8.2. Gwen's right, it will use features not even
   implemented in the broker in trunk yet, let alone in 0.8.2.
  
   I don't think the enable.commit.downgrade type option, or supporting the
   old protocol with the new consumer at all, makes much sense. You'd end up
   with some weird hybrid of simple and high-level consumers -- you could use
   offset storage, but you'd have to manage rebalancing yourself since none
   of the coordinator support would be there.
  
  
   On Thu, Apr 23, 2015 at 9:22 PM, Neha Narkhede n...@confluent.io
  wrote:
  
    My understanding is that ideally the 0.8.3 consumer should work with an
    0.8.2 broker if the offset commit config was set to zookeeper.
   
    The only thing that might not work is offset commit to Kafka, which
    makes sense since the 0.8.2 broker does not support Kafka-based offset
    management.
   
    If we broke all kinds of offset commits, then it seems like a
    regression, no?
  
   On Thu, Apr 23, 2015 at 7:26 PM, Gwen Shapira gshap...@cloudera.com
   wrote:
  
I didn't think the 0.8.3 consumer would ever be able to talk to an 0.8.2
broker... there are some essential pieces that are missing in 0.8.2
(Coordinator, Heartbeat, etc.).
Maybe I'm missing something. It would be nice if this worked :)
   
Mind sharing what / how you tested? Were there no errors in the broker
logs after your fix?
   
On Thu, Apr 23, 2015 at 5:37 PM, Sean Lydon lydon.s...@gmail.com wrote:
 Currently the clients consumer (trunk) sends offset commit requests of
 version 2.  The 0.8.2 brokers fail to handle this particular request
 with a:

 java.lang.AssertionError: assertion failed: Version 2 is invalid for
 OffsetCommitRequest. Valid versions are 0 or 1.

 I was able to make this work via a forceful downgrade of this
 particular request, but I would like some feedback on whether an
 enable.commit.downgrade configuration would be a tolerable method to
 allow 0.8.3 consumers to interact with 0.8.2 brokers.  I'm also
 interested in whether this is even a goal worth pursuing.

 Thanks,
 Sean
   
  
  
  
   --
   Thanks,
   Neha
  
  
  
  
   --
   Thanks,
   Ewen
 



 --
 Thanks,
 Neha



Re: Should 0.8.3 consumers correctly function with 0.8.2 brokers?

2015-07-22 Thread Stevo Slavić
Jiangjie,

It seems I was misunderstood; KafkaConsumer.poll after subscribing to a topic
via KafkaConsumer.subscribe(String... topics) does not work ("Unknown api
code 11" error), even with both client and broker on the latest 0.8.3/trunk.
I will try to create a failing test and open a bug report.
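
In the meantime, explicit partition subscription, which avoids the
coordinator request path (and is what worked for Sean against 0.8.2
brokers), does work. A minimal sketch against the trunk-era new-consumer
API; the config keys and exact method names shown here are assumptions and
were still in flux at the time:

import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object AssignedPartitionConsumer extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // assumed local broker
  props.put("group.id", "example-group")
  props.put("key.deserializer",
    "org.apache.kafka.common.serialization.ByteArrayDeserializer")
  props.put("value.deserializer",
    "org.apache.kafka.common.serialization.ByteArrayDeserializer")

  val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
  // subscribe("topic") triggers coordinator requests, which is where the
  // "Unknown api code 11" error comes from; subscribing to an explicit
  // partition skips the coordinator entirely:
  consumer.subscribe(new TopicPartition("my-topic", 0))
  val records = consumer.poll(1000) // timeout in ms
  consumer.close()
}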

Kind regards,
Stevo Slavic.

On Wed, Jul 22, 2015 at 8:36 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 I don't think we have a consumer coordinator in 0.8.2 brokers, so the
 KafkaConsumer in 0.8.3 will only be able to subscribe to partitions
 explicitly. Subscribing to a topic won't work with 0.8.2 brokers.

 Jiangjie (Becket) Qin

 On Wed, Jul 22, 2015 at 4:26 AM, Stevo Slavić ssla...@gmail.com wrote:

  I'm getting Unknown api code 11 even when both client and server are
  0.8.3/trunk, when KafkaConsumer.subscribe(String... topics) is used.
 
  Bug?
 
  Kind regards,
  Stevo Slavic.
 
  On Fri, Apr 24, 2015 at 7:13 PM, Neha Narkhede n...@confluent.io
 wrote:
 
   Yes, I was clearly confused :-)
  
   On Fri, Apr 24, 2015 at 9:37 AM, Sean Lydon lydon.s...@gmail.com
  wrote:
  
Thanks for the responses. Ewen is correct that I am referring to the
*new* consumer (org.apache.kafka.clients.consumer.KafkaConsumer).
   
I am extending the consumer to allow my applications more control over
committed offsets.  I really want to get away from zookeeper (so using
the offset storage), and rebalancing is something I haven't really
needed to tackle in an automated/seamless way.  Either way, I'll hold
off going further down this road until there is more interest.

@Gwen
I set up a single consumer without partition.assignment.strategy or
rebalance.callback.class.  I was unable to subscribe to just a topic
("Unknown api code 11" on the broker), but I could subscribe to a
TopicPartition.  This makes sense, as I would need to handle rebalancing
outside the consumer.  Things functioned as expected (well, I have an
additional minor fix to code from KAFKA-2121), and the only exceptions
on the broker were due to closing consumers (which I have become
accustomed to).  My tests are specific to my extended version of the
consumer, but they basically do a little writing and reading with
different serde classes with application-controlled commits (similar
to onSuccess and onFailure after each record, but with tolerance for
out-of-order acknowledgements).
   
If you are interested, here is the patch of the hack against trunk.
   
On Thu, Apr 23, 2015 at 10:27 PM, Ewen Cheslack-Postava e...@confluent.io wrote:
 @Neha I think you're mixing up the 0.8.1/0.8.2 updates and the 0.8.2/0.8.3
 that's being discussed here?

 I think the original question was about using the *new* consumer (clients
 consumer) with 0.8.2. Gwen's right, it will use features not even
 implemented in the broker in trunk yet, let alone in 0.8.2.

 I don't think the enable.commit.downgrade type option, or supporting the
 old protocol with the new consumer at all, makes much sense. You'd end up
 with some weird hybrid of simple and high-level consumers -- you could use
 offset storage, but you'd have to manage rebalancing yourself since none
 of the coordinator support would be there.


 On Thu, Apr 23, 2015 at 9:22 PM, Neha Narkhede n...@confluent.io wrote:

  My understanding is that ideally the 0.8.3 consumer should work with an
  0.8.2 broker if the offset commit config was set to zookeeper.

  The only thing that might not work is offset commit to Kafka, which
  makes sense since the 0.8.2 broker does not support Kafka-based offset
  management.

  If we broke all kinds of offset commits, then it seems like a
  regression, no?

  On Thu, Apr 23, 2015 at 7:26 PM, Gwen Shapira gshap...@cloudera.com wrote:

   I didn't think the 0.8.3 consumer would ever be able to talk to an 0.8.2
   broker... there are some essential pieces that are missing in 0.8.2
   (Coordinator, Heartbeat, etc.).
   Maybe I'm missing something. It would be nice if this worked :)

   Mind sharing what / how you tested? Were there no errors in the broker
   logs after your fix?
 
   On Thu, Apr 23, 2015 at 5:37 PM, Sean Lydon lydon.s...@gmail.com wrote:
    Currently the clients consumer (trunk) sends offset commit requests of
    version 2.  The 0.8.2 brokers fail to handle this particular request
    with a:

    java.lang.AssertionError: assertion failed: Version 2 is invalid for
    OffsetCommitRequest. Valid versions are 0 or 1.

    I was able to make this work via a forceful downgrade of this
    particular request, but I would like some feedback on whether an
    enable.commit.downgrade configuration would be a tolerable method to
    allow 0.8.3 consumers to interact with 0.8.2 brokers.

Re: Failing kafka-trunk-git-pr builds now fixed

2015-07-20 Thread Stevo Slavić
Hello Ismael,

Can you please trigger the build for all of the currently opened pull
requests?

E.g. on my PR https://github.com/apache/kafka/pull/85 the last automatically
added comment says the build failed, while it should have been a success -
the PR includes only javadoc changes.

Kind regards,
Stevo Slavic.

On Mon, Jul 20, 2015 at 4:34 PM, Ismael Juma ism...@juma.me.uk wrote:

 Hi,

 All GitHub pull request builds were failing after we had a few successful
 ones. This should now be fixed and the same issue should not happen again.
 See the following for details:

 https://issues.apache.org/jira/browse/BUILDS-99

 Best,
 Ismael



Re: [VOTE] Drop support for Scala 2.9 for the next release

2015-07-17 Thread Stevo Slavić
+1 (non-binding)

On Fri, Jul 17, 2015 at 12:26 PM, Ismael Juma ism...@juma.me.uk wrote:

 Hi all,

 I would like to start a vote on dropping support for Scala 2.9 for the next
 release. People seemed to be in favour of the idea in previous discussions:

 * http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1
 * http://search-hadoop.com/m/uyzND1KMLNK11Rmo72

 Summary of why we should drop Scala 2.9:

 * Doubles the number of builds required from 2 to 4 (2.9.1 and 2.9.2 are
 not binary compatible).
 * Code that doesn't build with Scala 2.9 was committed to trunk weeks ago
 and no one seems to have noticed or cared (well, I filed
 https://issues.apache.org/jira/browse/KAFKA-2325). Can we really support a
 version if we don't test it?
 * New clients library is written in Java and won't be affected. It also has
 received a lot of work and it's much improved since the last release.
 * Scala 2.9 was released 4 years ago, has been unsupported for a long time,
 and most projects have dropped support for it (for example, we use a
 different version of ScalaTest for Scala 2.9).
 * Scala 2.10 introduced Futures and a few useful features like String
 interpolation and value classes (see the sketch after this list).
 * Doesn't work with Java 8 (https://issues.apache.org/jira/browse/KAFKA-2203).
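
 A minimal sketch of those Scala 2.10 features (the type is hypothetical,
 not from the Kafka codebase):

 // Value class (2.10+): type safety without a wrapper-object allocation.
 class BrokerId(val id: Int) extends AnyVal {
   override def toString = s"broker-$id" // String interpolation, also 2.10+
 }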

 The reason not to drop it is to maintain compatibility for people stuck in
 2.9 who also want to upgrade both client and broker to the next Kafka
 release.

 The vote will run for 72 hours.

 +1 (non-binding) from me.

 Best,
 Ismael



Please publish Kafka snapshots regularly to Apache snapshots repo

2015-07-16 Thread Stevo Slavić
Hello Apache Kafka comitters,

Can you please have CI or some other regular (e.g. nightly) Kafka build job,
running on Kafka trunk code, configured to also publish Kafka Maven
artifacts to the Apache snapshots repository?

E.g. the latest kafka-clients snapshot on
https://repository.apache.org/content/groups/snapshots/org/apache/kafka/kafka-clients/
is 0.8.2-SNAPSHOT.
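
For downstream builds, the point is to make something like the following
resolvable (a Gradle sketch; the trunk snapshot version shown is an
assumption):

repositories {
    // Apache's snapshot repository, where nightly artifacts would land
    maven { url 'https://repository.apache.org/content/groups/snapshots' }
}
dependencies {
    compile 'org.apache.kafka:kafka-clients:0.8.3-SNAPSHOT' // assumed trunk version
}

That would let downstream projects test against trunk without building
Kafka locally.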

Kind regards,
Stevo Slavic.


Re: [jira] [Commented] (KAFKA-2132) Move Log4J appender to a separate module

2015-07-07 Thread Stevo Slavić
Hello Jun,

I can easily reproduce the issue with the previous commit (the RAT one).
While on the latest trunk branch:

$ git checkout HEAD^                          # step back to the previous commit
$ gradle clean                                # remove old build output
$ gradle copyDependantLibs                    # repopulate dependant-libs
$ ls -lart core/build/dependant-libs-2.10.5/  # inspect the copied jars

lists slf4j-api-1.7.6.jar and slf4j-api-1.6.1.jar

Not sure exactly which modification in the last commit did it, but the
issue is no longer there as of that commit.

Thanks!

Kind regards,
Stevo Slavic.

On Tue, Jul 7, 2015 at 7:10 AM, Jun Rao j...@confluent.io wrote:

 Stevo,

 I don't see duplicated slf4j-log4j12 jars under core/build/dependant-libs
 after a clean build in trunk. If this is still an issue, could you file a
 jira and describe how to reproduce this?

 Thanks,

 Jun

 On Fri, Jun 26, 2015 at 2:19 PM, Stevo Slavić ssla...@gmail.com wrote:

  Are the changes for the KAFKA-2132 ticket also supposed to fix the bug
  that the core dependent libraries (core/build/dependant-libs), for all
  supported Scala versions, contain two versions of slf4j-log4j12
  (slf4j-log4j12-1.6.1.jar leaking from the zookeeper 3.4.6 dependency, and
  the test-scoped slf4j-log4j12-1.7.6.jar dependency - the latter is
  explicitly added for some reason in the copyDependantLibs task, but it
  does not override the slf4j-log4j12 leak from zookeeper)?
 
  It's the source of annoying:
 
  SLF4J: Class path contains multiple SLF4J bindings.
  SLF4J: Found binding in
 
 
 [jar:file:/Users/foo/kafka/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in
 
 
 [jar:file:/Users/foo/kafka/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
  explanation.
  SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 
  whenever a script like bin/kafka-topics.sh is run.
 
  Or should separate ticket be filed for this issue?
 
  Kind regard,
  Stevo Slavic.
 
  On Wed, Jun 24, 2015 at 7:26 PM, Ashish K Singh (JIRA) j...@apache.org
  wrote:
 
  
   [
  
 
 https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599778#comment-14599778
   ]
  
   Ashish K Singh commented on KAFKA-2132:
   ---
  
   Updated reviewboard https://reviews.apache.org/r/33614/
against branch trunk
  
Move Log4J appender to a separate module

   
Key: KAFKA-2132
URL:
 https://issues.apache.org/jira/browse/KAFKA-2132
Project: Kafka
 Issue Type: Improvement
   Reporter: Gwen Shapira
   Assignee: Ashish K Singh
Attachments: KAFKA-2132.patch,
   KAFKA-2132_2015-04-27_19:59:46.patch,
  KAFKA-2132_2015-04-30_12:22:02.patch,
   KAFKA-2132_2015-04-30_15:53:17.patch,
  KAFKA-2132_2015-06-13_21:18:59.patch,
   KAFKA-2132_2015-06-24_10:19:56.patch,
  KAFKA-2132_2015-06-24_10:25:43.patch
   
   
Log4j appender is just a producer.
Since we have a new producer in the clients module, no need to keep
   Log4J appender in core and force people to package all of Kafka with
   their apps.
Lets move the Log4jAppender to clients module.
  
  
  
   --
   This message was sent by Atlassian JIRA
   (v6.3.4#6332)
  
 



Re: [jira] [Commented] (KAFKA-2132) Move Log4J appender to a separate module

2015-07-07 Thread Stevo Slavić
Correction to my previous email — it's slf4j-log4j12-1.7.6.jar and
slf4j-log4j12-1.6.1.jar that were being copied into dependant-libs, not
slf4j-api variants.

On Tue, Jul 7, 2015 at 8:27 AM, Stevo Slavić ssla...@gmail.com wrote:

 Hello Jun,

 I can easily reproduce the issue with previous commit (RAT).
 While on latest trunk branch:

 $ git checkout HEAD^
 $ gradle clean
 $ gradle copyDependantLibs
 $ ls -lart core/build/dependant-libs-2.10.5/

 lists slf4j-api-1.7.6.jar and slf4j-api-1.6.1.jar

 Not sure exactly which modification in last commit did it, but issue is no
 longer there as of last commit.

 Thanks!

 Kind regards,
 Stevo Slavic.

 On Tue, Jul 7, 2015 at 7:10 AM, Jun Rao j...@confluent.io wrote:

 Stevo,

 I don't see duplicated slf4j-log4j12 jars under core/build/dependant-libs
 after a clean build in trunk. If this is still an issue, could you file a
 jira and describe how to reproduce this?

 Thanks,

 Jun

 On Fri, Jun 26, 2015 at 2:19 PM, Stevo Slavić ssla...@gmail.com wrote:

  Are the changes for the KAFKA-2132 ticket also supposed to fix the bug
  that the core dependent libraries (core/build/dependant-libs), for all
  supported Scala versions, contain two versions of slf4j-log4j12
  (slf4j-log4j12-1.6.1.jar leaking from the zookeeper 3.4.6 dependency, and
  the test-scoped slf4j-log4j12-1.7.6.jar dependency - the latter is
  explicitly added for some reason in the copyDependantLibs task, but it
  does not override the slf4j-log4j12 leak from zookeeper)?
 
  It's the source of annoying:
 
  SLF4J: Class path contains multiple SLF4J bindings.
  SLF4J: Found binding in
 
 
 [jar:file:/Users/foo/kafka/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in
 
 
 [jar:file:/Users/foo/kafka/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
  explanation.
  SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 
  whenever a script like bin/kafka-topics.sh is run.
 
  Or should separate ticket be filed for this issue?
 
  Kind regard,
  Stevo Slavic.
 
  On Wed, Jun 24, 2015 at 7:26 PM, Ashish K Singh (JIRA) j...@apache.org
 
  wrote:
 
  
   [
  
 
 https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599778#comment-14599778
   ]
  
   Ashish K Singh commented on KAFKA-2132:
   ---
  
   Updated reviewboard https://reviews.apache.org/r/33614/
against branch trunk
  
Move Log4J appender to a separate module

   
Key: KAFKA-2132
URL:
 https://issues.apache.org/jira/browse/KAFKA-2132
Project: Kafka
 Issue Type: Improvement
   Reporter: Gwen Shapira
   Assignee: Ashish K Singh
Attachments: KAFKA-2132.patch,
   KAFKA-2132_2015-04-27_19:59:46.patch,
  KAFKA-2132_2015-04-30_12:22:02.patch,
   KAFKA-2132_2015-04-30_15:53:17.patch,
  KAFKA-2132_2015-06-13_21:18:59.patch,
   KAFKA-2132_2015-06-24_10:19:56.patch,
  KAFKA-2132_2015-06-24_10:25:43.patch
   
   
Log4j appender is just a producer.
Since we have a new producer in the clients module, no need to keep
   Log4J appender in core and force people to package all of Kafka with
   their apps.
Lets move the Log4jAppender to clients module.
  
  
  
   --
   This message was sent by Atlassian JIRA
   (v6.3.4#6332)
  
 





Re: [jira] [Commented] (KAFKA-2132) Move Log4J appender to a separate module

2015-06-26 Thread Stevo Slavić
Are the changes for the KAFKA-2132 ticket also supposed to fix the bug that
the core dependent libraries (core/build/dependant-libs), for all supported
Scala versions, contain two versions of slf4j-log4j12
(slf4j-log4j12-1.6.1.jar leaking from the zookeeper 3.4.6 dependency, and
the test-scoped slf4j-log4j12-1.7.6.jar dependency - the latter is
explicitly added for some reason in the copyDependantLibs task, but it does
not override the slf4j-log4j12 leak from zookeeper)?

It's the source of annoying:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/Users/foo/kafka/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/Users/foo/kafka/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

whenever a script like bin/kafka-topics.sh is run.
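
Until the build itself is fixed, an exclusion along these lines in
build.gradle should prevent the leak (a sketch, not tested against the
actual Kafka build):

dependencies {
    // keep zookeeper from dragging in its own slf4j binding, so only the
    // explicitly added slf4j-log4j12 ends up in dependant-libs
    compile('org.apache.zookeeper:zookeeper:3.4.6') {
        exclude group: 'org.slf4j', module: 'slf4j-log4j12'
    }
}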

Or should a separate ticket be filed for this issue?

Kind regards,
Stevo Slavic.

On Wed, Jun 24, 2015 at 7:26 PM, Ashish K Singh (JIRA) j...@apache.org
wrote:


 [
 https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599778#comment-14599778
 ]

 Ashish K Singh commented on KAFKA-2132:
 ---

 Updated reviewboard https://reviews.apache.org/r/33614/ against branch trunk

  Move Log4J appender to a separate module
  
 
  Key: KAFKA-2132
  URL: https://issues.apache.org/jira/browse/KAFKA-2132
  Project: Kafka
   Issue Type: Improvement
 Reporter: Gwen Shapira
 Assignee: Ashish K Singh
  Attachments: KAFKA-2132.patch,
 KAFKA-2132_2015-04-27_19:59:46.patch, KAFKA-2132_2015-04-30_12:22:02.patch,
 KAFKA-2132_2015-04-30_15:53:17.patch, KAFKA-2132_2015-06-13_21:18:59.patch,
 KAFKA-2132_2015-06-24_10:19:56.patch, KAFKA-2132_2015-06-24_10:25:43.patch
 
 
  Log4j appender is just a producer.
  Since we have a new producer in the clients module, no need to keep
 Log4J appender in core and force people to package all of Kafka with
 their apps.
  Let's move the Log4jAppender to the clients module.
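
  For context, a log4j configuration along these lines is all an application
  needs in order to use the appender (the class and property names here are
  assumptions based on the 0.8.x appender), which is why forcing apps to
  package all of core is heavy:

  log4j.rootLogger=INFO, KAFKA
  log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
  log4j.appender.KAFKA.brokerList=localhost:9092
  log4j.appender.KAFKA.topic=app-logs
  log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
  log4j.appender.KAFKA.layout.ConversionPattern=%d %p %m%n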



 --
 This message was sent by Atlassian JIRA
 (v6.3.4#6332)



Re: Dropping support for Scala 2.9.x

2015-03-27 Thread Stevo Slavić
+1 for dropping 2.9.x support

Kind regards,
Stevo Slavic.

On Fri, Mar 27, 2015 at 3:20 PM, Ismael Juma mli...@juma.me.uk wrote:

 Hi all,

 The Kafka build currently includes support for Scala 2.9, which means that
 it cannot take advantage of features introduced in Scala 2.10 or depend on
 libraries that require it.

 This restricts the solutions available while trying to solve existing
 issues. I was browsing JIRA looking for areas to contribute and I quickly
 ran into two issues where this is the case:

 * KAFKA-1351 ("String.format is very expensive in Scala") could be solved
 nicely by using the String interpolation feature introduced in Scala 2.10
 (see the sketch after this list).

 * KAFKA-1595 ("Remove deprecated and slower scala JSON parser from
 kafka.consumer.TopicCount") could be solved by using an existing JSON
 library, but both jackson-scala and play-json require 2.10 (argonaut
 supports Scala 2.9, but it brings other dependencies like scalaz). We can
 work around this by writing our own code instead of using libraries, of
 course, but it's not ideal.
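
 To make the KAFKA-1351 point concrete, a minimal sketch (the log message
 itself is hypothetical):

 val topic = "my-topic"; val partition = 3
 // String.format parses the format string on every call:
 val slow = "Fetching offsets for %s-%d".format(topic, partition)
 // The Scala 2.10 interpolated form compiles down to plain concatenation:
 val fast = s"Fetching offsets for $topic-$partition"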

 Other features like Scala Futures and value classes would also be useful in
 some situations, I would think (for a more extensive list of new features,
 see

 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
 ).

 Another pain point of supporting 2.9.x is that it doubles the number of
 build and test configurations required from 2 to 4 (because the 2.9.x
 series was not necessarily binary compatible).

 A strong argument for maintaining support for 2.9.x was the client library,
 but that has been rewritten in Java.

 It's also worth mentioning that Scala 2.9.1 was released in August 2011
 (more than 3.5 years ago) and the 2.9.x series hasn't received updates of
 any sort since early 2013. Scala 2.10.0, in turn, was released in January
 2013 (over 2 years ago) and 2.10.5, the last planned release in the 2.10.x
 series, has been recently released (so even 2.10.x won't be receiving
 updates any longer).

 All in all, I think it would not be unreasonable to drop support for Scala
 2.9.x in a future release, but I may be missing something. What do others
 think?

 Ismael