Exciting! Thanks for driving the release, David.
On Mon, Jan 24, 2022 at 9:04 AM David Jacot wrote:
>
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.1.0.
>
> It is a major release that includes many new features, including:
>
> * Apache Kafka supports Java
PMC.
Congratulations, David!
Gwen Shapira, on behalf of Apache Kafka PMC
ki, Thorsten Hake, Tom
> Bentley, tswstarplanet, vamossagar12, Vikas Singh, vinoth chandar, Vito
> Jeng, voffcheg109, xakassi, Xavier Léauté, Yuriy Badalyantc, Zach Zhang
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
> Bill Bejeck
--
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
> Successful Jenkins builds for the 2.7 branch:
> Unit/integration tests:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/81/
>
> Thanks,
> Bill
> https://github.com/apache/kafka/releases/tag/2.7.0-rc3
>
> * Documentation:
> https://kafka.apache.org/27/documentation.html
>
> * Protocol:
> https://kafka.apache.org/27/protocol.html
>
> * Successful Jenkins builds for the 2.7 branch:
> Unit/integration tests: (link to follow)
> System tests: (link to follow)
>
> Thanks,
> Bill
>
> Thanks for all the contributions, Sophie!
>
>
> Please join me to congratulate her!
> -Matthias
>
in the Apache Kafka community.
> Successful Jenkins builds for the 2.6 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/101/
> System tests: (link to follow)
>
>
> Thanks,
> Randall Hauch
>
Oh wow, I love this checklist. I don't think we'll have time to create one for
this release, but will be great to track this via JIRA and see if we can get
all those contributed before 2.6...
On Mon
Hi everyone,
I'm happy to announce that Colin McCabe, Vahid Hashemian and Manikumar
Reddy are now members of Apache Kafka PMC.
Colin and Manikumar became committers in September 2018 and Vahid in
January 2019. They all contributed many patches, code reviews and participated
in many KIP discussions. We
+1 (binding)
Validated signatures, tests and ran some test workloads.
Thank you so much for driving this. Mani.
On Mon, Dec 9, 2019 at 9:32 AM Manikumar wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is the fifth candidate for release of Apache Kafka 2.4.0.
>
> This
Congratulations Mickael! Well deserved!
On Thu, Nov 7, 2019 at 1:38 PM Jun Rao wrote:
>
> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer Mickael
> Maison.
>
> Mickael has been contributing to Kafka since 2016. He proposed and
> implemented multiple KIPs.
David,
Why do we have two site-doc packages, one for each Scala version? It
is just HTML, right? IIRC, in previous releases we only packaged the
docs once?
Gwen
On Fri, Oct 4, 2019 at 6:52 PM David Arthur wrote:
>
> Hello all, we identified a few bugs and a dependency update we wanted to
> get
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~cmccabe/kafka-2.3.0-rc3/javadoc/
> >
> > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.3.0-rc3
> >
> > best,
> > Colin
> >
> > C.
> >
;
> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.2.1-rc1
> >
> > * Documentation:
> > https://kafka.apache.org/22/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/22/protocol.html
> >
> > * Successful Jenkins builds for the 2.2 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
> >
> > Thanks!
> > --Vahid
> >
>
>
> --
>
> Thanks!
> --Vahid
> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.2/
>
> /**
>
> Thanks,
>
> -Matthias
>
>
tag:
> https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>
> * Jenkins builds for the 2.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>
> Thanks to everyone who tested the earlier RCs.
>
> cheers,
> Colin
Congrats, Vahid. Thank you for all your contribution!
On Tue, Jan 15, 2019, 2:45 PM Jason Gustafson wrote:
> Hi All,
>
> The PMC for Apache Kafka has invited Vahid Hashemian as a project
> committer and
> we are
> pleased to announce that he has accepted!
>
> Vahid has made numerous contributions to the
* Documentation:
> http://kafka.apache.org/20/documentation.html
>
> * Protocol:
> http://kafka.apache.org/20/protocol.html
>
> * Successful Jenkins builds for the 2.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/177/
>
> /***
Congrats Dong Lin! Well deserved!
On Mon, Aug 20, 2018, 3:55 AM Ismael Juma wrote:
> Hi everyone,
>
> Dong Lin became a committer in March 2018. Since then, he has remained
> active in the community and contributed a number of patches, reviewed
> several pull requests and participated in
>
> * Protocol:
>
> http://kafka.apache.org/20/protocol.html
>
>
> * Successful Jenkins builds for the 2.0 branch:
>
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/
>
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/
> 27/
>
>
> /**
>
>
> Thanks,
>
>
> Rajini
>
waiting for new segment to get created before
a new one is deleted.
>
> Thanks for the help!
> Simon Cooper
>
the community
discount code: KS18Comm25
Looking forward to your amazing abstracts and to seeing you all there.
Gwen Shapira
:
>
> https://github.com/apache/kafka/tree/1.1.0-rc4
>
>
>
> * Documentation:
>
> http://kafka.apache.org/11/documentation.html
>
>
> * Protocol:
>
> http://kafka.apache.org/11/protocol.html
>
>
>
> Thanks,
>
>
> Rajini
>
Dear Kafka Developers, Users and Fans,
Rajini Sivaram became a committer in April 2017. Since then, she has remained
active in the community and contributed major patches, reviews and KIP
discussions. I am glad to announce that Rajini is now a member of the
Apache Kafka PMC.
Congratulations, Rajini
ent: December 20, 2017
-
Presentations due for initial review: March 19, 2018
-
Presentations due for final approval: April 9, 2018
I hope to see you in London! Registration will open soon!
Gwen Shapira Kafka PMC and conference enthusiast
Hi,
One super minor issue (that can be fixed without a new RC): The big
exactly-once stuff (KIP-98) doesn't actually show up as new features in the
release notes. Most chunks appear as sub-tasks, but the new feature itself
(KAFKA-4815) is marked as 0.11.1.0 so this is missing. I get that this is
Congratulations :)
On Fri, Jun 9, 2017 at 1:49 PM Vahid S Hashemian
wrote:
> Great news.
>
> Congrats Damian!
>
> --Vahid
>
>
>
> From: Guozhang Wang
> To: "d...@kafka.apache.org" ,
> "users@kafka.apache.org"
Hi,
The discussion has been quite positive, so I posted a JIRA, a PR and
updated the KIP with the latest decisions.
Lets officially vote on the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default
JIRA is here:
(y/n) before deleting the topic (similar to how removing ACLs
> works).
>
> Thanks.
> --Vahid
>
>
>
> From: Gwen Shapira <g...@confluent.io>
> To: "d...@kafka.apache.org" <d...@kafka.apache.org>, Users
> <users@kafka.apache.org>
> Date:
Hi Kafka developers, users and friends,
I've added a KIP to improve our out-of-the-box usability a bit:
KIP-162: Enable topic deletion by default:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default
Pretty simple :) Discussion and feedback are welcome.
+1. Also not sure that adding a parameter to a CLI requires a KIP. It seems
excessive.
On Tue, May 9, 2017 at 7:57 PM Jay Kreps wrote:
> +1
> On Tue, May 9, 2017 at 3:41 PM BigData dev
> wrote:
>
> > Hi, Everyone,
> >
> > Since this is a relatively
lack-Postava, Guozhang Wang, Gwen Shapira,
Ismael Juma, Jason Gustafson, Konstantine Karantasis, Marco Ebert,
Matthias J. Sax, Michael G. Noll, Onur Karaman, Rajini Sivaram, Ryan
P, simplesteph, Vahid Hashemian
We welcome your help and feedback. For more information on how to
report problems, and to
, 2017 at 12:16 PM, Gwen Shapira <g...@confluent.io> wrote:
>
> Vote summary:
> +1: 6 (3 binding) - Eno, Ian, Guozhang, Jun, Gwen and Shimi
> 0: 0
> -1: 0
>
> W00t! 72 hours passed and we have 3 binding +1!
>
> Thank you for playing "bugfix release". See you
PM, Shimi Kiviti <shim...@gmail.com> wrote:
> +1
>
> I compiled our (Rollout.io) kafka-stream project, ran unit tests and
> end-to-end tests (against streams 0.10.2.1 and broker 0.10.1.1)
> Everything works as expected
>
> On Wed, Apr 26, 2017 at 10:05 PM, Gwen Shapir
e jars so I can test it against our kafka
> >> streams services?
> >>
> >> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska <eno.there...@gmail.com>
> >> wrote:
> >>
> >>
> >> +1 tested the usual streams t
n of original avro messages into kafka connect
> format and back in a source connector?
>
> Thank you,
> Stanislav.
>
The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
are pleased to announce that she has accepted!
Rajini contributed 83 patches, 8 KIPs (all security and quota
improvements) and a significant number of reviews. She is also on the
conference committee for Kafka Summit, where
the door already :P
Thanks,
Gwen
Hello Kafka users, developers and client-developers,
This is the third candidate for release of Apache Kafka 0.10.2.1.
It is a bug fix release, so we have lots of bug fixes, some super
important.
Release notes for the 0.10.2.1 release:
. I'm inclined to roll another RC for 0.10.2.1 that
includes this patch.
If there are no objections, I will roll out another RC and re-initiate the
vote. Thank you everyone for your patience. Fewer bugs are good for all of us
:)
On Wed, Apr 12, 2017 at 5:25 PM, Gwen Shapira <g...@confluent
Verified my own signatures, ran quickstart and created a few Connectors.
+1 (binding)
On Wed, Apr 12, 2017 at 5:25 PM, Gwen Shapira <g...@confluent.io> wrote:
> Hello Kafka users, developers, client-developers, friends, Romans,
> citizens, etc,
>
> This is the second can
:
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=e133f2ca57670e77f8114cc72dbc2f91a48e3a3b
* Documentation:
http://kafka.apache.org/0102/documentation.html
* Protocol:
http://kafka.apache.org/0102/protocol.html
/**
Thanks,
Gwen Shapira
>>> I ran the quickstart steps against the 2.11 binary. Everything worked fine
>>> +1
>>>
>>> On Wed, Apr 12, 2017 at 8:53 AM, Michal Borowiecki <
>>> michal.borowie...@openbet.com> wrote:
>>>
>>>> FWIW, I upgraded without issue
Wrong link :)
http://kafka.apache.org/documentation/#upgrade
and
http://kafka.apache.org/documentation/streams#streams_api_changes_0102
On Tue, Apr 11, 2017 at 5:57 PM, Gwen Shapira <g...@confluent.io> wrote:
> FYI: I just updated the upgrade notes with Streams changes:
> http://kafk
FYI: I just updated the upgrade notes with Streams changes:
http://kafka.apache.org/documentation/#gettingStarted
On Fri, Apr 7, 2017 at 5:12 PM, Gwen Shapira <g...@confluent.io> wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate
way, may be worthwhile to start a different discussion thread
about RC releases in Maven. Perhaps more knowledgeable people will see
it and jump in.
Gwen
On Tue, Apr 11, 2017 at 4:31 PM, Steven Schlansker
<sschlans...@opentable.com> wrote:
>
>> On Apr 7, 2017, at 5:12 PM,
Thank you for testing!!!
On Mon, Apr 10, 2017 at 7:36 AM, Mathieu Fenniak
<mathieu.fenn...@replicon.com> wrote:
> Hi Gwen,
>
> +1, looks good to me. Tested broker upgrades, and connect & streams
> applications.
>
> Mathieu
>
>
> On Fri, Apr 7, 2017 at 6:1
e.org/0102/protocol.html
Thanks,
Gwen Shapira
Am I missing something?
>
> On 23 February 2017 at 9:21:08 am, Gwen Shapira (g...@confluent.io) wrote:
>
> I saw them in Maven yesterday?
>
> On Wed, Feb 22, 2017 at 2:15 PM, Stephane Maarek
> <steph...@simplemachines.com.au> wrote:
> > Awesome thanks a lot! When sh
; Cosentino, Andrew Olson, Andrew Stevenson, Anton Karamanov, Antony
>> Stubbs, Apurva Mehta, Arun Mahadevan, Ashish Singh, Balint Molnar, Ben
>> Stopford, Bernard Leach, Bill Bejeck, Colin P. Mccabe, Damian Guy, Dan
>> Norwood, Dana Powers, dasl, Derrick Or, Dong Lin, Dustin Cote,
ll pass easily :)
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-121%3A+Add+KStream+peek+method
>
> I believe the PR attached is already in good shape to consider merging:
>
> https://github.com/apache/kafka/pull/2493
>
> Thanks!
> Steven
>
Just to clarify, we'll need to allow specifying topic and partition. I
don't think we want this on ALL partitions at once.
On Wed, Feb 8, 2017 at 3:35 PM, Gwen Shapira <g...@confluent.io> wrote:
> That's what I'd like to see. For example, suppose a Connect task fails
> beca
ion and --reset-file (path to JSON)
>>
>> Reset based on file
>>
>> 4. Only with --verify option and --reset-file (path to JSON)
>>
>> Verify file values with current offsets
>>
>> I think we can remove --generate-and-execute because is a bit clums
> running, such that the consumer will seek to the newly committed offset and
> start consuming from there?
>
> Not sure about this. I would recommend keeping it simple and asking users to
> stop consumers first. But I would consider it if the trade-offs are
> clear.
>
> @Matthias
>
> Ad
> On Tue, Feb 7, 2017 at 10:24 PM, Gwen Shapira <g...@confluent.io> wrote:
>
>> Thanks for the KIP. I'm super happy about adding the capability.
>>
>> I hate the interface, though. It looks exactly like the replica
>> assignment tool. A tool everyone loves s
:
> Hi all,
>
> I would like to propose a KIP to Add a tool to Reset Consumer Group Offsets.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+a+tool+to+Reset+Consumer+Group+Offsets
>
> Please, take a look at the proposal and share your feedback.
>
> Tha
> "test id=" + id + " command=" + command);
> command.setId(9);
>
> return new KeyValue<>(UUID.randomUUID().toString(), command);
> })
> .through(Serdes.String(), testSpecificAvroSerde, "test2");
>
>
> *test.avsc*
> {
> "type":
ple things, like how many records have been currently processed. The peek
>>>> method would allow those kinds of diagnostics and debugging.
>>>>
>>>> Gwen, it is possible to do this with the existing functionality like map,
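The peek semantics being discussed can be sketched language-agnostically: a pure pass-through that runs a side-effecting callback (counting, logging) without altering the stream, unlike map, which must return a value. A minimal illustrative sketch, not the actual Streams implementation:

```python
# Sketch of peek(): each record is handed to a diagnostic callback and then
# flows through unchanged.

def peek(records, action):
    for record in records:
        action(record)      # side effect only (metrics, debug logging)
        yield record        # record is forwarded as-is

seen = []
out = list(peek([1, 2, 3], seen.append))
assert out == [1, 2, 3]     # stream is unmodified
assert seen == [1, 2, 3]    # callback observed every record
```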
h is
> important to achieve.
>
> Below is the probable Avro schema (schema.txt) for reference (actually very
> complex compared to what is available to process):
>
> {
>   "type" : "record",
>   "namespace" : "mynamespace",
>   "name" : "myname",
>   "fields" : [{
>     "name" : "field1",
>     "type" : {
>       "type" : "record",
>       "name" : "Eventfield1",
>       "fields" : [{.}]
>     }
>   }]
> }
>
> Please help to implement the same.
>
> Regards,
> Kush
>
>> Please consider my contribution and hopefully you all like it and agree that
>> it should be merged into 0.10.3 :)
>> If not, be gentle, this is my first KIP!
>>
>> Happy Monday,
>> Steven
>>
>
s KIP focus on one thing.
>
> As mentioned in a previous reply, we plan to have at least one more KIP
> to clean up DSL -- this future KIP should include exact this change.
>
>
> -Matthias
>
>
> On 2/6/17 4:26 PM, Gwen Shapira wrote:
>> I like the cleanup a lot :)
>
ri, Feb 3, 2017 at 3:33 PM, Matthias J. Sax <matth...@confluent.io> wrote:
> Hi All,
>
> I did prepare a KIP to do some cleanup some of Kafka's Streaming API.
>
> Please have a look here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-120%3A+Cleanup+Kafka+Streams+builder+API
&
that process. I might need to do the clean-up as part of the Connect code
>>>>>> instead, or is there a better way of doing that?
>>>>>>
>>>>>> Thanks,
>>>>>> Eric
>>>>>>
>>>>>> On Sun, Jan 29, 2017 at 4:37 PM, Matthias J. Sax <matth...@confluent.io>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> currently, a Kafka Streams application is designed to "run forever"
>>>>>>> and there is no notion of "End of Batch" -- we have plans to add this
>>>>>>> though... (cf.
>>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-95%3A+Incremental+Batch+Processing+for+Kafka+Streams)
>>>>>>>
>>>>>>> Thus, right now you need to stop your application manually. You would
>>>>>>> need to observe the application's committed offsets (and lag) using
>>>>>>> bin/kafka-consumer-groups.sh (the application ID is used as group ID)
>>>>>>> to monitor the app's progress and see when it is done.
>>>>>>>
>>>>>>> Cf.
>>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Data+%28Re%29Processing+Scenarios
>>>>>>>
>>>>>>> -Matthias
>>>>>>>
>>>>>>> On 1/28/17 1:07 PM, Eric Dain wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I'm pretty new to Kafka Streams. I am using Kafka Streams to ingest a
>>>>>>>> large csv file. I need to run some clean-up code after all records in
>>>>>>>> the file are processed. Is there a way to send an "End of Batch" event
>>>>>>>> that is guaranteed to be processed after all records? If not, is there
>>>>>>>> an alternative solution?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Eric
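The monitoring approach Matthias describes — the app is "done" when the group's committed offset has reached the log-end offset on every input partition — reduces to a simple lag computation. A hedged sketch with plain dicts standing in for the offsets that bin/kafka-consumer-groups.sh (or the admin API) would report:

```python
# Sketch only: offsets are passed in as dicts keyed by (topic, partition).

def lags(end_offsets: dict, committed: dict) -> dict:
    """Per-partition lag; a partition with no committed offset is fully behind."""
    return {tp: end - committed.get(tp, 0) for tp, end in end_offsets.items()}

def batch_done(end_offsets: dict, committed: dict) -> bool:
    """The batch is processed once no partition shows remaining lag."""
    return all(lag <= 0 for lag in lags(end_offsets, committed).values())

end = {("input", 0): 100, ("input", 1): 50}
assert not batch_done(end, {("input", 0): 100, ("input", 1): 30})
assert batch_done(end, {("input", 0): 100, ("input", 1): 50})
```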
the list of support models with cost per year?
>
> Thanks
> Lincu
> Looking forward to your feedback.
>
>
> -Matthias
>
>
e and we do not need to calculate joins twice (one more time when the old
> value is received).
>
> 3. I'm wondering if it is worthwhile to add a "KStream#toTable()" function
> which is interpreted as a dummy aggregation where the new value always
> replaces the old value. I have seen a couple of use cases of this; for
> example, users want to read a changelog topic, apply some filters, and
> then materialize it into a KTable with state stores without creating
> duplicated changelog topics. With materialize() and toTable() I'd imagine
> users can specify sth. like:
>
> "
> KStream stream = builder.stream("topic1").filter(..);
> KTable table = stream.toTable(..);
> table.materialize("state1");
> "
>
> And the library in this case could set store "state1"'s changelog topic to
> be "topic1", applying the filter on the fly while (re-)storing its state
> by reading from this topic, instead of creating a second changelog topic
> like "appID-state1-changelog" which is a semi-duplicate of "topic1".
>
> Detailed:
>
> 1. I'm +1 with Michael regarding "#toStream"; actually I was thinking
> about renaming it to "#toChangeLog", but after thinking a bit more I think
> #toStream is still better, and we can just mention in the javadoc that it
> transforms the underlying changelog stream into a normal stream.
>
> 2. As Damian mentioned, there are a few scenarios where the serdes are
> already specified in a previous operation whereas they are not known
> before calling materialize, for example:
> stream.groupByKey.agg(serde).materialize(serde) vs.
> table.mapValues(/* no serde specified */).materialize(serde). We need to
> specify what the handling logic is here.
>
> 3. We can remove the "KTable#to" call as well, and require users to call
> "KTable.toStream.to" to be more clear.
>
> Guozhang
>
> On Wed, Jan 18, 2017 at 3:22 AM, Eno Thereska <eno.there...@gmail.com> wrote:
>
>> I think changing it to `toKStream` would make it absolutely clear what we
>> are converting it to.
>>
>> I'd say we should probably change the KStreamBuilder methods (but not in
>> this KIP).
>>
>> Thanks
>> Eno
>>
>>> On 17 Jan 2017, at 13:59, Michael Noll <mich...@confluent.io> wrote:
>>>
>>>> Rename toStream() to toKStream() for consistency.
>>>
>>> Not sure whether that is really required. We also use
>>> `KStreamBuilder#stream()` and `KStreamBuilder#table()`, for example, and
>>> don't care about the "K" prefix.
>>>
>>> On Tue, Jan 17, 2017 at 10:55 AM, Eno Thereska <eno.there...@gmail.com> wrote:
>>>
>>>> Thanks Damian, answers inline:
>>>>
>>>>> On 16 Jan 2017, at 17:17, Damian Guy <damian@gmail.com> wrote:
>>>>>
>>>>> Hi Eno,
>>>>>
>>>>> Thanks for the KIP. Some comments:
>>>>>
>>>>> 1. I'd probably rename materialized to materialize.
>>>>
>>>> Ok.
>>>>
>>>>> 2. I don't think the addition of the new log compaction mechanism is
>>>>> necessary for this KIP, i.e., the KIP is useful without it. Maybe that
>>>>> should be a different KIP?
>>>>
>>>> Agreed, already removed. Will do a separate KIP for that.
>>>>
>>>>> 3. What will happen when you call materialize on a KTable that is
>>>>> already materialized? Will it create another StateStore (providing the
>>>>> name is different), or throw an exception?
>>>>
>>>> Currently an exception is thrown, but see below.
>>>>
>>>>> 4. Have you considered overloading the existing KTable operations to
>>>>> add a state store name? So if a state store name is provided, then
>>>>> materialize a state store? This would be my preferred approach, as I
>>>>> don't think materialize is always a valid operation.
>>>>
>>>> Ok, I can see your point. This will increase the KIP size since I'll
>>>> need to enumerate all overloaded methods, but it's not a problem.
>>>>
>>>>> 5. The materialize method will need a value serde, as some operations,
>>>>> i.e., mapValues, join, etc. can change the value types.
>>>>> 6. https://issues.apache.org/jira/browse/KAFKA-4609 - might mean that
>>>>> we always need to materialize the StateStore for KTable-KTable joins.
>>>>> If that is the case, then the KTable join operators will also need
>>>>> serde information.
>>>>
>>>> I'll update the KIP with the serdes.
>>>>
>>>> Thanks
>>>> Eno
>>>>
>>>>> Cheers,
>>>>> Damian
>>>>>
>>>>> On Mon, 16 Jan 2017 at 16:44 Eno Thereska <eno.there...@gmail.com> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> We created "KIP-114: KTable materialization and improved semantics"
>>>>>> to solidify the KTable semantics in Kafka Streams:
>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-114%3A+KTable+materialization+and+improved+semantics
>>>>>>
>>>>>> Your feedback is appreciated.
>>>>>> Thanks
>>>>>> Eno
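The "dummy aggregation" reading of a hypothetical toTable() — the new value always replaces the old one — is exactly how a changelog materializes into a table. A minimal sketch (not the Streams implementation; the dict stands in for the state store):

```python
# Replay a (key, value) changelog into a table, applying a filter on the fly,
# so no second changelog topic is needed.

def materialize(changelog, value_filter=lambda v: True):
    table = {}
    for key, value in changelog:
        if value is None:          # tombstone deletes the key
            table.pop(key, None)
        elif value_filter(value):
            table[key] = value     # new value always replaces the old
    return table

log = [("a", 1), ("b", 2), ("a", 3), ("b", None)]
assert materialize(log) == {"a": 3}
```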
>>>>
>>>
>>>
>>>
>>>
>>>
>>> This message is for the designated recipient only and may contain
>>> privileged, proprietary, or otherwise confidential information. If you have
>>> received it in error, please notify the sender immediately and delete the
>>> original. Any other use of the e-mail by you is prohibited. Thank you in
>>> advance for your cooperation.
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
on the Admin protocol. Throughout this, he
displayed great technical judgment, high-quality work and willingness
to contribute where needed to make Apache Kafka awesome.
Thank you for your contributions, Grant :)
point me to configuration examples where this has been achieved?
>
> Regards,
> Abhishek
r.File=${kafka.logs.dir}/connect.log
> log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.connectAppender.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
>
> Thank you!
> Eric
nd json. The id column is basically topic+partition+offset (to
>> guarantee idempotence on upserts), and the json column is basically the
>> json document
>>
>> Is that feasible using the out of the box JDBC connector? I didn’t see any
>> support for “json type” fields
>>
>> Thanks,
>> Stephane
>>
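The upsert-key scheme described above (an id column built from topic+partition+offset) guarantees idempotence because a redelivered record maps to the same row instead of a duplicate. A sketch of the idea, with a dict standing in for the JDBC destination table — names are illustrative, not the connector's actual configuration:

```python
# Idempotent upsert keyed by Kafka coordinates: replays are harmless.

def upsert_key(topic: str, partition: int, offset: int) -> str:
    """topic+partition+offset uniquely identifies a record in Kafka."""
    return f"{topic}+{partition}+{offset}"

def upsert(table: dict, record: dict) -> None:
    key = upsert_key(record["topic"], record["partition"], record["offset"])
    table[key] = record["json"]  # last write wins

table = {}
batch = [
    {"topic": "orders", "partition": 0, "offset": 41, "json": '{"id": 1}'},
    {"topic": "orders", "partition": 0, "offset": 42, "json": '{"id": 2}'},
]
for rec in batch + batch:  # simulate an at-least-once redelivery
    upsert(table, rec)
assert len(table) == 2     # no duplicate rows despite the replay
```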
SinkTask, I get the SinkRecord that contains both key and
> value.
>
> Can some one suggest/outline the general guidelines for keys to be used
> with K-V store from the SinkRecord.
>
> What should be the key for external K-V store to be used to store a records
> from kafka topics to exte
/display/KAFKA/KIP-101+-+Alter+Replication+Protocol+to+use+Leader+Epoch+rather+than+High+Watermark+for+Truncation
>
>
> Ple
>> We will formally add the scala 2.12 support in future minor releases.
>>
>>
>> * Javadoc:
>> http://home.apache.org/~guozhang/kafka-0.10.1.1-rc1/javadoc/
>>
>> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> c3638376708ee6c02dfe4e57747acae0126fa6e7
>>
>>
>> Thanks,
>> Guozhang
>>
>> --
>> -- Guozhang
>>
is waiting for your abstracts :)
* Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=c3638376708ee6c02dfe4e57747acae0126fa6e7
>
>
> Thanks,
> Guozhang
>
> --
> -- Guozhang
ctoring:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+
> Documentation+Changes
>
>
> We are trying to do the same for Connect, Ops, Configs, APIs etc in the
> near future. Any comments, improvements, and contributions are welcome and
> encouraged.
>
>
&
Is there a chance to fix this such
> that a restart within the heartbeat interval does not lead to a re-balance?
> Would a well-defined client.id help?
>
> Regards
> Harald
>
entire message into memory. We don’t
>> want to presume any particular message size, and may not want to cache
>> the entire message in memory while processing it. Is there an
>> interface where we can consume messages via a stream, so that we can
>> read chunks of a message and process them based on some kind of batch
>> size that will allow us better control over memory usage?
>>
>>
>>
>
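Since the consumer API hands back whole messages, chunked processing has to happen on the application side once the payload is received. A hedged sketch of the idea (not a Kafka API): walk a message's payload in fixed-size slices via memoryview, so the per-chunk processing never copies the whole payload again:

```python
# Process a large payload in fixed-size chunks without extra copies.

def chunks(payload: bytes, size: int):
    view = memoryview(payload)           # zero-copy view over the payload
    for start in range(0, len(view), size):
        yield view[start:start + size]   # slice of the view, still no copy

message = b"x" * 10_000
total = sum(len(c) for c in chunks(message, 4096))
assert total == len(message)             # every byte is visited exactly once
```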
mediately by electronic mail and delete this message and
> all copies and backups thereof. Thank you. Greenway Health.
> This e-mail and any files transmitted with it are confidential, may contain
> sensitive information, and are intended solely for the use of the individual
> or
me here with a little technical debt if the costs weren't too high. If
> there are major issues then I can take on the client upgrade as well.
>
> Thanks in advance!
>
> --
>
> In Christ,
>
> Timmy V.
>
> http://blog.twonegatives.com/
> http://five.sentenc.es/
Hey Kafka Community,
I'm trying to take a pulse on the current state of the Kafka clients ecosystem.
Which languages are most popular in our community? What does the
community value in clients?
You can help me out by filling in the survey:
https://goo.gl/forms/cZg1CJyf1PuqivTg2
I will lock the
or if this is available somewhere on your
> website.
>
> Greatly appreciated!
>
> Costa Tsirbas
> 514.443.1439
ilable as a sticky note, but I could not find it.
>
> Thanks.
>
> --
> Raghav
Thank you, Vahid!
On Wed, Nov 16, 2016 at 1:53 PM, Vahid S Hashemian
<vahidhashem...@us.ibm.com> wrote:
> I'll open a JIRA.
>
> Andrew, let me know if you want to take over the implementation.
> Otherwise, I'd be happy to work on it.
>
> Thanks.
> --Vahid
>
pful, but a direct --group flag
> would be a simpler user interface for this common use case.
>
>
> --
> Cheers,
> Andrew
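The `--group` flag proposed above (such a flag was later added to the console consumer) amounts to setting the consumer's standard `group.id` configuration. A minimal sketch of the equivalent programmatic configuration, using only `java.util.Properties`; the host and group name are illustrative placeholders:

```java
import java.util.Properties;

public class ConsumerGroupConfig {
    public static void main(String[] args) {
        // Keys are standard Kafka consumer configuration names;
        // values (host, group name) are placeholders for illustration.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "console-consumer-demo"); // what a --group flag would set
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        System.out.println(props.getProperty("group.id"));
    }
}
```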
U Hua B <hua.b@alcatel-lucent.com>
>> wrote:
>>
>> > Hi,
>> >
>> >
>> > For a rolling upgrade, Kafka suggests upgrading the brokers one at a time
>> > (shut down the broker, update the code, and restart it) to avoid
>> > downtime during the upgrade.
>> > Usually there is one ZooKeeper ensemble serving the brokers in a Kafka
>> > cluster; should ZooKeeper be upgraded as well? If so, how can we avoid
>> > downtime during the ZooKeeper upgrade? Thanks!
>> >
>> >
>> >
>> >
>> >
>> >
>> > Best Regards
>> >
>> > Johnny
>> >
>> >
>>
other solutions for gracefully handling this instead of
> restarting the brokers?
rRebalanceListener
> would be sufficient.
>
> ____
> From: Gwen Shapira <g...@confluent.io>
> Sent: Monday, November 07, 2016 3:34:39 PM
> To: Users
> Subject: Re: consumer client pause/resume/rebalance
>
> I think the current behavior is fairly reasonable. Foll
// WARNING: if there is a rebalance, this call may return some records!!!
> consumer.poll(0);
> Uninterruptibles.sleepUninterruptibly(pauseWait, TimeUnit.MILLISECONDS);
> }
>
> consumer.resume(consumer.assignment().toArray(EMPTYTPARRAY));
>
>
> Thanks,
>
> Paul
>
>
>
vroSchema, compressionCodecName,
> blockSize, pageSize);
>
>
>
>
> From: Henry Kim
> Sent: Wednesday, November 2, 2016 2:46:27 PM
> To: users@kafka.apache.org
> Subject: HDFS Connector Compression?
>
>
> Is it possible to add compression to the HDFS Connecto
and painlessly manage
> real-time data pipelines on Apache Kafka.
>
> Thx!!
> Kenny Gorman
> Founder
> www.eventador.io
witter : @ppatierno<http://twitter.com/ppatierno>
> Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
> Blog : DevExperience<http://paolopatierno.wordpress.com/>
to have you on board as a committer and look forward to your
> continued participation!
>
> Joel
>
Oops. Sorry, didn't notice the 72h voting period had passed. You can
disregard.
Gwen
On Sat, Oct 29, 2016 at 4:29 PM, Gwen Shapira <g...@confluent.io> wrote:
> -1
>
> Kafka's development model is a good fit for critical path and
> well-established APIs. It doesn't work a
T Server implementation in Apache Kafka.
>
> Thanks,
> Harsha
>
it : (
>> > > > device-connection-invert-key-value-the) which obviously it doesn't
>> > find.
>> > > >
>> > > > Does somebody have a solution to delete it?
>> > > >
>> > > > Thanks in advance.
>> > > >
>> > > >
>> > > > Hamza
>> > > >
>> > > >
>> > >
>> >
>> > --
>> >
>> >
>>
e is no public
> benchmarking for 10GbE, I'd be happy to run benchmarks / publish results on
> this hardware if we can get it tuned up properly.
>
> What kind of broker/producer/consumer settings would you recommend?
>
> Thanks!
> - chris
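For saturating a 10GbE link, throughput-oriented producer tuning usually revolves around batching, lingering, and compression. A sketch of commonly cited knobs using only `java.util.Properties`; the keys are standard Kafka producer configuration names, but the specific values and the broker host are starting-point assumptions to benchmark against, not recommendations:

```java
import java.util.Properties;

public class ThroughputProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Standard producer config names; values are illustrative starting points.
        props.setProperty("bootstrap.servers", "broker1:9092");          // placeholder host
        props.setProperty("batch.size", String.valueOf(256 * 1024));     // larger per-partition batches
        props.setProperty("linger.ms", "20");                            // wait briefly to fill batches
        props.setProperty("compression.type", "lz4");                    // cheap CPU, less wire traffic
        props.setProperty("acks", "1");                                  // trades durability for throughput
        props.setProperty("buffer.memory", String.valueOf(128L * 1024 * 1024));
        System.out.println(props.getProperty("batch.size") + "," + props.getProperty("linger.ms"));
    }
}
```

Batching and compression tend to matter most for raw throughput; on the consumer side, `fetch.min.bytes` and `max.partition.fetch.bytes` are the analogous knobs to sweep in a benchmark.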