Re: Re: kafka streams coordinator error

2018-03-29 Thread 杰 杨
There are no errors in the broker server.log for that time period.


funk...@live.com

From: Sameer Kumar
Date: 2018-03-30 13:33
To: users
Subject: Re: kafka streams coordinator error
Check the Kafka broker logs as well, and see if there is an error there.

On Fri, Mar 30, 2018, 10:57 AM 杰 杨  wrote:

> Hi:
> I have been using Kafka Streams for a few days,
> and I hit a problem today. When I pushed 24 million (2400W) records
> through Kafka Streams and then wrote the data to HDFS,
> I found the final count was bigger than 24 million, and in the console I
> found this error:
>
>  Offset commit failed on partition -2 at offset 63514037: The
> coordinator is not aware of this member
>
> Kafka commits offsets in an internal topic.
> I set max.poll.records to 300, and I checked that max.poll.interval.ms is
> the default 30
> and I checked that one record takes about 1 ms to process.
> I wonder why this error is thrown?
>
> funk...@live.com
>


Re: kafka streams coordinator error

2018-03-29 Thread Sameer Kumar
Check the Kafka broker logs as well, and see if there is an error there.

On Fri, Mar 30, 2018, 10:57 AM 杰 杨  wrote:

> Hi:
> I have been using Kafka Streams for a few days,
> and I hit a problem today. When I pushed 24 million (2400W) records
> through Kafka Streams and then wrote the data to HDFS,
> I found the final count was bigger than 24 million, and in the console I
> found this error:
>
>  Offset commit failed on partition -2 at offset 63514037: The
> coordinator is not aware of this member
>
> Kafka commits offsets in an internal topic.
> I set max.poll.records to 300, and I checked that max.poll.interval.ms is
> the default 30
> and I checked that one record takes about 1 ms to process.
> I wonder why this error is thrown?
>
> funk...@live.com
>


kafka streams coordinator error

2018-03-29 Thread 杰 杨
Hi:
I have been using Kafka Streams for a few days,
and I hit a problem today. When I pushed 24 million (2400W) records through
Kafka Streams and then wrote the data to HDFS,
I found the final count was bigger than 24 million, and in the console I found this error:

 Offset commit failed on partition -2 at offset 63514037: The coordinator 
is not aware of this member

Kafka commits offsets in an internal topic.
I set max.poll.records to 300, and I checked that max.poll.interval.ms is the default 30
and I checked that one record takes about 1 ms to process.
I wonder why this error is thrown?


funk...@live.com
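
When this commit error appears, the consumer has usually been ejected from the group (for example, it exceeded max.poll.interval.ms or missed a rebalance), and already-processed records are then reprocessed by the rejoined or reassigned member, which would explain a final HDFS count above 24 million under at-least-once semantics. Below is a minimal, hedged sketch of the consumer settings usually examined for this error; the concrete values are illustrative assumptions, not taken from this thread:

```java
import java.util.Properties;

public class ConsumerTuning {
    // Illustrative settings only; the right values depend on the workload.
    static Properties consumerProps() {
        Properties props = new Properties();
        // Fewer records per poll() keeps each processing cycle short.
        props.put("max.poll.records", "300");
        // Upper bound on time between poll() calls before the consumer is
        // considered dead and the group rebalances (consumer default is
        // 300000 ms; note Kafka Streams may override this internally).
        props.put("max.poll.interval.ms", "300000");
        // Heartbeat/session settings also govern group membership:
        props.put("session.timeout.ms", "10000");
        props.put("heartbeat.interval.ms", "3000");
        return props;
    }

    public static void main(String[] args) {
        Properties p = consumerProps();
        // Sanity check: per-poll work (300 records at ~1 ms each) must fit
        // comfortably inside max.poll.interval.ms.
        long perPollMs = Long.parseLong(p.getProperty("max.poll.records")) * 1;
        long maxInterval = Long.parseLong(p.getProperty("max.poll.interval.ms"));
        System.out.println(perPollMs < maxInterval); // true
    }
}
```

If per-record processing really is ~1 ms, 300 records per poll fits easily inside the poll interval, so long GC pauses, blocking HDFS writes, or broker-side instability during rebalances are also worth checking.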


Is Restart needed after change in trust store for Kafka 1.1 ?

2018-03-29 Thread Raghav
Hi

We have a 3-node Kafka cluster running. From time to time, there are changes
to the trust store, and we restart Kafka to take the new changes into
account. We are on Kafka 0.10.x.

If we move to 1.1, would we still need to restart Kafka upon trust store
changes?

Thanks.

-- 
Raghav


partition replication vs. partition clustering ?

2018-03-29 Thread Victor L
Can someone clarify the difference between partition replication and
partition clustering? Or are they referring to the same thing?


Re: Kafka Stream - Building KTable in Kafka 1.0.0

2018-03-29 Thread Guozhang Wang
Hello Cédric,

Your observation is correct, and I think we have some obsolete docs that
we need to fix. In KIP-182 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-182%3A+Reduce+Streams+DSL+overloads+and+allow+easier+use+of+custom+storage+engines)
we effectively materialize all state stores with a changelog enabled
by default, but the javadocs have not been updated yet.

We are working on topology optimization techniques to re-enable such
optimizations in the near future:
https://issues.apache.org/jira/browse/KAFKA-6034.


Guozhang


On Thu, Mar 29, 2018 at 7:52 AM, Cedric BERTRAND <
bertrandcedric@gmail.com> wrote:

> Hello,
>
> In the new 1.0.0 API for building a KTable, it is written that no internal
> changelog topic is created:
>
> public <K, V> KTable<K, V> table(java.lang.String topic)
>
> Create a KTable for the specified topic. The default "auto.offset.reset"
> strategy and default key and value deserializers as specified in the config
> are used. Input records with null key will be dropped.
>
> Note that the specified input topics must be partitioned by key. If this is
> not the case the returned KTable will be corrupted.
>
> The resulting KTable will be materialized in a local KeyValueStore with an
> internal store name. Note that that store name may not be queryable through
> Interactive Queries. *No internal changelog topic is created since the
> original input topic can be used for recovery (cf. the methods of
> KGroupedStream and KGroupedTable that return a KTable).*
> Parameters: topic - the topic name; cannot be null
> Returns: a KTable for the specified topic
>
> My code is as follows: KTable table = builder.table("my_topic");
>
> When I look at the created topics I can see an internal topic
> "application_id-my_topicSTATE-STORE-02-changelog".
> Did I miss something?
> Thanks,
> Cédric
>



-- 
-- Guozhang


Re: Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-29 Thread Chen Zhu
Congratulations, Dong!

On Wed, Mar 28, 2018 at 7:04 PM, Hu Xi  wrote:

> Congrats, Dong Lin!
>
>
> 
> From: Matthias J. Sax 
> Sent: March 29, 2018, 6:37
> To: users@kafka.apache.org; d...@kafka.apache.org
> Subject: Re: [ANNOUNCE] New Committer: Dong Lin
>
> Congrats!
>
> On 3/28/18 1:16 PM, James Cheng wrote:
> > Congrats, Dong!
> >
> > -James
> >
> >> On Mar 28, 2018, at 10:58 AM, Becket Qin  wrote:
> >>
> >> Hello everyone,
> >>
> >> The PMC of Apache Kafka is pleased to announce that Dong Lin has
> accepted
> >> our invitation to be a new Kafka committer.
> >>
> >> Dong started working on Kafka about four years ago, and since then he has
> >> contributed numerous features and patches. His work on Kafka core has
> >> been consistent and important. Among his contributions, most noticeably,
> >> Dong developed JBOD (KIP-112, KIP-113) to handle disk failures and to
> >> reduce overall cost, and added the deleteDataBefore() API (KIP-107) to
> >> allow users to actively remove old messages. Dong has also been active in
> >> the community, participating in KIP discussions and doing code reviews.
> >>
> >> Congratulations and looking forward to your future contribution, Dong!
> >>
> >> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >
>
>


Re: Two instances of Kafka consumer reading same partition

2018-03-29 Thread Narayan Periwal
Hi,

We hit one more such issue, and it looks like these issues occur more
frequently whenever there is a problem in the Kafka cluster.
This time I could retrieve both the server-side and client-side logs, and
have added the details to the same ticket - KAFKA-6681


Any help will be highly appreciated.

Thanks,
Narayan

On Mon, Mar 19, 2018 at 6:45 PM, Narayan Periwal  wrote:

> Hi,
>
> We are facing an issue with the Kafka consumer, the new client library
> introduced in 0.9.
>
> We are using Kafka broker 0.10.2.1 and consumer client version is also
> 0.10.2.1
>
> The issue we have faced is that, after rebalancing, some of the
> partitions get consumed by 2 instances within a consumer group, leading to
> duplication of the entire partition data. Both instances continue to
> read the same partition until the next rebalance, or the restart of those
> clients.
>
> I have incorporated the details in this ticket - KAFKA-6681
> 
>
> Please look at it at the earliest.
>
> Regards,
> Narayan
>



-- 
Thanks,
Narayan



Re: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread James Cheng
Thanks Damian and Rajini for running the release! Congrats and good job 
everyone!

-James

Sent from my iPhone

> On Mar 29, 2018, at 2:27 AM, Rajini Sivaram  wrote:
> 
> The Apache Kafka community is pleased to announce the release for
> 
> Apache Kafka 1.1.0.
> 
> 
> Kafka 1.1.0 includes a number of significant new features.
> 
> Here is a summary of some notable changes:
> 
> 
> ** Kafka 1.1.0 includes significant improvements to the Kafka Controller
> 
>   that speed up controlled shutdown. ZooKeeper session expiration edge
> cases
> 
>   have also been fixed as part of this effort.
> 
> 
> ** Controller improvements also enable more partitions to be supported on a
> 
>   single cluster. KIP-227 introduced incremental fetch requests, providing
> 
>   more efficient replication when the number of partitions is large.
> 
> 
> ** KIP-113 added support for replica movement between log directories to
> 
>   enable data balancing with JBOD.
> 
> 
> ** Some of the broker configuration options like SSL keystores can now be
> 
>   updated dynamically without restarting the broker. See KIP-226 for
> details
> 
>   and the full list of dynamic configs.
> 
> 
> ** Delegation token based authentication (KIP-48) has been added to Kafka
> 
>   brokers to support a large number of clients without overloading Kerberos
> 
>   KDCs or other authentication servers.
> 
> 
> ** Several new features have been added to Kafka Connect, including header
> 
>   support (KIP-145), SSL and Kafka cluster identifiers in the Connect REST
> 
>   interface (KIP-208 and KIP-238), validation of connector names (KIP-212)
> 
>   and support for topic regex in sink connectors (KIP-215). Additionally,
> 
>   the default maximum heap size for Connect workers was increased to 2GB.
> 
> 
> ** Several improvements have been added to the Kafka Streams API, including
> 
>   reducing repartition topic partitions footprint, customizable error
> 
>   handling for produce failures and enhanced resilience to broker
> 
>   unavailability.  See KIPs 205, 210, 220, 224 and 239 for details.
> 
> 
> All of the changes in this release can be found in the release notes:
> 
> 
> 
> https://dist.apache.org/repos/dist/release/kafka/1.1.0/RELEASE_NOTES.html
> 
> 
> 
> 
> You can download the source release from:
> 
> 
> 
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka-1.1.0-src.tgz
> 
> 
> 
> and binary releases from:
> 
> 
> 
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz
> 
> (Scala 2.11)
> 
> 
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.12-1.1.0.tgz
> 
> (Scala 2.12)
> 
> 
> --
> 
> 
> 
> Apache Kafka is a distributed streaming platform with four core APIs:
> 
> 
> 
> ** The Producer API allows an application to publish a stream of records to
> 
> one or more Kafka topics.
> 
> 
> 
> ** The Consumer API allows an application to subscribe to one or more
> 
> topics and process the stream of records produced to them.
> 
> 
> 
> ** The Streams API allows an application to act as a stream processor,
> 
> consuming an input stream from one or more topics and producing an output
> 
> stream to one or more output topics, effectively transforming the input
> 
> streams to output streams.
> 
> 
> 
> ** The Connector API allows building and running reusable producers or
> 
> consumers that connect Kafka topics to existing applications or data
> 
> systems. For example, a connector to a relational database might capture
> 
> every change to a table.
> 
> 
> 
> 
> With these APIs, Kafka can be used for two broad classes of application:
> 
> ** Building real-time streaming data pipelines that reliably get data
> 
> between systems or applications.
> 
> 
> 
> ** Building real-time streaming applications that transform or react to the
> 
> streams of data.
> 
> 
> 
> 
> Apache Kafka is in use at large and small companies worldwide, including
> 
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> 
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
> 
> 
> 
> 
> A big thank you to the following 120 contributors to this release!
> 
> 
> Adem Efe Gencer, Alex Good, Andras Beni, Andy Bryant, Antony Stubbs,
> 
> Apurva Mehta, Arjun Satish, bartdevylder, Bill Bejeck, Charly Molter,
> 
> Chris Egerton, Clemens Valiente, cmolter, Colin P. Mccabe,
> 
> Colin Patrick McCabe, ConcurrencyPractitioner, Damian Guy, dan norwood,
> 
> Daniel Wojda, Derrick Or, Dmitry Minkovsky, Dong Lin, Edoardo Comar,
> 
> ekenny, Elyahou, Eugene Sevastyanov, Ewen Cheslack-Postava, Filipe Agapito,
> 
> fredfp, Gavrie Philipson, Gunnar Morling, Guozhang Wang, hmcl, Hugo Louro,
> 
> huxi, huxihx, Igor Kostiakov, Ismael Juma, Ivan Babrou, Jacek Laskowski,
> 
> Jakub Scholz, Jason Gustafson, Jeff Klukas, Jeff Widman, Jeremy
> Custenborder,
> 
> Jeyhun Karimov, Jiangjie (Becket) Qin, 

Re: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Ismael Juma
Thanks to Damian and Rajini for running the release and thanks to everyone
who helped make it happen!

Ismael

On Thu, Mar 29, 2018 at 2:27 AM, Rajini Sivaram  wrote:

> The Apache Kafka community is pleased to announce the release for
>
> Apache Kafka 1.1.0.
>
>
> Kafka 1.1.0 includes a number of significant new features.
>
> Here is a summary of some notable changes:
>
>
> ** Kafka 1.1.0 includes significant improvements to the Kafka Controller
>
>that speed up controlled shutdown. ZooKeeper session expiration edge
> cases
>
>have also been fixed as part of this effort.
>
>
> ** Controller improvements also enable more partitions to be supported on a
>
>single cluster. KIP-227 introduced incremental fetch requests, providing
>
>more efficient replication when the number of partitions is large.
>
>
> ** KIP-113 added support for replica movement between log directories to
>
>enable data balancing with JBOD.
>
>
> ** Some of the broker configuration options like SSL keystores can now be
>
>updated dynamically without restarting the broker. See KIP-226 for
> details
>
>and the full list of dynamic configs.
>
>
> ** Delegation token based authentication (KIP-48) has been added to Kafka
>
>brokers to support a large number of clients without overloading Kerberos
>
>KDCs or other authentication servers.
>
>
> ** Several new features have been added to Kafka Connect, including header
>
>support (KIP-145), SSL and Kafka cluster identifiers in the Connect REST
>
>interface (KIP-208 and KIP-238), validation of connector names (KIP-212)
>
>and support for topic regex in sink connectors (KIP-215). Additionally,
>
>the default maximum heap size for Connect workers was increased to 2GB.
>
>
> ** Several improvements have been added to the Kafka Streams API, including
>
>reducing repartition topic partitions footprint, customizable error
>
>handling for produce failures and enhanced resilience to broker
>
>unavailability.  See KIPs 205, 210, 220, 224 and 239 for details.
>
>
> All of the changes in this release can be found in the release notes:
>
>
>
> https://dist.apache.org/repos/dist/release/kafka/1.1.0/RELEASE_NOTES.html
>
>
>
>
> You can download the source release from:
>
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/
> kafka-1.1.0-src.tgz
>
>
>
> and binary releases from:
>
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/
> kafka_2.11-1.1.0.tgz
>
> (Scala 2.11)
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/
> kafka_2.12-1.1.0.tgz
>
> (Scala 2.12)
>
>
> 
> --
>
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
>
> ** The Producer API allows an application to publish a stream of records to
>
> one or more Kafka topics.
>
>
>
> ** The Consumer API allows an application to subscribe to one or more
>
> topics and process the stream of records produced to them.
>
>
>
> ** The Streams API allows an application to act as a stream processor,
>
> consuming an input stream from one or more topics and producing an output
>
> stream to one or more output topics, effectively transforming the input
>
> streams to output streams.
>
>
>
> ** The Connector API allows building and running reusable producers or
>
> consumers that connect Kafka topics to existing applications or data
>
> systems. For example, a connector to a relational database might capture
>
> every change to a table.
>
>
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
>
> between systems or applications.
>
>
>
> ** Building real-time streaming applications that transform or react to the
>
> streams of data.
>
>
>
>
> Apache Kafka is in use at large and small companies worldwide, including
>
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
>
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
>
>
>
> A big thank you to the following 120 contributors to this release!
>
>
> Adem Efe Gencer, Alex Good, Andras Beni, Andy Bryant, Antony Stubbs,
>
> Apurva Mehta, Arjun Satish, bartdevylder, Bill Bejeck, Charly Molter,
>
> Chris Egerton, Clemens Valiente, cmolter, Colin P. Mccabe,
>
> Colin Patrick McCabe, ConcurrencyPractitioner, Damian Guy, dan norwood,
>
> Daniel Wojda, Derrick Or, Dmitry Minkovsky, Dong Lin, Edoardo Comar,
>
> ekenny, Elyahou, Eugene Sevastyanov, Ewen Cheslack-Postava, Filipe Agapito,
>
> fredfp, Gavrie Philipson, Gunnar Morling, Guozhang Wang, hmcl, Hugo Louro,
>
> huxi, huxihx, Igor Kostiakov, Ismael Juma, Ivan Babrou, Jacek Laskowski,
>
> Jakub Scholz, Jason Gustafson, Jeff Klukas, Jeff Widman, Jeremy
> Custenborder,
>
> Jeyhun Karimov, Jiangjie (Becket) Qin, Jiangjie Qin, Jimin Hsieh, Joel
> Hamill,
>
> John Roesler, Jorge Quilcate Otoya, Jun 

Kafka Stream - Building KTable in Kafka 1.0.0

2018-03-29 Thread Cedric BERTRAND
Hello,

In the new 1.0.0 API for building a KTable, it is written that no internal
changelog topic is created:

public <K, V> KTable<K, V> table(java.lang.String topic)

Create a KTable for the specified topic. The default "auto.offset.reset"
strategy and default key and value deserializers as specified in the config
are used. Input records with null key will be dropped.

Note that the specified input topics must be partitioned by key. If this is
not the case the returned KTable will be corrupted.

The resulting KTable will be materialized in a local KeyValueStore with an
internal store name. Note that that store name may not be queryable through
Interactive Queries. *No internal changelog topic is created since the
original input topic can be used for recovery (cf. the methods of
KGroupedStream and KGroupedTable that return a KTable).*
Parameters: topic - the topic name; cannot be null
Returns: a KTable for the specified topic

My code is as follows: KTable table = builder.table("my_topic");

When I look at the created topics I can see an internal topic
"application_id-my_topicSTATE-STORE-02-changelog".
Did I miss something?
Thanks,
Cédric


Re: Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-29 Thread Viktor Somogyi
Congrats Dong! :)

On Thu, Mar 29, 2018 at 2:12 PM, Satish Duggana 
wrote:

> Congratulations Dong!
>
>
> On Thu, Mar 29, 2018 at 5:12 PM, Sandor Murakozi 
> wrote:
>
> > Congrats, Dong!
> >
> >
> > On Thu, Mar 29, 2018 at 2:15 AM, Dong Lin  wrote:
> >
> > > Thanks everyone!!
> > >
> > > It is my great pleasure to be part of the Apache Kafka community and
> help
> > > make Apache Kafka more useful to its users. I am super excited to be a
> > > Kafka committer and I am hoping to contribute more to its design,
> > > implementation and review etc in the future.
> > >
> > > Thanks!
> > > Dong
> > >
> > > On Wed, Mar 28, 2018 at 4:04 PM, Hu Xi  wrote:
> > >
> > > > Congrats, Dong Lin!
> > > >
> > > >
> > > > 
> > > > From: Matthias J. Sax 
> > > > Sent: March 29, 2018, 6:37
> > > > To: users@kafka.apache.org; d...@kafka.apache.org
> > > > Subject: Re: [ANNOUNCE] New Committer: Dong Lin
> > > >
> > > > Congrats!
> > > >
> > > > On 3/28/18 1:16 PM, James Cheng wrote:
> > > > > Congrats, Dong!
> > > > >
> > > > > -James
> > > > >
> > > > >> On Mar 28, 2018, at 10:58 AM, Becket Qin 
> > > wrote:
> > > > >>
> > > > >> Hello everyone,
> > > > >>
> > > > >> The PMC of Apache Kafka is pleased to announce that Dong Lin has
> > > > accepted
> > > > >> our invitation to be a new Kafka committer.
> > > > >>
> > > > >> Dong started working on Kafka about four years ago, and since then
> > > > >> he has contributed numerous features and patches. His work on Kafka
> > > > >> core has been consistent and important. Among his contributions,
> > > > >> most noticeably, Dong developed JBOD (KIP-112, KIP-113) to handle
> > > > >> disk failures and to reduce overall cost, and added the
> > > > >> deleteDataBefore() API (KIP-107) to allow users to actively remove
> > > > >> old messages. Dong has also been active in the community,
> > > > >> participating in KIP discussions and doing code reviews.
> > > > >>
> > > > >> Congratulations and looking forward to your future contribution,
> > Dong!
> > > > >>
> > > > >> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> > > > >
> > > >
> > > >
> > >
> >
>


Re: Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-29 Thread Satish Duggana
Congratulations Dong!


On Thu, Mar 29, 2018 at 5:12 PM, Sandor Murakozi 
wrote:

> Congrats, Dong!
>
>
> On Thu, Mar 29, 2018 at 2:15 AM, Dong Lin  wrote:
>
> > Thanks everyone!!
> >
> > It is my great pleasure to be part of the Apache Kafka community and help
> > make Apache Kafka more useful to its users. I am super excited to be a
> > Kafka committer and I am hoping to contribute more to its design,
> > implementation and review etc in the future.
> >
> > Thanks!
> > Dong
> >
> > On Wed, Mar 28, 2018 at 4:04 PM, Hu Xi  wrote:
> >
> > > Congrats, Dong Lin!
> > >
> > >
> > > 
> > > From: Matthias J. Sax 
> > > Sent: March 29, 2018, 6:37
> > > To: users@kafka.apache.org; d...@kafka.apache.org
> > > Subject: Re: [ANNOUNCE] New Committer: Dong Lin
> > >
> > > Congrats!
> > >
> > > On 3/28/18 1:16 PM, James Cheng wrote:
> > > > Congrats, Dong!
> > > >
> > > > -James
> > > >
> > > >> On Mar 28, 2018, at 10:58 AM, Becket Qin 
> > wrote:
> > > >>
> > > >> Hello everyone,
> > > >>
> > > >> The PMC of Apache Kafka is pleased to announce that Dong Lin has
> > > accepted
> > > >> our invitation to be a new Kafka committer.
> > > >>
> > > >> Dong started working on Kafka about four years ago, and since then he
> > > >> has contributed numerous features and patches. His work on Kafka core
> > > >> has been consistent and important. Among his contributions, most
> > > >> noticeably, Dong developed JBOD (KIP-112, KIP-113) to handle disk
> > > >> failures and to reduce overall cost, and added the deleteDataBefore()
> > > >> API (KIP-107) to allow users to actively remove old messages. Dong has
> > > >> also been active in the community, participating in KIP discussions
> > > >> and doing code reviews.
> > > >>
> > > >> Congratulations and looking forward to your future contribution,
> Dong!
> > > >>
> > > >> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> > > >
> > >
> > >
> >
>


Re: Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-29 Thread Sandor Murakozi
Congrats, Dong!


On Thu, Mar 29, 2018 at 2:15 AM, Dong Lin  wrote:

> Thanks everyone!!
>
> It is my great pleasure to be part of the Apache Kafka community and help
> make Apache Kafka more useful to its users. I am super excited to be a
> Kafka committer and I am hoping to contribute more to its design,
> implementation and review etc in the future.
>
> Thanks!
> Dong
>
> On Wed, Mar 28, 2018 at 4:04 PM, Hu Xi  wrote:
>
> > Congrats, Dong Lin!
> >
> >
> > 
> > From: Matthias J. Sax 
> > Sent: March 29, 2018, 6:37
> > To: users@kafka.apache.org; d...@kafka.apache.org
> > Subject: Re: [ANNOUNCE] New Committer: Dong Lin
> >
> > Congrats!
> >
> > On 3/28/18 1:16 PM, James Cheng wrote:
> > > Congrats, Dong!
> > >
> > > -James
> > >
> > >> On Mar 28, 2018, at 10:58 AM, Becket Qin 
> wrote:
> > >>
> > >> Hello everyone,
> > >>
> > >> The PMC of Apache Kafka is pleased to announce that Dong Lin has
> > accepted
> > >> our invitation to be a new Kafka committer.
> > >>
> > >> Dong started working on Kafka about four years ago, and since then he
> > >> has contributed numerous features and patches. His work on Kafka core
> > >> has been consistent and important. Among his contributions, most
> > >> noticeably, Dong developed JBOD (KIP-112, KIP-113) to handle disk
> > >> failures and to reduce overall cost, and added the deleteDataBefore()
> > >> API (KIP-107) to allow users to actively remove old messages. Dong has
> > >> also been active in the community, participating in KIP discussions
> > >> and doing code reviews.
> > >>
> > >> Congratulations and looking forward to your future contribution, Dong!
> > >>
> > >> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> > >
> >
> >
>


Re: Need help in kafka hdfs connector

2018-03-29 Thread Amrit Jangid
Try this: https://github.com/pinterest/secor



On Thu, Mar 29, 2018 at 2:36 PM, Santosh Kumar J P <
santoshkumar...@gmail.com> wrote:

> Hi,
>
> Are there any other Kafka-to-HDFS connector implementations besides the
> Confluent HDFS connector?
>
> Thank you,
> Regards,
> Santosh
>


Re: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Edoardo Comar
Great to hear! Thanks for driving the release process.
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Mickael Maison 
To: Users 
Cc: kafka-clients , dev 

Date:   29/03/2018 10:46
Subject: Re: [ANNOUNCE] Apache Kafka 1.1.0 Released



Great news, thanks Damian and Rajini for running this release!

On Thu, Mar 29, 2018 at 10:33 AM, Rajini Sivaram
 wrote:
> Resending to kafka-clients group:
>
> -- Forwarded message --
> From: Rajini Sivaram 
> Date: Thu, Mar 29, 2018 at 10:27 AM
> Subject: [ANNOUNCE] Apache Kafka 1.1.0 Released
> To: annou...@apache.org, Users , dev <
> d...@kafka.apache.org>, kafka-clients 
>
>
> The Apache Kafka community is pleased to announce the release for
>
> Apache Kafka 1.1.0.
>
>
> Kafka 1.1.0 includes a number of significant new features.
>
> Here is a summary of some notable changes:
>
>
> ** Kafka 1.1.0 includes significant improvements to the Kafka Controller
>
>that speed up controlled shutdown. ZooKeeper session expiration edge
> cases
>
>have also been fixed as part of this effort.
>
>
> ** Controller improvements also enable more partitions to be supported 
on a
>
>single cluster. KIP-227 introduced incremental fetch requests, 
providing
>
>more efficient replication when the number of partitions is large.
>
>
> ** KIP-113 added support for replica movement between log directories to
>
>enable data balancing with JBOD.
>
>
> ** Some of the broker configuration options like SSL keystores can now 
be
>
>updated dynamically without restarting the broker. See KIP-226 for
> details
>
>and the full list of dynamic configs.
>
>
> ** Delegation token based authentication (KIP-48) has been added to 
Kafka
>
>brokers to support a large number of clients without overloading 
Kerberos
>
>KDCs or other authentication servers.
>
>
> ** Several new features have been added to Kafka Connect, including 
header
>
>support (KIP-145), SSL and Kafka cluster identifiers in the Connect 
REST
>
>interface (KIP-208 and KIP-238), validation of connector names 
(KIP-212)
>
>and support for topic regex in sink connectors (KIP-215). 
Additionally,
>
>the default maximum heap size for Connect workers was increased to 
2GB.
>
>
> ** Several improvements have been added to the Kafka Streams API, 
including
>
>reducing repartition topic partitions footprint, customizable error
>
>handling for produce failures and enhanced resilience to broker
>
>unavailability.  See KIPs 205, 210, 220, 224 and 239 for details.
>
>
> All of the changes in this release can be found in the release notes:
>
>
>
> https://dist.apache.org/repos/dist/release/kafka/1.1.0/RELEASE_NOTES.html
>
>
>
>
> You can download the source release from:
>
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka-1.1.0-src.tgz
>
>
>
> and binary releases from:
>
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz
>
> (Scala 2.11)
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.12-1.1.0.tgz
>
> (Scala 2.12)
>
>
> 
> --
>
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
>
> ** The Producer API allows an application to publish a stream of records to
>
> one or more Kafka topics.
>
>
>
> ** The Consumer API allows an application to subscribe to one or more
>
> topics and process the stream of records produced to them.
>
>
>
> ** The Streams API allows an application to act as a stream processor,
>
> consuming an input stream from one or more topics and producing an 
output
>
> stream to one or more output topics, effectively transforming the input
>
> streams to output streams.
>
>
>
> ** The Connector API allows building and running reusable producers or
>
> 

Re: partition selection with message key

2018-03-29 Thread Manikumar
Yes. As long as you use the same partitioner and have the same number of
partitions, messages with the same key will go to the same partition.

On Thu, Mar 29, 2018 at 3:11 PM, Victor L  wrote:

> I am looking for the best method to keep consumption of messages in the
> same order the client produced them; one thing I am looking at is using the
> message key to select the partition:
> "If a valid partition number is specified, that partition will be used when
> sending the record. If no partition is specified but a key is present, a
> partition will be chosen using a hash of the key."
> Does that mean messages with the same key are placed into the same
> partition in the order produced?
> Thank you,
>
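
The idea in this thread — same key, same partitioner, and same partition count imply the same partition, and Kafka preserves order within a partition — can be sketched with a stdlib-only example. Note that Kafka's real DefaultPartitioner hashes the serialized key bytes with murmur2; `String.hashCode` below is a deliberate simplification for illustration only:

```java
public class KeyPartitioning {
    // Illustrative stand-in for Kafka's DefaultPartitioner, which hashes the
    // serialized key with murmur2; here String.hashCode is used instead.
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is non-negative before the modulo.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-42", 6);
        int p2 = partitionFor("order-42", 6);
        // Same key + same partition count -> same partition, so per-key
        // ordering is preserved within that partition.
        System.out.println(p1 == p2); // true
        // Changing the partition count generally changes the mapping:
        System.out.println(partitionFor("order-42", 6) + " vs " + partitionFor("order-42", 7));
    }
}
```

If the number of partitions changes, the key-to-partition mapping generally changes too, so the per-key ordering guarantee only holds while the partition count stays stable.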


Re: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Mickael Maison
Great news, thanks Damian and Rajini for running this release!

On Thu, Mar 29, 2018 at 10:33 AM, Rajini Sivaram
 wrote:
> Resending to kafka-clients group:
>
> -- Forwarded message --
> From: Rajini Sivaram 
> Date: Thu, Mar 29, 2018 at 10:27 AM
> Subject: [ANNOUNCE] Apache Kafka 1.1.0 Released
> To: annou...@apache.org, Users , dev <
> d...@kafka.apache.org>, kafka-clients 

partition selection with message key

2018-03-29 Thread Victor L
I am looking for the best method to keep consumption of messages in the
same order the client produced them; one thing I am considering is using
the message key to select the partition:
If a valid partition number is specified, that partition will be used when
sending the record. If no partition is specified but a key is present, a
partition will be chosen using a hash of the key.
Does that mean messages with the same key will be placed into the same
partition in the order they were produced?
Thank you,


Fwd: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Rajini Sivaram
Resending to kafka-clients group:

-- Forwarded message --
From: Rajini Sivaram 
Date: Thu, Mar 29, 2018 at 10:27 AM
Subject: [ANNOUNCE] Apache Kafka 1.1.0 Released
To: annou...@apache.org, Users , dev <
d...@kafka.apache.org>, kafka-clients 



[ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Rajini Sivaram
The Apache Kafka community is pleased to announce the release of

Apache Kafka 1.1.0.


Kafka 1.1.0 includes a number of significant new features.

Here is a summary of some notable changes:


** Kafka 1.1.0 includes significant improvements to the Kafka Controller

   that speed up controlled shutdown. ZooKeeper session expiration edge
cases

   have also been fixed as part of this effort.


** Controller improvements also enable more partitions to be supported on a

   single cluster. KIP-227 introduced incremental fetch requests, providing

   more efficient replication when the number of partitions is large.


** KIP-113 added support for replica movement between log directories to

   enable data balancing with JBOD.


** Some of the broker configuration options like SSL keystores can now be

   updated dynamically without restarting the broker. See KIP-226 for
details

   and the full list of dynamic configs.


** Delegation token based authentication (KIP-48) has been added to Kafka

   brokers to support a large number of clients without overloading Kerberos

   KDCs or other authentication servers.


** Several new features have been added to Kafka Connect, including header

   support (KIP-145), SSL and Kafka cluster identifiers in the Connect REST

   interface (KIP-208 and KIP-238), validation of connector names (KIP-212)

   and support for topic regex in sink connectors (KIP-215). Additionally,

   the default maximum heap size for Connect workers was increased to 2GB.


** Several improvements have been added to the Kafka Streams API, including

   reducing repartition topic partitions footprint, customizable error

   handling for produce failures and enhanced resilience to broker

   unavailability.  See KIPs 205, 210, 220, 224 and 239 for details.
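To make the dynamic-configuration bullet above concrete: KIP-226 lets kafka-configs.sh alter broker configs over the wire instead of requiring an edit to server.properties and a restart. A hedged sketch, assuming a 1.1.0+ cluster; the host, broker id, keystore path, and password below are placeholders:

```shell
# List the dynamic configs currently set on broker 0
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 0 --describe

# Update broker 0's SSL keystore without a restart
# (path and password are placeholders)
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 0 --alter \
  --add-config ssl.keystore.location=/path/to/new.keystore.jks,ssl.keystore.password=keystore-password
```

See KIP-226 for which broker configs are dynamically updatable and at which scope (per-broker vs. cluster-wide).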


All of the changes in this release can be found in the release notes:



https://dist.apache.org/repos/dist/release/kafka/1.1.0/RELEASE_NOTES.html




You can download the source release from:



https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka-1.1.0-src.tgz



and binary releases from:



https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz

(Scala 2.11)


https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.12-1.1.0.tgz

(Scala 2.12)


--



Apache Kafka is a distributed streaming platform with four core APIs:



** The Producer API allows an application to publish a stream of records to

one or more Kafka topics.



** The Consumer API allows an application to subscribe to one or more

topics and process the stream of records produced to them.



** The Streams API allows an application to act as a stream processor,

consuming an input stream from one or more topics and producing an output

stream to one or more output topics, effectively transforming the input

streams to output streams.



** The Connector API allows building and running reusable producers or

consumers that connect Kafka topics to existing applications or data

systems. For example, a connector to a relational database might capture

every change to a table.
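The division of labor among these APIs can be sketched with a toy, dependency-free model. The names below (produce, stream_transform, consume) are illustrative stand-ins, not actual Kafka client APIs, and in-memory lists stand in for topics:

```python
from collections import defaultdict

# In-memory stand-ins for Kafka topics: topic name -> list of (key, value) records.
topics = defaultdict(list)

def produce(topic, key, value):
    """Producer role: append a record to a topic."""
    topics[topic].append((key, value))

def stream_transform(in_topic, out_topic, fn):
    """Streams role: read an input topic, transform each record, write an output topic."""
    for key, value in topics[in_topic]:
        produce(out_topic, key, fn(value))

def consume(topic):
    """Consumer role: read the records from a topic."""
    return list(topics[topic])

produce("words", None, "kafka")
produce("words", None, "streams")
stream_transform("words", "upper-words", str.upper)
print(consume("upper-words"))  # [(None, 'KAFKA'), (None, 'STREAMS')]
```

Real Kafka adds what this sketch omits: partitioned, replicated, durable topics, consumer groups, and offset management.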




With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data

between systems or applications.



** Building real-time streaming applications that transform or react to the

streams of data.




Apache Kafka is in use at large and small companies worldwide, including

Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,

Target, The New York Times, Uber, Yelp, and Zalando, among others.




A big thank you for the following 120 contributors to this release!


Adem Efe Gencer, Alex Good, Andras Beni, Andy Bryant, Antony Stubbs,

Apurva Mehta, Arjun Satish, bartdevylder, Bill Bejeck, Charly Molter,

Chris Egerton, Clemens Valiente, cmolter, Colin P. Mccabe,

Colin Patrick McCabe, ConcurrencyPractitioner, Damian Guy, dan norwood,

Daniel Wojda, Derrick Or, Dmitry Minkovsky, Dong Lin, Edoardo Comar,

ekenny, Elyahou, Eugene Sevastyanov, Ewen Cheslack-Postava, Filipe Agapito,

fredfp, Gavrie Philipson, Gunnar Morling, Guozhang Wang, hmcl, Hugo Louro,

huxi, huxihx, Igor Kostiakov, Ismael Juma, Ivan Babrou, Jacek Laskowski,

Jakub Scholz, Jason Gustafson, Jeff Klukas, Jeff Widman, Jeremy
Custenborder,

Jeyhun Karimov, Jiangjie (Becket) Qin, Jiangjie Qin, Jimin Hsieh, Joel
Hamill,

John Roesler, Jorge Quilcate Otoya, Jun Rao, Kamal C, Kamil Szymański,

Koen De Groote, Konstantine Karantasis, lisa2lisa, Logan Buckley,

Magnus Edenhill, Magnus Reftel, Manikumar Reddy, Manikumar Reddy O,
manjuapu,

Manjula K, Mats Julian Olsen, Matt Farmer, Matthias J. Sax,

Matthias Wessendorf, Max Zheng, Maytee Chinavanichkit, Mickael Maison,
Mikkin,

mulvenna, Narendra kumar, Nick Chiu, Onur Karaman, Panuwat Anawatmongkhon,

Paolo Patierno, parafiend, ppatierno, Prasanna Gautam, Radai 

Need help in kafka hdfs connector

2018-03-29 Thread Santosh Kumar J P
Hi,

Do we have any Kafka HDFS connector implementations other than the
Confluent HDFS connector?

Thank you,
Regards,
Santosh


Re: Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-29 Thread Edoardo Comar
congratulations Dong!
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Hu Xi 
To: "users@kafka.apache.org" , 
"d...@kafka.apache.org" 
Date:   29/03/2018 00:04
Subject:Re: [ANNOUNCE] New Committer: Dong Lin



Congrats, Dong Lin!



From: Matthias J. Sax 
Sent: March 29, 2018 6:37
To: users@kafka.apache.org; d...@kafka.apache.org
Subject: Re: [ANNOUNCE] New Committer: Dong Lin

Congrats!

On 3/28/18 1:16 PM, James Cheng wrote:
> Congrats, Dong!
>
> -James
>
>> On Mar 28, 2018, at 10:58 AM, Becket Qin  wrote:
>>
>> Hello everyone,
>>
>> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
>> our invitation to be a new Kafka committer.
>>
>> Dong started working on Kafka about four years ago, and since then he has
>> contributed numerous features and patches. His work on Kafka core has been
>> consistent and important. Among his contributions, most notably, Dong
>> developed JBOD support (KIP-112, KIP-113) to handle disk failures and
>> reduce overall cost, and added the deleteDataBefore() API (KIP-107) to
>> allow users to actively remove old messages. Dong has also been active in
>> the community, participating in KIP discussions and doing code reviews.
>>
>> Congratulations and looking forward to your future contribution, Dong!
>>
>> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>




Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU