Thanks for driving the release, Damian.
> On Sep 13, 2017, at 1:18 PM, Guozhang Wang wrote:
>
> Thanks for driving this Damian!
>
>
> Guozhang
>
>> On Wed, Sep 13, 2017 at 4:36 AM, Damian Guy wrote:
>>
>> The Apache Kafka community is pleased to
Congrats Damian! Thanks for all your contributions.
On Fri, Jun 9, 2017 at 2:52 PM, Martin Gainty wrote:
> congratulations damian!
>
>
> Martin
>
>
>
> From: Gwen Shapira
> Sent: Friday, June 9, 2017 4:55 PM
> To:
+1
On Wed, May 10, 2017 at 9:45 AM, Neha Narkhede wrote:
> +1
>
> On Wed, May 10, 2017 at 12:32 PM Gwen Shapira wrote:
>
> > +1. Also not sure that adding a parameter to a CLI requires a KIP. It seems excessive.
> >
> >
> > On Tue, May 9, 2017 at
Congrats Rajini!
On Mon, Apr 24, 2017 at 2:21 PM, Onur Karaman
wrote:
> Congrats!
>
> On Mon, Apr 24, 2017 at 2:20 PM, Guozhang Wang wrote:
>
> > Congrats Rajini!
> >
> > Guozhang
> >
> > On Mon, Apr 24, 2017 at 2:08 PM, Vahid S Hashemian <
> >
StreamsBuilder would be my vote.
> On Mar 13, 2017, at 9:42 PM, Jay Kreps wrote:
>
> Hey Matthias,
>
> Makes sense, I'm more advocating for removing the word topology than any
> particular new replacement.
>
> -Jay
>
> On Mon, Mar 13, 2017 at 12:30 PM, Matthias J. Sax
Thanks Ewen for driving this.
On Wed, Feb 22, 2017 at 12:40 AM, Guozhang Wang wrote:
> Thanks Ewen for driving the release!
>
> Guozhang
>
> On Wed, Feb 22, 2017 at 12:33 AM, Ewen Cheslack-Postava wrote:
>
> > The Apache Kafka community is pleased to
Congratulations Grant!
On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira wrote:
> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
>
+1
On Wed, Jan 11, 2017 at 11:10 AM, Ismael Juma wrote:
> Thanks for raising this, +1.
>
> Ismael
>
> On Wed, Jan 11, 2017 at 6:56 PM, Ben Stopford wrote:
>
> > Looks like there was a good consensus on the discuss thread for KIP-106, so let's move to a
Congratulations!
On Mon, Oct 31, 2016 at 12:23 PM, Ismael Juma wrote:
> Congratulations Becket. :)
>
> Ismael
>
> On 31 Oct 2016 1:44 pm, "Joel Koshy" wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> > committer and
-1 for all the reasons that have been described before. This does not need
to be part of the core project.
On Tue, Oct 25, 2016 at 3:25 PM, Suresh Srinivas
wrote:
> +1.
>
> This provides HTTP access to core Kafka. This is very much needed as part of Apache Kafka under ASF
Congratulations Jason!
On Tue, Sep 6, 2016 at 3:40 PM, Vahid S Hashemian wrote:
> Congratulations Jason on this very well deserved recognition.
>
> --Vahid
>
>
>
> From: Neha Narkhede
> To: "d...@kafka.apache.org" ,
>
Congratulations!
On Tue, Apr 26, 2016 at 6:57 AM, Gwen Shapira wrote:
> Congratulations, very well deserved.
> On Apr 25, 2016 10:53 PM, "Neha Narkhede" wrote:
>
> > The PMC for Apache Kafka has invited Ismael Juma to join as a committer and
> > we are
this mixed serialization with marker is itself a
serializer type and should have a serializer of its own...
-Jay
On Fri, Dec 5, 2014 at 3:48 PM, Sriram Subramanian
srsubraman...@linkedin.com.invalid wrote:
This thread has diverged multiple times now and it would be worth summarizing them.
There seems to be the following points of discussion -
1. Can we keep the serialization semantics outside the Producer interface and have simple bytes in / bytes out for the interface (This is what we have
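Option 1 above (a purely byte-oriented producer, with serialization layered on top by the caller) can be sketched roughly as follows; the interfaces below are illustrative stand-ins for the sake of the discussion, not the API that was eventually shipped:

```java
import java.nio.charset.StandardCharsets;

// Callers serialize before handing records to the producer, so the
// producer interface itself stays simple bytes in / bytes out.
interface Serializer<T> {
    byte[] serialize(T value);
}

class StringSerializer implements Serializer<String> {
    public byte[] serialize(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }
}

class BytesOnlyProducer {
    // No serialization semantics inside the producer: it only moves bytes.
    public void send(String topic, byte[] key, byte[] value) {
        // ... hand the raw bytes to the partitioner / network layer ...
    }
}
```

Under this split, a "mixed serialization with marker" scheme would simply be one more `Serializer` implementation rather than a special case inside the producer.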
Auto rebalance is already turned off in 0.8.1.
On 3/18/14 5:56 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
Thanks for giving the 0.8.1 release a spin! A few people have reported bugs with delete topic https://issues.apache.org/jira/browse/KAFKA-1310 and also the automatic leader
+1 on Jun's suggestion.
On 2/10/14 2:01 PM, Jun Rao jun...@gmail.com wrote:
I actually prefer to see those at INFO level. The reason is that the config system in an application can be complex. Some configs can be overridden in different layers and it may not be easy to determine what the final
Is this a bug? Can an unclean shutdown leave the index in a corrupt state from which the broker cannot start (is that the expected behavior)?
On 10/14/13 4:52 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
It is possible that the unclean broker shutdown left the index in an
inconsistent state. You can delete the
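One common recovery approach, assuming the corruption is limited to the index files, is to remove the .index files under the affected log directories and let the broker rebuild them from the .log segments on the next startup. A rough sketch (paths and layout are illustrative):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DropIndexFiles {
    // Delete every *.index file under the log directory; the broker can
    // rebuild them from the corresponding .log segments when it restarts.
    // The .log segment files holding the actual messages are untouched.
    static int dropIndexes(Path logDir) throws IOException {
        List<Path> indexes;
        try (Stream<Path> files = Files.walk(logDir)) {
            indexes = files.filter(p -> p.toString().endsWith(".index"))
                           .collect(Collectors.toList());
        }
        for (Path p : indexes) {
            Files.delete(p);
        }
        return indexes.size();
    }
}
```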
We already have a JIRA for auto rebalance. I would be working on this soon.
KAFKA-930 https://issues.apache.org/jira/browse/KAFKA-930
On 10/11/13 5:39 PM, Guozhang Wang wangg...@gmail.com wrote:
Hello Siyuan,
For the automatic leader re-election, yes, we are considering making it work. Could
assigning new partitions to brokers which have more space available). I expect this will be something to prioritize in future versions as well.
Jason
On Wed, Oct 2, 2013 at 1:00 PM, Sriram Subramanian
srsubraman...@linkedin.com wrote:
I agree that we need a unique id and have something
I did take a look at KAFKA-1008 a while back and added some comments.
On 9/9/13 3:52 PM, Jay Kreps jay.kr...@gmail.com wrote:
Cool can we get a reviewer for KAFKA-1008 then? I can take on the other
issue for the checkpoint files.
-Jay
On Mon, Sep 9, 2013 at 3:16 PM, Neha Narkhede
I have a few questions before I can help you with your issue -
1. Which version/build of Kafka are you using?
2. Is the current status that you provided below before or after issuing
the reassign replica command?
3. If the answer to 2 above is after issuing the command, how long was
it after which
We need to first decide on the right behavior before optimizing on the
implementation.
A few key goals that I would put forward are -
1. Decoupling the compression codec of the producer from that of the log
2. Ensuring message validity by the server on receiving bytes. This is
done by the iterator today and
One thing to note is that we do support controlled shutdown as part of the regular shutdown hook in the broker. The wiki was not very clear on this, and I have updated it accordingly. You can turn on controlled shutdown by setting controlled.shutdown.enable to true in the Kafka config. This will
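For reference, the broker setting mentioned above would go in the broker configuration (server.properties):

```properties
# Migrate partition leadership away from this broker before it shuts down
controlled.shutdown.enable=true
```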
We need to improve how the metadata caching works in Kafka. Currently, we have multiple places where we send the updated metadata from the controller to the individual brokers when the state of the metadata changes. This is hard to track. What we need to implement is to let the metadata structure
. The site can drift over time.
Joel
On Fri, Jun 28, 2013 at 2:49 PM, Sriram Subramanian
srsubraman...@linkedin.com wrote:
On 6/28/13 2:48 PM, Sriram Subramanian srsubraman...@linkedin.com wrote:
1. I have moved the FAQ to a wiki. I have separated the sections into producer-, consumer- and broker-related questions. I would still need to add the replication FAQ. The main FAQ will now link to this. Let me know if you guys
Looks much better.
1. We need to update FAQ for 0.8
2. We should probably have a separate section for implementation.
3. The migration tool explanation seems to be hard to get to.
On 6/27/13 5:40 PM, Jay Kreps jay.kr...@gmail.com wrote:
Hey Folks,
I did a pass on the website. Changes:
1.
The messages will be grouped by their destination broker and further grouped by topic/partition. The send then happens to each broker with a list of topic/partitions and messages for them and waits for an acknowledgement from each broker. This happens sequentially. So, the messages are
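The two-level grouping described above can be sketched as follows; the record type and the leader map here are illustrative stand-ins, not the real producer's internal structures:

```java
import java.util.*;

public class BatchGrouping {
    record Rec(String topic, int partition, byte[] value) {}

    // Group records first by destination broker (the partition leader),
    // then by topic-partition, so each broker receives one request
    // carrying all of its topic-partitions and their messages.
    static Map<Integer, Map<String, List<Rec>>> group(List<Rec> records,
                                                      Map<String, Integer> leaderByTp) {
        Map<Integer, Map<String, List<Rec>>> byBroker = new HashMap<>();
        for (Rec r : records) {
            String tp = r.topic() + "-" + r.partition();
            int broker = leaderByTp.get(tp);        // leader lookup from metadata
            byBroker.computeIfAbsent(broker, b -> new HashMap<>())
                    .computeIfAbsent(tp, k -> new ArrayList<>())
                    .add(r);
        }
        return byBroker;
    }
}
```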
Hey Jason,
On failure, the producer initiates a metadata request to refresh its state and should issue subsequent requests to the new leader. The errors that you see should only happen once per topic partition per producer. Let me know if this is not what you see. On the producer end you should
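The failure path described above can be sketched as a retry loop: on a send error the producer refreshes its metadata and retries against the (possibly new) leader. Every type here is an illustrative stand-in, not the actual client internals:

```java
public class RetryingSend {
    interface Metadata { int leaderFor(String topicPartition); }
    interface MetadataSource { Metadata refresh(); }
    interface Network { void send(int brokerId, byte[] payload) throws Exception; }

    static void sendWithRefresh(String tp, byte[] payload, MetadataSource source,
                                Network net, int maxAttempts) throws Exception {
        Metadata md = source.refresh();
        for (int attempt = 1; ; attempt++) {
            try {
                net.send(md.leaderFor(tp), payload);   // send to current leader
                return;
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e;   // give up after the last retry
                md = source.refresh();                 // refresh metadata, pick up new leader
            }
        }
    }
}
```

This is why each producer should log the error only around the leader change: once the refreshed metadata points at the new leader, subsequent sends succeed without further errors.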
Hi Jason,
A rolling bounce will create an imbalance in the leader distribution across the brokers, which is not ideal. We do plan to have the preferred leader election tool integrated into Kafka so that it periodically balances the leader count across the brokers in the cluster. For now