Congrats Becket!
-Harsha
On Mon, Oct 31, 2016 at 2:13 PM Rajini Sivaram
wrote:
> Congratulations, Becket!
>
> On Mon, Oct 31, 2016 at 8:38 PM, Matthias J. Sax
> wrote:
>
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA512
> >
> > Congrats!
> >
> > On 10/31/16 11:01 AM, Renu Tewari wrote:
>
> On Nov 1, 2016, at 10:54 AM, huxi (JIRA) wrote:
>
>
>[
> https://issues.apache.org/jira/browse/KAFKA-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624155#comment-15624155
> ]
>
> huxi commented on KAFKA-4360:
> -
>
> Ex
Hi,
Could someone discuss this in KAFKA-4360? Thanks.
> On Nov 1, 2016, at 10:54 AM, huxi (JIRA) wrote:
>
>
> [
> https://issues.apache.org/jira/browse/KAFKA-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624155#comment-15624155
> ]
>
> huxi comment
KAFKA-3802 does seem plausible; I had to restart the brokers again after
the 0.10.1.0 upgrade to change some JVM settings; maybe that touched the
mtime on the files? Not sure why that would make them *more* likely to be
deleted, though, since their mtime should've gone into the future, not into
the
Hi, James,
Thanks for testing and reporting this. What you observed is actually not
the expected behavior in 0.10.1 based on the design. The way that retention
works in 0.10.1 is that if a log segment has at least one message with a
timestamp, we will use the largest timestamp in that segment to d
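The retention rule described above can be sketched as a small decision function. This is an illustration of the described behavior, not Kafka's actual broker code; the function and parameter names are made up for the example.

```python
RETENTION_MS = 336 * 60 * 60 * 1000  # log.retention.hours=336, as in the report above

def segment_expired(largest_msg_timestamp_ms, file_mtime_ms, now_ms):
    """Decide whether a log segment is past retention (a sketch).

    If the segment contains at least one message with a timestamp,
    the largest message timestamp in the segment drives retention;
    otherwise fall back to the segment file's modification time,
    as brokers did before 0.10.1.
    """
    if largest_msg_timestamp_ms is not None:
        reference = largest_msg_timestamp_ms
    else:
        reference = file_mtime_ms
    return now_ms - reference > RETENTION_MS
```

Note that segments written before the upgrade carry no message timestamps, so they still fall back to file mtime; that is why touching those files (for example, during a broker restart) can still influence when they expire.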
Incidentally, I'd like to note that this did *not* occur in my testing
environment (which didn't expire any unexpected segments after upgrading),
so if it is a feature, it's certainly a hit-or-miss one.
On Mon, Oct 31, 2016 at 4:14 PM, James Brown wrote:
> I just finished upgrading our main prod
I just finished upgrading our main production cluster to 0.10.1.0 (from
0.9.0.1) with an on-line rolling upgrade, and I noticed something strange —
the leader for one of our big partitions just decided to expire all of the
logs from before the upgrade. I have log.retention.hours set to 336 in my
co
Hi all,
We have some integration tests running on EC2. The tests send some
messages to Kafka (0.10.0.0) running on the same EC2 instance. The
topic is created automatically when it receives its first message. However,
the tests have become flaky and we are seeing this error:
NetworkClient:600 - Er
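When a topic is auto-created on first use, the first sends can fail transiently until leader election completes. A common workaround (a sketch, not something from this thread) is to pre-create the topic, or to retry transient send failures with backoff, roughly like this; `send_fn` is a placeholder for whatever send call the test makes, not a real client API:

```python
import time

def send_with_retry(send_fn, record, retries=5, backoff_s=0.5):
    """Retry a send that may fail while an auto-created topic's
    leader is still being elected (illustrative helper).

    send_fn: any callable that raises on transient failure.
    """
    for attempt in range(retries):
        try:
            return send_fn(record)
        except Exception:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
```

Pre-creating the topic before the test runs avoids the race entirely and is usually the more robust fix for CI.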
Congratulations, Becket!
On Mon, Oct 31, 2016 at 8:38 PM, Matthias J. Sax
wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> Congrats!
>
> On 10/31/16 11:01 AM, Renu Tewari wrote:
> > Congratulations Becket!! Absolutely thrilled to hear this. Well
> > deserved!
> >
> > regards renu
>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Congrats!
On 10/31/16 11:01 AM, Renu Tewari wrote:
> Congratulations Becket!! Absolutely thrilled to hear this. Well
> deserved!
>
> regards renu
>
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy
> wrote:
>
>> The PMC for Apache Kafka has invi
Congrats Becket !
On Mon, Oct 31, 2016 at 7:46 PM, Sriram Subramanian wrote:
> Congratulations!
>
> On Mon, Oct 31, 2016 at 12:23 PM, Ismael Juma wrote:
>
>> Congratulations Becket. :)
>>
>> Ismael
>>
>> On 31 Oct 2016 1:44 pm, "Joel Koshy" wrote:
>>
>> > The PMC for Apache Kafka has invited Ji
Congratulations!
On Mon, Oct 31, 2016 at 12:23 PM, Ismael Juma wrote:
> Congratulations Becket. :)
>
> Ismael
>
> On 31 Oct 2016 1:44 pm, "Joel Koshy" wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> > committer and we are pleased to announce that he has ac
Hey Guys
I have noticed similar issues when the network goes down while Kafka
Streams apps are starting, especially when the store has initialized but
task initialization is not complete; when the network comes back, the
rebalance fails with the above error and I had to restart. As I run many
partitions an
Hi,
One of my brokers' logs doesn't show any replica fetcher thread instances. I took a
thread dump and found no threads named "ReplicaFetcherThread*". I do see
these threads on the other broker instances.
This broker is also not part of the ISR. Any ideas on what could be wrong?
Thanks,
Ritesh
Congratulations Becket. :)
Ismael
On 31 Oct 2016 1:44 pm, "Joel Koshy" wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significant contributions to Kafka over the last two years
Congrats!
On Mon, Oct 31, 2016 at 8:30 PM, Becket Qin wrote:
> Thanks everyone! It is really awesome to be working with you on Kafka!!
>
> On Mon, Oct 31, 2016 at 11:26 AM, Jun Rao wrote:
>
> > Congratulations, Jiangjie. Thanks for all your contributions to Kafka.
> >
> > Jun
> >
> > On Mon, Oc
Thanks everyone! It is really awesome to be working with you on Kafka!!
On Mon, Oct 31, 2016 at 11:26 AM, Jun Rao wrote:
> Congratulations, Jiangjie. Thanks for all your contributions to Kafka.
>
> Jun
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
>
> > The PMC for Apache Kafka has in
Congratulations Becket!!
Thanks,
Bharat
On Mon, Oct 31, 2016 at 11:26 AM, Jun Rao wrote:
> Congratulations, Jiangjie. Thanks for all your contributions to Kafka.
>
> Jun
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin t
Congratulations, Jiangjie. Thanks for all your contributions to Kafka.
Jun
On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significa
Congrats!
On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significant contributions to Kafka over the last two years.
> He has been d
Congrats!
On Mon, Oct 31, 2016 at 10:54 AM, Onur Karaman <
okara...@linkedin.com.invalid> wrote:
> Congrats Becket!
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> > committer and we are pleased to announce t
Congratulations Becket!! Absolutely thrilled to hear this. Well deserved!
regards
renu
On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has m
Congrats Becket!
--Vahid
From: Jason Gustafson
To: Kafka Users
Cc: d...@kafka.apache.org
Date: 10/31/2016 10:56 AM
Subject: Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin
Great work, Becket!
On Mon, Oct 31, 2016 at 10:54 AM, Onur Karaman <
okara...@linkedin.com.inv
Congrats, Becket!
-James
> On Oct 31, 2016, at 10:35 AM, Joel Koshy wrote:
>
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significant contributions to Kafka over the last two years
Congratulations! Well deserved :)
On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significant contributions to Kafka over the last tw
Great work, Becket!
On Mon, Oct 31, 2016 at 10:54 AM, Onur Karaman <
okara...@linkedin.com.invalid> wrote:
> Congrats Becket!
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> > committer and we are pleased to
Congrats Becket!
On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significant contributions to Kafka over the last two years.
> He has
Hi Patrick,
As far as I understand, you can achieve that level of isolation by assigning
the BI/Marketing consumers to distinct consumer *groups* for the topic(s),
which you can, for instance, name accordingly. Within each consumer group you
can then have a Marketing and a BI consumer that will be abl
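The group semantics above can be sketched as a toy model (purely illustrative, not the Kafka assignment protocol): each group independently spreads the topic's partitions over its members, so every group as a whole sees all the data while members within a group split it.

```python
def assign(partitions, groups):
    """Toy consumer-group assignment (not real Kafka code).

    Every group independently round-robins the topic's partitions
    over its members: groups don't share consumption, members do.
    """
    assignment = {}
    for group, members in groups.items():
        for i, p in enumerate(partitions):
            member = members[i % len(members)]
            assignment.setdefault((group, member), []).append(p)
    return assignment

# Two independent groups ("bi" and "marketing") both consume the full topic;
# within "bi", the two members split the partitions between them.
result = assign([0, 1, 2, 3], {"bi": ["bi-1", "bi-2"], "marketing": ["mk-1"]})
```

The group names and member names here are invented for the example; in a real client the separation comes solely from configuring a distinct group.id per consumer group.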
The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
committer and we are pleased to announce that he has accepted!
Becket has made significant contributions to Kafka over the last two years.
He has been deeply involved in a broad range of KIP discussions and has
contributed sev
Just make sure they are not in the same consumer group by setting a unique
value of group.id for each independent consumer.
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Mon, Oct 31, 2016 at 9:42 AM, Patrick Viet
wrote:
> Hi,
>
> I
Yes! This was what I was looking for:
"
If all the consumer instances have the same consumer group, then the
records will effectively be load balanced over the consumer instances.
If all the consumer instances have different consumer groups, then each
record will be broadcast to all the consumer p
Hi Patrick,
Can you use separate consumer groups to accomplish what you're looking for?
Scroll to the "consumers" section on this document for a more detailed
description:
https://kafka.apache.org/documentation.html#introduction
On Mon, Oct 31, 2016 at 11:42 AM, Patrick Viet
wrote:
> Hi,
>
>
Hi,
I've done some searching and can't find much on this topic.
Here is what I'm looking at:
So a usual usage of Kafka is to push messages to a topic and then have
one or several consumers consume it (one per partition) in a "unique" way: a
message is consumed only once, and if it's consumed multi
>> we cannot just block the whole topic because there could be partitions on other
>> brokers which are at least available for reads
That makes sense. So in order to not stop reading or writing from good
partitions we'd need either:
1. This circuit breaker functionality to be in the client itsel
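A client-side, per-partition circuit breaker as discussed could look roughly like this. This is a sketch under the assumption of a simple consecutive-failure threshold; the class and method names are hypothetical, not any Kafka client API:

```python
class PartitionCircuitBreaker:
    """Trip sends/reads to an unavailable partition while leaving the
    topic's other partitions usable (toy model, not a Kafka API)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = {}  # partition -> consecutive failure count

    def record_failure(self, partition):
        self.failures[partition] = self.failures.get(partition, 0) + 1

    def record_success(self, partition):
        self.failures[partition] = 0  # any success resets the breaker

    def is_open(self, partition):
        # "Open" means requests to this one partition are blocked;
        # partitions on healthy brokers are unaffected.
        return self.failures.get(partition, 0) >= self.threshold
```

A real implementation would also need a half-open state with a timeout so the client periodically probes whether the partition has recovered.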
If you use the Confluent Platform, it comes with the kafka-avro-console-producer
script. It integrates with the schema registry to help you validate your Avro
data.
On Mon, Oct 31, 2016 at 4:28 AM, ZHU Hua B
wrote:
> Hi,
>
>
> Is there a method to send Avro payloads within Kafka messages using
> sc
Dear all,
I am aware of the fact that Kafka is pull based; however, I've been curious: is
it possible to modify a message after the consumer's fetch request and before the
message is queued on the consumer side?
Thanks in advance,
Dominik
Hi Frank,
This usually means that another StreamThread has the lock for the state
directory. So it would seem that one of the StreamThreads hasn't shut down
cleanly. If it happens again can you please take a Thread Dump so we can
see what is happening?
Thanks,
Damian
On Sun, 30 Oct 2016 at 10:52
Hi,
Is there a method to send Avro payloads within Kafka messages using the script
"kafka-console-producer.sh"? Thanks!
Best Regards
Johnny
R Krishna,
A particular set of exceptions has to be defined.
At least the ones I know of: timeout exception, network exception.
I am not sure I got the point about auto-recovery.
About incoming messages, we can return an exception saying that
the client cannot access the topic/broker/partition, and the user can properl