On Fri, Feb 21, 2020 at 2:08 PM Vincent Rischmann
> wrote:
>
> > yeah that's not a bad idea.
> >
> > So to recap, it's enough to set follower.replication.throttled.replicas
> > for every partition assigned to that broker ? I'm assuming that during
> > bootstrap the
> list of partitions that was on the broker that you
> replaced. That way you don't need to apply throttle to every topic
> partition in the cluster.
>
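For reference, a minimal sketch of applying that throttle programmatically,
assuming a client and brokers recent enough for incrementalAlterConfigs
(Kafka 2.3+); the topic name, the "0:2,3:2" partition list, the bootstrap
address and the rate are placeholders rather than values from this thread.
The same thing can be done with kafka-configs.sh.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class ApplyFollowerThrottle {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            // Throttle only the replicas being rebuilt on the replaced broker (id 2):
            // "partition:brokerId" pairs, e.g. partitions 0 and 3 of this topic.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "mytopic");
            AlterConfigOp replicas = new AlterConfigOp(
                    new ConfigEntry("follower.replication.throttled.replicas", "0:2,3:2"),
                    AlterConfigOp.OpType.SET);

            // Cap the follower fetch rate on broker 2, in bytes per second.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "2");
            AlterConfigOp rate = new AlterConfigOp(
                    new ConfigEntry("follower.replication.throttled.rate", "10485760"),
                    AlterConfigOp.OpType.SET);

            admin.incrementalAlterConfigs(Map.of(
                    topic, List.of(replicas),
                    broker, List.of(rate))).all().get();
        }
    }
}

Both settings need to be in place for the throttle to have any effect, and
both should be cleared once broker 2 has caught up.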
> On Fri, Feb 21, 2020 at 4:48 AM Vincent Rischmann
> wrote:
>
> > Hi Brian,
> >
> > thanks for the r
n-house.
>
> Best,
> Brian
>
>
>
> On Thu, Feb 20, 2020 at 5:46 AM Vincent Rischmann
> wrote:
>
> > Hello,
> >
> > We have a cluster of 10 brokers.
> >
> > Recently we replaced some broken HDDs on a single broker (id 2 for future
> > reference)
Hello,
We have a cluster of 10 brokers.
Recently we replaced some broken HDDs on a single broker (id 2 for future
reference); all data on this broker was erased.
We have a minimum replication factor of 3 on all our topics, so no data was lost.
To add the broker to the cluster again I configured
restarted the broker which was the controller at the time just to be sure.
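To figure out which partitions would need throttling, something like the
sketch below could list every partition that has a replica on the replaced
broker and print it in the partition:broker form that
follower.replication.throttled.replicas expects. Broker id 2 is taken from
the description above; the bootstrap address and the rest are illustrative,
and it assumes a recent AdminClient.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;

public class ReplicasOnReplacedBroker {
    public static void main(String[] args) throws Exception {
        int replacedBroker = 2; // the broker whose disks were replaced
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            Set<String> names = admin.listTopics().names().get();
            Map<String, TopicDescription> topics = admin.describeTopics(names).all().get();

            for (TopicDescription topic : topics.values()) {
                // Keep only the partitions that have a replica on the replaced broker.
                String throttled = topic.partitions().stream()
                        .filter(p -> p.replicas().stream()
                                .anyMatch(node -> node.id() == replacedBroker))
                        .map(p -> p.partition() + ":" + replacedBroker)
                        .collect(Collectors.joining(","));
                if (!throttled.isEmpty()) {
                    System.out.println(topic.name() + " -> " + throttled);
                }
            }
        }
    }
}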
On Wed, Dec 18, 2019, at 00:33, Vincent Rischmann wrote:
> The broker id wasn't reused no, it's a new id.
>
> Unfortunately we can't afford to bring down the cluster, so I'll have to
> do this with the cluster online.
> > On Dec 16, 2019, at 3:00 AM, Vincent Rischmann wrote:
> >
> > It doesn't exist anymore, we replaced it after a hardware failure.
> >
> > Thinking about it, I don't think I reassigned the partitions for broker 5 to
> > the new broker before deleting these t
13, 2019, at 8:55 AM, Vincent Rischmann wrote:
> >
> > Hi,
> >
> > I've deleted a bunch of topics yesterday on our cluster but some are now
> > stuck in "marked for deletion".
> >
> > * I've looked in the data directory of every broker and ther
Hi,
I've deleted a bunch of topics yesterday on our cluster but some are now stuck
in "marked for deletion".
* I've looked in the data directory of every broker and there's no data left
for the topics, the directory doesn't exist anymore.
* In ZooKeeper the znode `brokers/topics/mytopic` still exists.
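For what it's worth, a small sketch (placeholders for the connect string and
topic name) that checks the two znodes involved, using the plain ZooKeeper
Java client: /admin/delete_topics/<topic> is where the controller keeps a
deletion that is still "marked for deletion", and /brokers/topics/<topic> is
the topic's metadata.

import org.apache.zookeeper.ZooKeeper;

public class CheckTopicZnodes {
    public static void main(String[] args) throws Exception {
        String topic = "mytopic"; // placeholder
        ZooKeeper zk = new ZooKeeper("zk1:2181", 10000, event -> { }); // placeholder connect string
        try {
            // Topic metadata still registered?
            System.out.println("/brokers/topics/" + topic + " present: "
                    + (zk.exists("/brokers/topics/" + topic, false) != null));
            // Deletion still queued for the controller?
            System.out.println("/admin/delete_topics/" + topic + " present: "
                    + (zk.exists("/admin/delete_topics/" + topic, false) != null));
        } finally {
            zk.close();
        }
    }
}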
Hello,
I have a cluster still on 0.11.0.2 that we're planning to upgrade to 2.2.1
eventually (I don't want to upgrade to 2.3.0 just yet).
I'm aware of the documentation at
https://kafka.apache.org/22/documentation.html#upgrade and I plan to follow
each step.
I'm just wondering if folks
ly if your
> topics are very different in terms of system resource usage.
>
> I hope that helps you out a bit. Good luck!
>
> On 3 May 2018 at 03:52, Vincent Rischmann <vinc...@rischmann.fr> wrote:
>
> > Hi,
> >
> > I'm wondering if there is a way to tel
Hi,
I'm wondering if there is a way to tell Kafka to spread the log file
deletion when decreasing the retention time of a topic, and if not, if
it would make sense.
I'm asking because this afternoon, after decreasing the retention time
from 2 months to 1 month on 4 of my topics, the whole cluster
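One possible workaround, sketched below with made-up names, step size and
timing, and assuming a client recent enough for incrementalAlterConfigs: lower
retention.ms gradually instead of jumping from 2 months to 1 month in one go,
so that each pass of the log deleter only has a small slice of segments to
remove.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class GradualRetentionDecrease {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address

        long current = TimeUnit.DAYS.toMillis(60); // roughly 2 months
        long target = TimeUnit.DAYS.toMillis(30);  // roughly 1 month
        long step = TimeUnit.DAYS.toMillis(1);     // shrink by one day per pass

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "mytopic");
            for (long retention = current - step; retention >= target; retention -= step) {
                AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("retention.ms", Long.toString(retention)),
                        AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
                // Let the log deleter catch up before shrinking further.
                Thread.sleep(TimeUnit.MINUTES.toMillis(10));
            }
        }
    }
}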
end offset 1236587 in 9547
> > ms (kafka.log.Log)
> >
> > That line says that that partition took 9547ms (9.5 seconds) to
> > load/recover. We had large partitions that took 30 minutes to recover, on
> > that first boot. When I used strace to see what I/O the brok
If anyone else has any idea, I'd love to hear it.
Meanwhile, I'll resume upgrading my brokers and hope they don't crash and/or
take so much time to recover.
On Sat, Jan 6, 2018, at 7:25 PM, Vincent Rischmann wrote:
> Hi,
>
> just to clarify: this is the cause of the crash
se for corruption ?
> > >
> > > This part of code was recently changed by :
> > >
> > > KAFKA-6324; Change LogSegment.delete to deleteIfExists and harden log
> > > recovery
> > >
> > > Cheers
> > >
> > > On Sat, Jan 6,
er offset / log cleaner bugs which caused us similarly
> long delays. That was easily visible by watching the log cleaner activity in
> the logs, and in our monitoring of partition sizes watching them go down,
> along with IO activity on the host for those files.
>
> On Sat, Jan 6,
Hello,
So I'm upgrading my brokers from 0.10.1.1 to 0.11.0.2 to fix this bug
https://issues.apache.org/jira/browse/KAFKA-4523
Unfortunately while stopping one broker, it crashed exactly because of
this bug. No big deal usually, except after restarting Kafka in 0.11.0.2
the recovery is taking a
> On Wed, Jun 28, 2017 at 12:24 PM, Bill Bejeck <b...@confluent.io> wrote:
>
> > Thanks for the info Vincent.
> >
> > -Bill
> >
> > On Wed, Jun 28, 2017 at 12:19 PM, Vincent Rischmann <m...@vrischmann.me>
> > wrote:
> >
> >> I'm not
On Wed, Jun 28, 2017, at 12:17 AM, Bill Bejeck wrote:
> Thanks Vincent.
>
> That's a good start for now.
>
> If you get a chance to forward some logs that would be great.
>
> -Bill
>
> On Tue, Jun 27, 2017 at 6:10 PM, Vincent Rischmann <m...@vrischmann.me>
> wrote:
> Hi Vincent,
>
> Thanks for reporting this issue. Could you give us some more details
> (number topics, partitions per topic and the structure of your Kafka
> Streams application) so we attempt to reproduce and diagnose the issue?
>
> Thanks!
> Bill
>
> On 201
Hello. So I had a weird problem this afternoon. I was deploying a
streams application and wanted to delete the already existing internal
state data, so I ran kafka-streams-application-reset.sh to do it, as
recommended. It wasn't the first time I ran it and it had always worked
before, in staging or in
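For the local state specifically, recent Streams releases also expose
KafkaStreams#cleanUp(), which wipes this instance's state directory before the
application starts; it does not replace the reset tool for internal topics and
committed offsets. A minimal sketch, with placeholder application id,
bootstrap address and topology:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class ResetLocalStateAndStart {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");  // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.cleanUp(); // wipes this instance's local state directory; call before start()
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}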
Hi,
I've been reading about log flush recommendations today and I have a
question:
Up until now I've been basing my production configuration for the log
flush on this: http://kafka.apache.org/documentation/#prodconfig
It works fine, but then I saw here
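Not an answer, but if it helps to see what is actually in effect, here is a
small AdminClient sketch (topic name and bootstrap address are placeholders)
that prints a topic's flush.messages and flush.ms values together with where
they come from (topic override vs. broker default):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.List;
import java.util.Properties;

public class ShowFlushConfigs {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "mytopic");
            Config config = admin.describeConfigs(List.of(topic)).all().get().get(topic);
            for (String name : List.of("flush.messages", "flush.ms")) {
                ConfigEntry entry = config.get(name);
                // source() tells you whether this is a topic override or a broker default.
                System.out.println(name + " = " + entry.value() + " (" + entry.source() + ")");
            }
        }
    }
}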
', but apparently I screwed up when creating the topics; I
should have checked that first.
Thanks for your help.
2014/1/6 Jun Rao jun...@gmail.com
How many replicas do you have on that topic? What's the output of list
topic?
Thanks,
Jun
On Mon, Jan 6, 2014 at 1:45 AM, Vincent Rischmann vinc
one broker failure with 3 replicas. Do you see the following in
the controller log?
No broker in ISR is alive for ... There's potential data loss.
Thanks,
Jun
On Fri, Jan 3, 2014 at 1:23 AM, Vincent Rischmann zecmerqu...@gmail.com
wrote:
Hi all,
We have a cluster of 3 0.8 brokers
Hi all,
We have a cluster of 3 0.8 brokers, and this morning one of the brokers
crashed.
It is a test broker, and we stored the logs in /tmp/kafka-logs. All topics
in use are replicated on the three brokers.
You can guess the problem: when the broker rebooted, it wiped all the data
in the logs.
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar compiled
with Scala 2.10.
I have designed my program with a singleton class which holds a map of
(consumer group, ConsumerConnector) and a map of (topic, Producer).
This singleton class provides two methods, send(topic,
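A rough sketch of that kind of singleton against the old 0.8 Scala clients;
the class name, broker/ZooKeeper addresses and String serializer are
assumptions, and lazily creating one Producer per topic and one
ConsumerConnector per group is just one way to structure it:

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

public final class KafkaHub {
    private static final KafkaHub INSTANCE = new KafkaHub();

    // topic -> producer and consumer group -> connector, as described above.
    private final Map<String, Producer<String, String>> producers = new ConcurrentHashMap<>();
    private final Map<String, ConsumerConnector> consumers = new ConcurrentHashMap<>();

    private KafkaHub() { }

    public static KafkaHub getInstance() { return INSTANCE; }

    public void send(String topic, String message) {
        Producer<String, String> producer = producers.computeIfAbsent(topic, t -> {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092"); // placeholder
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            return new Producer<>(new ProducerConfig(props));
        });
        producer.send(new KeyedMessage<>(topic, message));
    }

    public ConsumerConnector connectorFor(String group) {
        return consumers.computeIfAbsent(group, g -> {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181"); // placeholder
            props.put("group.id", g);
            return Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        });
    }
}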
On 11/12/2013 10:34, Vincent Rischmann wrote:
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar
compiled with Scala 2.10.
I have designed my program with a singleton class which holds a map
of (consumer group, ConsumerConnector) and a map of (topic, Producer
the creation of the
stream.
Thanks,
Jun
On Wed, Dec 11, 2013 at 5:42 AM, Vincent Rischmann vinc...@rischmann.fr wrote:
On 11/12/2013 10:34, Vincent Rischmann wrote:
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar compiled
with Scala 2.10.
I have designed my