Hi there, I'm joining the party a little late on this one, but this is
something I encountered at work and I think I can shed some light on the
problem at hand. I filed a bug report
https://issues.apache.org/jira/browse/KAFKA-10207 and also submitted a pull
request https://github.com/apache/kafka/p
Hi guys,
I just wanted to let you know that we solved our issue; it was indeed related
to the volume switching process and some permissions mess.
Thanks, everyone, for your efforts in finding the root cause.
Now, as a question for a possible improvement: should Kafka ever allow
largestTime to be 0 in an
Hi,
What I can see from the configurations:
> log.dir = /tmp/kafka-logs (default)
> log.dirs = /var/kafkadata/data01/data
From the documentation, log.dir is only used if log.dirs is not set, so
/var/kafkadata/data01/data is the folder used for logs.
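For reference, a minimal sketch of the relevant broker settings, using the values quoted above (the comments are mine, not from JP's config):

```
# server.properties excerpt — when both are set, log.dirs wins
log.dir=/tmp/kafka-logs              # fallback, only used if log.dirs is unset
log.dirs=/var/kafkadata/data01/data  # effective log directory
```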
Regards
On Tue., May 5, 2020 at 08:5
Hi guys, still following your discussion even if it's out of my reach.
Just been noticing that you use /tmp/ for your logs, dunno if it's a good
idea :o https://issues.apache.org/jira/browse/KAFKA-3925
On Mon., May 4, 2020 at 19:40, JP MB wrote:
> Here are the startup logs from a deployment wher
Here are the startup logs from a deployment where we lost 15 messages in
topic-p:
https://gist.github.com/josebrandao13/81271140e59e28eda7aaa777d2d3b02c
.timeindex files state before the deployment:
- Partitions with messages: timestamp mismatch
- Partitions without messages: permission denied
.tim
Hi guys,
I'm gonna get back to this today; I have mixed feelings about the
volumes being the cause. This volume switching has been around for quite some
time, in a lot of clusters, and we only started noticing this problem when
we updated some of them. Also, this only happens in *a few* of those
.tim
Good luck JP, do try it with the volume switching commented out, and see
how it goes.
On Fri, May 1, 2020 at 6:50 PM JP MB wrote:
> Thank you very much for the help anyway.
>
> Best regards
>
> On Fri, May 1, 2020, 00:54 Liam Clarke-Hutchinson <
> liam.cla...@adscale.co.nz>
> wrote:
>
> > So the
Yes, that's a clean shutdown log, with a few exceptions that are expected
(connected clients get disconnected during shutdown). Adding fsync after Kafka
shutdown should force the OS to flush buffers to disk. I somehow suspect there are
some problems during unmounting/mounting disks. However, I don't know
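To illustrate what "add fsync" means here, a minimal Python sketch (the file name and 12-byte entry are illustrative) of pushing a file's contents through the OS page cache to stable storage before a volume is detached:

```python
import os
import tempfile

# Write a dummy index file and force it to disk, the way index data
# should be persisted before the volume is unmounted/detached.
path = os.path.join(tempfile.gettempdir(), "example.timeindex")
with open(path, "wb") as f:
    f.write(b"\x00" * 12)    # one 12-byte time-index entry (timestamp + offset)
    f.flush()                # drain the userspace buffer
    os.fsync(f.fileno())     # ask the kernel to flush the page cache to disk
print(os.path.getsize(path))  # → 12
```

If the page cache is not flushed before the unmount, journalled-but-unflushed index updates are exactly the kind of thing that can go missing.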
Thank you very much for the help anyway.
Best regards
On Fri, May 1, 2020, 00:54 Liam Clarke-Hutchinson
wrote:
> So the logs show a healthy shutdown, so we can eliminate that as an issue.
> I would look next at the volume management during a rollout based on the
> other error messages you had e
So the logs show a healthy shutdown, so we can eliminate that as an issue.
I would look next at the volume management during a rollout based on the
other error messages you had earlier about permission denied etc. It's
possible there's some journalled but not flushed changes in those time
indexes,
It took me a bit because I needed logs of the server shutting down when this
occurs. Here they are; I can see some errors:
https://gist.github.com/josebrandao13/e8b82469d3e9ad91fbf38cf139b5a726
Regarding systemd, the closest I could find to TimeoutStopSec was
DefaultTimeoutStopUSec=1min 30s, which looks
I'd also suggest eyeballing your systemd conf to verify that someone hasn't
set a very low TimeoutStopSec, or that KillSignal/RestartKillSignal haven't
been configured to SIGKILL (confusingly named, imo, as the default for
KillSignal is SIGTERM).
Also, the Kafka broker logs at shutdown look very d
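For anyone checking their own setup: a hedged example of what such a unit override might look like (file path and values are illustrative, not taken from JP's deployment):

```
# /etc/systemd/system/kafka.service.d/override.conf (illustrative)
[Service]
# Give the broker time to close log segments cleanly on stop
TimeoutStopSec=180
# Graceful shutdown; a SIGKILL here would skip the clean-shutdown path
KillSignal=SIGTERM
```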
Hi,
It's quite a complex script generated with Ansible where we use a/b
deployment and honestly, I don't have full knowledge of it. I can share the
general guidelines of what is done:
> - Any old volumes (from previous releases are removed) (named with suffix
> '-old')
> - Detach the volumes attach
Hi,
It does look like index corruption... Can you post the script that stops Kafka?
On Wednesday, April 29, 2020, 06:38:18 PM GMT+2, JP MB
wrote:
>
> Can you try using the console consumer to display messages/keys and
> timestamps ?
> --property print.key=true --property print.timestamp=tru
>
> Can you try using the console consumer to display messages/keys and
> timestamps ?
> --property print.key=true --property print.timestamp=true
There are a lot of messages, so I'm picking one example without and one with a
timeindex entry. All of them have a null key:
Offset 57 CreateTime:1588074808
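Kafka CreateTime values are epoch milliseconds, so converting one to UTC makes the retention math checkable by hand. A sketch using a hypothetical millisecond value (the CreateTime above is truncated, so this is not the actual message's timestamp):

```python
from datetime import datetime, timezone

# Hypothetical CreateTime in epoch milliseconds (illustrative value only)
create_time_ms = 1588074808000
dt = datetime.fromtimestamp(create_time_ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S %Z"))  # → 2020-04-28 11:53:28 UTC
```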
Hmm, how are you doing your rolling deploys?
I'm wondering if the time indexes are being corrupted by unclean
shutdowns. I've
been reading code and the only path I could find that led to a largest
timestamp of 0 was, as you've discovered, where there was no time index.
WRT the corruption - th
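A minimal sketch of why a largest timestamp of 0 is fatal under time-based retention (simplified from Kafka's actual logic; the function name is mine): the segment always looks older than retention.ms and is deleted immediately.

```python
def segment_expired(largest_time_ms: int, now_ms: int, retention_ms: int) -> bool:
    """Simplified time-based retention check: a segment is deletable when
    its newest timestamp is older than retention.ms."""
    return now_ms - largest_time_ms > retention_ms

now_ms = 1588074808000              # hypothetical "now", epoch millis
retention_ms = 48 * 60 * 60 * 1000  # 48-hour retention, as in this thread

# A segment written one hour ago is kept...
print(segment_expired(now_ms - 3_600_000, now_ms, retention_ms))  # → False
# ...but a segment whose largestTime is 0 looks ~50 years old and is deleted.
print(segment_expired(0, now_ms, retention_ms))                   # → True
```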
Can you try using the console consumer to display messages/keys and
timestamps ?
--property print.key=true --property print.timestamp=true
Le mer. 29 avr. 2020 à 13:23, JP MB a écrit :
> The server is in UTC, [2020-04-27 10:36:40,386] was actually my time. On
> the server was 9:36.
> It doesn't
The server is in UTC, [2020-04-27 10:36:40,386] was actually my time. On
the server was 9:36.
It doesn't look like a timezone problem, because it properly cleans other
records at exactly 48 hours.
On Wed., Apr 29, 2020 at 11:26, Goran Sliskovic wrote:
> Hi,
> When lastModifiedTime on that
We are using the console producer, directly on the machines where we are
experiencing the problem. I just inserted 150 messages in a topic and chose
the partition with the most messages to make this analysis, in this case,
partition 15 in broker 1.
The log file:
> kafka-run-class.sh kafka.tools.DumpLo
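Assuming the dump prints time-index entries as `timestamp: <ms> offset: <n>` lines (the usual DumpLogSegments output for .timeindex files), a small parser to spot suspicious zero timestamps; the helper below is mine, not part of Kafka:

```python
import re

# Hypothetical helper: pull (timestamp, offset) pairs out of a
# DumpLogSegments time-index dump and flag zero timestamps.
ENTRY = re.compile(r"timestamp:\s*(\d+)\s+offset:\s*(\d+)")

def parse_timeindex_dump(text: str):
    return [(int(ts), int(off)) for ts, off in ENTRY.findall(text)]

sample = """\
timestamp: 1588074808000 offset: 57
timestamp: 0 offset: 0
"""
entries = parse_timeindex_dump(sample)
print(entries)                            # → [(1588074808000, 57), (0, 0)]
print([e for e in entries if e[0] == 0])  # entries with a zero timestamp
```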
Hi,
When lastModifiedTime on that segment is converted to human-readable time:
Monday, April 27, 2020 9:14:19 AM UTC
In what time zone is the server (IOW, in what time zone is [2020-04-27
10:36:40,386] from the log)?
It looks like largestTime is a property of the log record, and 0 means the log record i
isting on this but does anyone have an idea of how that
> largestTime can be 0 ?
>
> Regards
>
> -- Forwarded message -
> From: JP MB
> Date: Tue., Apr 28, 2020 at 15:36
> Subject: Kafka: Messages disappearing from topics, largestTime=0
> To:
>
>
Hi,
Sorry, guys, for insisting on this, but does anyone have an idea of how that
largestTime can be 0?
Regards
-- Forwarded message -
From: JP MB
Date: Tue., Apr 28, 2020 at 15:36
Subject: Kafka: Messages disappearing from topics, largestTime=0
To:
Hi,
We have messages
Hi,
We have messages disappearing from topics on Apache Kafka versions
2.3, 2.4.0, 2.4.1 and 2.5.0. We noticed this when we do a rolling
deployment of our clusters, and unfortunately it doesn't happen every time,
so it's very inconsistent.
Sometimes we lose all messages inside a topic, other