Hello Xin, hello Jan,
worked perfectly. I built an image based on 0.11.0.1 with the missing
patch applied; cleaning went through and resulted in the expected size.
Thanks a lot for the quick help,
Elmar
On 10/25/2017 1:03 PM, Xin Li wrote:
Hey Elmar,
The only thing you need to do is upgrade; Kafka tracks the cleaned offset
using the cleaner-offset-checkpoint file.
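If you want to watch progress after the upgrade, you can inspect that checkpoint file directly. A minimal sketch for reading it, assuming the plain-text layout I've seen in broker log dirs (a version line, an entry count, then one `topic partition offset` line per partition; treat the format as an assumption, not verified against every broker version):

```python
def parse_cleaner_checkpoint(text):
    """Parse a cleaner-offset-checkpoint file (assumed format:
    version line, entry-count line, then 'topic partition offset' lines)."""
    lines = [l for l in text.strip().splitlines() if l]
    version = int(lines[0])
    count = int(lines[1])
    entries = {}
    for line in lines[2:2 + count]:
        # Topic names cannot contain spaces, so rsplit is safe here.
        topic, partition, offset = line.rsplit(" ", 2)
        entries[(topic, int(partition))] = int(offset)
    return version, entries

sample = """0
2
my-topic 0 123456
my-topic 1 98765
"""
version, entries = parse_cleaner_checkpoint(sample)
print(version, entries)
```

Comparing the stored offsets before and after a cleaner pass should show whether compaction is actually advancing.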
Best,
Xin
________________________________________
Xin Li · Data Engineering · Xin.Li@trivago.com
www.trivago.com · F +49 (0) 211 540 65 115
We're hiring! Check out our vacancies: http://company.trivago.com/jobs/
Court of registration: Amtsgericht Düsseldorf, HRB 51842
Managing directors: Rolf Schrömgens · Malte Siewert · Peter Vinnemeier · Andrej Lehnert · Johannes Thomas
trivago GmbH · Bennigsen-Platz 1 · D – 40474 Düsseldorf
* This email message may contain legally privileged and/or confidential
information. You are hereby notified that any disclosure, copying,
distribution, or use of this email message is strictly prohibited.
On 25.10.17, 12:34, "Elmar Weber" <i...@elmarweber.org> wrote:
Hi,
thanks, I'll give it a try, we run on Kubernetes so it's not a big issue
to replicate the whole env including data.
One question I'd have left:
- How can I force a re-compaction over the whole topic? I guess the log
  cleaner has marked everything so far as not cleanable, so how will it
  recheck the whole log?
Best,
Elmar
On 10/25/2017 12:29 PM, Jan Filipiak wrote:
> Hi,
>
> unfortunately there is nothing trivial you can do here.
> Without upgrading your Kafka brokers you can only bounce the partition
> back and forth between brokers so they compact while the log is still
> small.
>
> If you upgrade, you could also just cherry-pick this very commit or add a
> log statement to verify.
>
> Given the log sizes you're dealing with, I am very confident that this is
> your issue.
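Jan's point about log sizes can be made concrete. My reading of KAFKA-6030 (an assumption on my part; the ticket has the authoritative details) is that the partition size was accumulated in a 32-bit signed Int, so logs over ~2 GiB wrapped negative and the computed dirty ratio could never exceed min.cleanable.dirty.ratio. A Python sketch simulating JVM Int arithmetic:

```python
# Sketch of the suspected failure mode behind KAFKA-6030 (my reading of the
# ticket, so treat the details as an assumption): segment sizes summed into
# a 32-bit signed Int wrap negative for partitions larger than ~2 GiB.
INT_MAX = 2**31 - 1

def to_int32(n):
    """Wrap a Python int to 32-bit signed two's complement, like a JVM Int."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n > INT_MAX else n

GIB = 1024**3
total_bytes = to_int32(2 * GIB + GIB // 2)  # 2.5 GiB of log wraps negative
dirty_bytes = 1 * GIB                       # 1 GiB of dirty data fits fine
ratio = dirty_bytes / total_bytes           # negative ratio, so the
print(total_bytes, ratio)                   # cleaner never picks the partition
```

That would match the symptom here: compaction works fine until the partition grows past the 2 GiB mark, then silently stops.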
>
> Best Jan
>
>
> On 25.10.2017 12:21, Elmar Weber wrote:
>> Hi,
>>
>> On 10/25/2017 12:15 PM, Xin Li wrote:
>> > I think that is a bug; it should be fixed by this task:
>> > https://issues.apache.org/jira/browse/KAFKA-6030.
>> > We experienced that in our Kafka cluster; we just checked out the
>> > 0.11.0.2 version and built it ourselves.
>>
>> thanks for the hint. As it looks like a calculation issue, would it be
>> possible to verify this by manually changing the cleanable ratio or some
>> other setting?
>>
>> Best,
>> Elmar
>>
>
>