09 PM, Marina Popova
wrote:
> Thank you, Matthias, for the ideas to verify next!
>
> Here is what I see:
>
> Topic 1 - that is not being cleaned up for 3 days already, but has retention
> set to 4 hrs: (I've truncated the payload but left the important details):
>
>
ur record timestamps as well as broker clocks?
>
> If you write "future data", ie, data with timestamps larger than broker
> wall-clock time, it might stay in the topic longer than retention time.
>
> If you write "very old data", ie, data with timestamps smaller
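A quick way to see what timestamps the records actually carry, for comparison against the broker's wall clock (a sketch, assuming a broker on localhost:9092; the topic name is a placeholder):

```shell
# Print each record's timestamp (CreateTime or LogAppendTime) next to its
# value; "my-topic" is a placeholder.
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning \
  --property print.timestamp=true \
  --max-messages 10

# Broker wall clock in epoch millis (GNU date), for comparison:
date +%s%3N
```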
>> retention is applied only when a segment is closed,
>> per partition
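Since retention is only evaluated on closed segments, a topic whose active segment never rolls can keep data well past retention.ms. A sketch of what one could inspect (broker address and topic name are placeholders):

```shell
# Per-topic overrides worth checking: retention.ms, segment.ms, segment.bytes
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe

# The intended 4 h retention, in ms, for comparison with retention.ms:
echo $((4 * 60 * 60 * 1000))   # 14400000
```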
>>
>> On Tue, May 25, 2021 at 22:59, Ran Lupovich
>> wrote:
>>
>>> Have you checked the segment size? Did you describe the topic
>>> configuration? Maybe you created it with some settings yo
could check, apart from what I already did - see the
post below - to troubleshoot this?
thank you!
Marina
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Thursday, May 20, 2021 2:10 PM, Marina Popova
wrote:
Hi, I have posted this question on SO:
https://stackoverflow.com/questions/67625641/kafka-segments-are-deleted-too-often-or-not-at-all
but wanted to re-post here as well in case someone spots the issue right away
Thank you for your help!
>
We have two topics on our Kafka cluster that ex
The way I see it, you can only do a few things, if you are sure there is no
room for performance optimization of the processing itself:
1. Speed up your processing per consumer thread: which you already tried by
splitting your logic into a 2-step pipeline instead of 1-step, and delegating
the work o
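Besides speeding up per-thread processing, the other usual lever is parallelism: more partitions and more consumers in the group (up to one consumer per partition). A sketch, with placeholder topic and group names:

```shell
# Grow the partition count (it can only be increased, never decreased)
kafka-topics.sh --bootstrap-server localhost:9092 \
  --alter --topic my-topic --partitions 12

# Check how far the group is lagging per partition
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-consumer-group
```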
Hi, All!
After we upgraded our Kafka brokers (and clients) from 0.11 to 2.5.1, we
noticed a significant increase in the off-heap (and heap too, though less
drastic) memory usage of our Kafka producers (Java), with no code changes in
the producer at all.
Any idea what [default] config settings might
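The producer settings I would look at first are buffer.memory (32 MB by default) and batch.size (16 KB by default); those buffers live on the heap, while off-heap growth often comes from compression codec native buffers or SSL engine buffers. A sketch of a properties fragment pinning the defaults explicitly so they can be tuned down:

```shell
# producer.properties fragment: Kafka's default values written out explicitly
# (buffer.memory = total record-buffering memory; batch.size = per-partition
# batch buffer)
cat > producer.properties <<'EOF'
buffer.memory=33554432
batch.size=16384
compression.type=none
EOF
```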
Hi,
Since Java 10 and OpenJDK 8u212, there has been a container-awareness feature:
basically, it makes the JVM recognise the resource limits of the container it
is running in (if it runs in one).
We ran into an issue with Elasticsearch, since ES utilizes this feature
and creates internal t
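The JVM flags involved can be inspected directly (a sketch, assuming a JDK new enough to have the feature; the jar name is a placeholder):

```shell
# See whether container support is active and what heap fraction applies
java -XX:+PrintFlagsFinal -version | grep -E 'UseContainerSupport|MaxRAMPercentage'

# Example: cap the heap at 50% of the container memory limit
java -XX:MaxRAMPercentage=50.0 -jar app.jar
```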
>> 6. Also, Kafka 2.5.1 should be out
>> in a few days and I'd recommend that over 2.4.1.
>>
>> Ismael
>>
>> On Sun, Aug 2, 2020 at 10:04 AM Marina Popova
>> wrote:
>>
Actually, I'm very interested in your experience as well, as I'm about to start
the same (similar) upgrade: from Kafka 0.11 / ZK 3.4.13 to Kafka 2.4 / ZK 3.5.6.
I have Kafka and ZK as separate clusters.
My plan is:
1. Rolling-upgrade the Kafka cluster to 2.4, using the
inter.broker.protocol.vers
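The standard rolling-upgrade sequence pins the inter-broker protocol and message format to the old version first; a sketch of the server.properties changes for a path like this (exact version strings should be taken from the upgrade notes for your release):

```shell
# Step 1: upgrade binaries broker-by-broker with the old protocol pinned
cat >> server.properties <<'EOF'
inter.broker.protocol.version=0.11.0
log.message.format.version=0.11.0
EOF
# Step 2, only after every broker runs the new binaries: raise both values
# (e.g. to 2.4) and do one more rolling restart.
```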
I'm also very interested in this question - any update on this?
thanks!
Marina
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Thursday, September 5, 2019 6:30 PM, Ash G wrote:
> __consumer_offsets is becoming rather big (> 1 TB). Is there a way to purge
> dead/inactive c
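__consumer_offsets is a compacted topic, so its size is governed by the log cleaner plus offsets.retention.minutes (offsets of dead groups expire after that window; the broker default is 7 days in recent releases). A sketch of what one might check:

```shell
# Check the topic's cleanup policy and any overrides
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name __consumer_offsets --describe

# List groups; offsets of empty/dead groups are removed after
# offsets.retention.minutes elapses
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
```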
Hi, one more question on KSQL CLI:
using the "RUN SCRIPT /myScript.ksql" command, I can run scripts with "CREATE
STREAM ..." and "CREATE TABLE ..." statements with no problem.
However, when I try to run a script with a SELECT statement, I'm getting an
error:
Script: top3_reqtime.ksql
SELECT payload->sit
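One likely explanation is that RUN SCRIPT only accepts persistent statements: a transient SELECT has nowhere to stream its results in that mode. A common workaround is to make the query persistent with CREATE STREAM ... AS SELECT; a sketch piped through the CLI (server URL, stream, and column names are placeholders):

```shell
ksql http://localhost:8088 <<'EOF'
-- hypothetical stream/column names, for illustration only
CREATE STREAM my_output AS
  SELECT col1, col2
  FROM my_input_stream;
EOF
```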
Hi!
I'm running the same query from KSQL CLI and from the Confluent Control Center
(KSQL Development).
I set 'auto.offset.reset' = 'latest';
to only use latest events.
Once I send 20 test events - I get full (correct) results from this query in
KSQL CLI - but only partial results in the Control
set 'auto.offset.reset'='earliest';
>>
>> Thanks,
>>
>> Steve
>>
>> On Mon, May 6, 2019 at 4:40 PM Marina Popova
>> wrote:
>>
>>> Hi,
>>> Trying to understand if I'm missing something...
>>> I&
Hi,
Trying to understand if I'm missing something...
I'm using Confluent KSQL setup (running all components in Docker containers).
I've created a Table from a Stream, as follows:
CREATE TABLE rc_ip_key_agg_ptable AS
SELECT remote_addr,
windowStart(),
windowEnd(),
count(*)
FROM
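For a windowed aggregate like the one above, the FROM clause is followed by a WINDOW specification and a GROUP BY; a hedged sketch of the general shape (source stream name and window size are placeholders):

```shell
ksql http://localhost:8088 <<'EOF'
-- general shape of a tumbling-window aggregate; source name is a placeholder
CREATE TABLE rc_ip_key_agg_ptable AS
  SELECT remote_addr,
         windowStart(),
         windowEnd(),
         count(*)
  FROM my_source_stream
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY remote_addr;
EOF
```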
Sorry, maybe a stupid question, but:
I see that Kafka 1.0.1 RC2 is still not released, but now 1.1.0 RC0 is coming
up...
Does it mean 1.0.1 will be abandoned and we should be looking forward to 1.1.0
instead?
thanks!
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Fe
Hi,
I don't think it would be such a great idea to start modifying the very
foundation of Kafka's design to accommodate more and more extra use cases.
Kafka became so widely adopted and popular because its creators made a
brilliant decision to make it a "dumb broker - smart consumer" type of
s
Sent with [ProtonMail](https://protonmail.com) Secure Email.
> Original Message
> Subject: Re: Kafka FileStreamSinkConnector handling of bad messages
> Local Time: October 18, 2017 5:36 PM
> UTC Time: October 18, 2017 9:36 PM
> From: dhawan.gajend...@datavisor.com
> To: users@kafka.ap
Hi,
I wanted to give this question a second try as I feel it is very important
to understand how to control error cases with Connectors.
Any advice on how to control handling of "poison" messages in case of
connectors?
Thanks!
Marina
Hi,
I have the FileStreamSinkConnector working perfectly fine in distributed mode
when only good messages are being sent to the input event topic.
However, if I send a message that is bad - for example, not in correct JSON
format - and I am using the JSON converter for keys/values as followin
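In later Kafka Connect releases (2.0+, via KIP-298, so newer than the version discussed in this thread), poison messages can be tolerated and routed to a dead-letter queue through connector config; a sketch with placeholder names:

```shell
# Connector config fragment (Connect 2.0+); topic and file names are
# placeholders
cat > sink-connector.json <<'EOF'
{
  "name": "file-sink",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "topics": "input-topic",
    "file": "/tmp/out.txt",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "dlq-topic",
    "errors.deadletterqueue.context.headers.enable": "true",
    "errors.log.enable": "true"
  }
}
EOF
```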
AM
> UTC Time: September 29, 2017 3:49 PM
> From: sjdur...@gmail.com
> To: users@kafka.apache.org, Marina Popova
>
> You can choose to run just kafka connect in the confluent platform (just run
> the kafka connect shell script(s)) and configure the connectors to point
> towards you
Message
> Subject: Re: how to use Confluent connector with Apache Kafka
> Local Time: September 29, 2017 10:50 AM
> UTC Time: September 29, 2017 2:50 PM
> From: sjdur...@gmail.com
> To: users@kafka.apache.org, Marina Popova
>
> The confluent platform download has
connectors are compatible with vanilla AK, as Confluent Open Source
> ships with "plain" Apache Kafka under the hood.
>
> So you can just download the connector, plug it in, and configure it as
> any other connector, too.
>
> https://www.confluent.io/product/connector
Hi,
we have an existing Kafka cluster (0.10) already setup and working in
production.
I would like to explore using Confluent's Elasticsearch Connector - however, I
see it comes as part of the Confluent distribution of Kafka (with separate
confluent scripts, libs, etc.).
Is there an easy way to
Hi,
This issue was reported earlier in this post, for Kafka 0.9:
https://stackoverflow.com/questions/41177614/kafka-0-9-java-consumer-skipping-offsets-during-application-restart
However, I see the same issue with Kafka 0.10 as well. In summary:
1. we start our consumer app, and in the eve
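A common cause of offsets appearing to skip across restarts is auto-commit firing before records are actually processed; disabling it and committing manually (commitSync in code) after processing is the usual guard. A hedged consumer-properties sketch:

```shell
# consumer.properties fragment: disable auto-commit so offsets are only
# committed from code, after records are fully processed
cat > consumer.properties <<'EOF'
enable.auto.commit=false
auto.offset.reset=earliest
EOF
```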