Any chance that somebody can shed some light on this?
Thanks!
On Tue, 12 Nov 2019 at 17:40, Javier Holguera
wrote:
> Hi,
>
> Looking at the Kafka Connect code, it seems that the built-in support for
> DLQ queues only works for errors related to transformations and converters
>
Hi,
Looking at the Kafka Connect code, it seems that the built-in support for
DLQ queues only works for errors related to transformations and converters
(headers, key, and value).
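As a reference point, here is a minimal sketch of the error-handling properties
that drive that built-in DLQ (introduced by KIP-298); the connector class and
topic names below are placeholders, not taken from this thread:

import java.util.HashMap;
import java.util.Map;

// Sketch of a sink connector configuration that enables the built-in DLQ.
// Connector class and topic names are hypothetical.
public class DlqConfigSketch {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("connector.class", "com.example.ExampleSinkConnector"); // hypothetical
        config.put("topics", "orders");
        // Keep the task running on conversion/transformation failures...
        config.put("errors.tolerance", "all");
        // ...and send the failing records to a dead letter queue topic.
        config.put("errors.deadletterqueue.topic.name", "orders-dlq");
        config.put("errors.deadletterqueue.topic.replication.factor", "3");
        // Attach the failure context (stage, exception) as record headers.
        config.put("errors.deadletterqueue.context.headers.enable", "true");
        System.out.println(config);
    }
}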
I wonder if it has been considered (and maybe discarded) to use the same
mechanism for the call to the
Hi,
I understand that the state store listener that can be set using
KafkaStreams.setGlobalStateRestoreListener will be invoked when the streams
app starts if it doesn't find the state locally (e.g., running on an
ephemeral Docker container).
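For context, a minimal sketch of such a listener (the logging is just illustrative):

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Minimal restore listener that logs progress; attach it before streams.start().
public class LoggingRestoreListener implements StateRestoreListener {

    @Override
    public void onRestoreStart(TopicPartition partition, String storeName,
                               long startingOffset, long endingOffset) {
        System.out.printf("Restoring %s %s from %d to %d%n",
                storeName, partition, startingOffset, endingOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String storeName,
                                long batchEndOffset, long numRestored) {
        System.out.printf("Restored batch of %d records for %s%n", numRestored, storeName);
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String storeName, long totalRestored) {
        System.out.printf("Finished restoring %s: %d records%n", storeName, totalRestored);
    }
}

// Usage (streams is a configured KafkaStreams instance):
// streams.setGlobalStateRestoreListener(new LoggingRestoreListener());
// streams.start();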
However, I wonder if the process happens as well if
essing mode" it will stay there until
> all partitions have data at the same time. That is by design.
>
> Why is this behavior problematic for your use case?
>
>
>
> -Matthias
>
>
> On 10/14/19 7:44 AM, Javier Holguera wrote:
> > Hi,
> >
> > We ha
Hi,
We have a KStream and a KTable that we are left-joining. The KTable has a
"backlog" of records that we want to consume before any of the entries in
the KStream is processed. To guarantee that, we have played with the
timestamp extraction, setting the time for those records in the "distant"
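To make the trick concrete, a sketch of a custom extractor along those lines;
the backlog topic name and the "distant past" value are assumptions:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Gives KTable backlog records a very old timestamp so that timestamp-based
// synchronization processes them before any KStream record.
public class BacklogTimestampExtractor implements TimestampExtractor {

    private static final String BACKLOG_TOPIC = "table-backlog"; // hypothetical topic

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        if (BACKLOG_TOPIC.equals(record.topic())) {
            return 0L; // "distant past" so these records are consumed first
        }
        return record.timestamp();
    }
}

It would be registered through the default.timestamp.extractor setting
(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG).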
Hi,
I had a look around the "state" folder in Kafka Streams and I realised that
only WindowStore and SessionStore allow configuring a retention policy.
Looking a bit further, it seems that RocksDBSegmentedBytesStore is the main way
to implement a store that can clean itself up based on
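For comparison, this is roughly how retention is configured on one of those
stores today (store name and durations are arbitrary):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;

public class RetentionStoreSketch {
    public static void main(String[] args) {
        // A window store only keeps data for the retention period; expired
        // segments are dropped by the segmented RocksDB store underneath.
        StoreBuilder<WindowStore<String, Long>> builder = Stores.windowStoreBuilder(
                Stores.persistentWindowStore(
                        "counts-store",        // store name (placeholder)
                        Duration.ofDays(7),    // retention period
                        Duration.ofMinutes(5), // window size
                        false),                // don't retain duplicates
                Serdes.String(),
                Serdes.Long());
        System.out.println(builder.name());
    }
}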
external service within Kafka
Streams if possible. It would be better to load the corresponding data into a
topic and read it as a KTable to do a stream-table join. Not sure if this is
feasible for your use-case though.
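A minimal sketch of the suggested stream-table join (topic names are placeholders):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class StreamTableJoinSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // The events to enrich and the reference data, both read from topics.
        KStream<String, String> events = builder.stream("events");
        KTable<String, String> reference = builder.table("reference-data");

        // Enrich each event with the latest reference value for its key,
        // instead of calling an external service from inside the topology.
        KStream<String, String> enriched =
                events.join(reference, (event, ref) -> event + " | " + ref);

        enriched.to("enriched-events");
    }
}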
-Matthias
On 12/28/17 7:16 AM, Javier Holguera wrote:
> Hi Matthias,
>
>
nymore.
-Matthias
On 12/27/17 6:55 AM, Javier Holguera wrote:
> Hi Matthias,
>
> Thanks for your answer. It makes a lot of sense.
>
> Just a follow-up question. KIP-62 says: "we give the client as much as
> max.poll.interval.ms to handle a batch of records, this is al
-Matthias
On 12/20/17 7:14 AM, Javier Holguera wrote:
> Hi,
>
> According to the documentation, "max.poll.interval.ms" defaults to
> Integer.MAX_VALUE for Kafka Streams since 0.10.2.1.
>
> Considering that the "max.poll.interval.ms" is:
>
> 1
Hi,
According to the documentation, "max.poll.interval.ms" defaults to
Integer.MAX_VALUE for Kafka Streams since 0.10.2.1.
Considering that the "max.poll.interval.ms" is:
1. A "processing timeout" to control an upper limit for processing a batch
of records AND
2. The rebalance timeout
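For what it's worth, overriding it back to a finite value for the embedded
consumer looks roughly like this (application id, bootstrap servers and the
5-minute value are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class PollIntervalOverrideSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        // Bound how long a stuck instance can hold up a rebalance by overriding
        // the Streams default (Integer.MAX_VALUE) for the embedded consumer.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG),
                300_000); // 5 minutes, an arbitrary example value

        System.out.println(props);
    }
}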
the `metadata.max.age.ms` is the key here because the
shorter we set it, the quicker producers recover from a leader crash.
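A sketch of what that looks like on the producer side (the broker address and
the 30-second value are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class MetadataAgeConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // Refresh cluster metadata more often so the producer learns about a new
        // partition leader sooner after a broker failure.
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, 30_000); // example value

        System.out.println(props);
    }
}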
Regards,
Javier.
--
Javier Holguera
Sent with Airmail
On 2 September 2016 at 15:50:48, Yuto KAWAMURA (kawamuray.dad...@gmail.com)
wrote:
Hi Javier,
Not sure but just wondering
.
Is there something I’m missing or not understanding correctly?
Thanks for your help!
Regards,
Javier.
--
Javier Holguera
Sent with Airmail
/pingles/clj-kafka/blob/master/src/clj_kafka/offset.clj#L70)
using OffsetCommitRequest.
Any help would be welcome.
Thanks!
--
Javier Holguera
Sent with Airmail