Hi Akash, if your connector doesn't have the appropriate permissions for a
topic then it can't run - so I'm not sure what value there is in trying to
handle or tolerate that kind of exception. If someone is changing ACLs in
such a way that it breaks a connector, then you probably want that
connect…
For sink connectors, I believe you can scale up the tasks to match the
partitions on the topic. But I don't believe this is the case for source
connectors; the number of partitions on the topic you're producing to has
nothing to do with the number of connector tasks. It really depends on the
indi…
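For reference, the task cap being discussed lives in the connector's own config as tasks.max. A hedged sketch (the connector name and topic are illustrative; the FileStream sink ships with Kafka):

```properties
name=my-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
topics=orders
# For a sink connector, at most one task per topic partition will actually
# do work, so setting tasks.max above the partition count buys nothing.
tasks.max=8
```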
" I couldn't find any doc by kafka to enable Digest-MD5 authentication."
This was the 2nd result in a google search:
https://docs.confluent.io/platform/current/security/zk-security.html
" I don't want to enable SASL."
Digest-MD5 is SASL authentication, so not sure what you mean here.
" If I set z
Sounds like the parallel consumer is what you want:
https://github.com/confluentinc/parallel-consumer
Or at least decide why that won't work before writing something yourself.
- alex
On Tue, Oct 3, 2023 at 4:20 PM Sree Sanjeev Kandasamy Gokulrajan <
sreesanjee...@gmail.com> wrote:
> Hey Neeraj
Not necessarily, depends on how your producer is configured. As an
example, if your producer were configured with acks=all and min.isr = 2
(which is a pretty typical config), then you wouldn't be able to produce
messages, since the brokers would be unable to replicate the data to other
broker(s) in…
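A sketch of the settings being described (acks is a producer config; min.insync.replicas is the topic/broker config usually abbreviated min.isr):

```properties
# Producer side: wait for acknowledgement from all in-sync replicas.
acks=all

# Topic/broker side: at least 2 replicas must be in sync for a produce
# to succeed; with acks=all and fewer in-sync replicas than this, the
# produce fails rather than silently under-replicating.
min.insync.replicas=2
```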
Andrew - your example would give him a replication factor of 2 though, and
it sounds like he wants 3 unless I missed something. So add an additional
broker id to each of the replicas arrays in your example and you'd have an
RF of 3.
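Andrew's original JSON isn't shown in this snippet, but the shape being described is the partition-reassignment format, with three broker ids in each replicas array (topic name and broker ids here are hypothetical):

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2, 3]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 3, 1]}
  ]
}
```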
-alex
On Sat, Dec 17, 2022 at 7:12 AM Andrew Grant
wrote:
> H…
I don't think the Confluent self-balancing feature works if you have your
broker data in multiple directories anyway - it's expecting a single dir
per broker and will try to keep the data balanced between brokers. Also,
just as an aside, I'm not sure there's much value in using multiple
directories…
Marisa, you might consider engaging someone at Confluent; maybe they can
give you some case studies or whitepapers from similar use cases in the
financial industry. (And yes, Kafka is used in the financial industry.) A
client asking you to "prove that Kafka performs/scales" seems like an
unusual…
1. The aggregation is done based on the message key. For a silly
example, if your messages were data about new car sales and you wanted to
count how many cars were sold by color, you could consume the messages
and then "re-key" them so that the message key was the color. Then
later i…
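Not from the original mail, but a minimal plain-Python sketch of that re-key-and-count idea (in Kafka Streams this would be a selectKey/groupBy plus count; the records and field names here are made up):

```python
from collections import Counter

# Hypothetical car-sale events; in Kafka these would be consumed records.
sales = [
    {"vin": "A1", "color": "red"},
    {"vin": "B2", "color": "blue"},
    {"vin": "C3", "color": "red"},
]

# "Re-key" each record so the key is the color, then aggregate by key.
rekeyed = [(sale["color"], sale) for sale in sales]
counts = Counter(key for key, _ in rekeyed)

print(counts["red"])   # 2
print(counts["blue"])  # 1
```

In real Kafka Streams the re-key forces a repartition so that all records with the same key land on the same task, which is what makes the per-key count correct.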
> > >>> …transaction has failed, so kafka guarantees either all or none:
> > >>> offset written to source topic, state written to state store topic,
> > >>> output produced on destination topic... all of these happen or none.
> >
> > …committed to source topic and output
> > not produced on destination topic, but your redis state store is
> > inconsistent with that, since it is an external state store and kafka
> > can't guarantee rollback of state written there
> >
> > On Mon, Mar 15, 2021 at 6:30…
" Another issue with 3rd party state stores could be violation of
exactly-once guarantee provided by kafka streams in the event of a failure
of streams application instance"
I've heard this before but would love to know more about how a custom state
store would be at any greater risk than RocksDB
I don't think he's asking about data loss, but rather data consistency.
(In the event of an exception or application crash, will EOS ensure that
the state store data is consistent?) My understanding is that it DOES
apply to state stores as well, in the sense that a failure during
processing would m…
Hi all, I've been experimenting a bit with ways of increasing the
throughput of some kafka streams applications that have "exactly_once"
enabled. (These applications consume from a topic, transform the data, and
do 2 different aggregations before finally writing the output to a sink
topic.) Increasing…
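For context, "exactly_once" here is the Kafka Streams processing.guarantee setting; a minimal sketch of the relevant config (values shown are illustrative, not a recommendation):

```properties
# Streams EOS; "exactly_once_v2" supersedes the older "exactly_once"
# value on clusters running Kafka 2.5+ brokers.
processing.guarantee=exactly_once_v2
# A common throughput lever to experiment with alongside EOS:
num.stream.threads=4
```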
…would not have been
committed, which means it would get processed again on application
startup. Hope that helps,
Alex Craig
On Tue, May 19, 2020 at 10:21 AM Raffaele Esposito
wrote:
> This is the topology of a simple word count:
>
> Topologies:
>    Sub-topology: 0
>      Source: KST…
> …session.timeout.ms and
> heartbeat.interval.ms?
>
> If anyone else has any ideas, please jump in.
>
> Thanks,
> -John
>
> On Fri, Apr 10, 2020, at 14:55, Alex Craig wrote:
> > Thanks John, I double-checked my configs and I've actually got the
> > max.poll…
> …d to the consumer configuration portions to target
> the main or restore consumer.
>
> Also worth noting, we’re planning to change this up pretty soon, so that
> restoration happens in a separate thread and doesn’t block polling like
> this.
>
> I hope this helps!
> -John
Hi all, I’ve got a Kafka Streams application running in a Kubernetes
environment. The topology on this application has 2 aggregations (and
therefore 2 Ktables), both of which can get fairly large – the first is
around 200GB and the second around 500GB. As with any K8s platform, pods
can occasionally…