Hi, you need to increase the record and message size limits because your real
message payload is bigger than what's configured in the properties file.
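For example, a hedged sketch (sizes illustrative, assuming a ~10 MB cap is
wanted; the same limit has to be raised everywhere the record passes through):

# broker-wide default (server.properties)
message.max.bytes=10485760
# or as a per-topic override
max.message.bytes=10485760
# producer side
max.request.size=10485760
# consumer side
max.partition.fetch.bytes=10485760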
Regards,
On Fri, 1 Jul 2022 at 20:24, Divya Jain
wrote:
> Hi,
>
> I am facing this issue:
> [2022-07-01 19:01:05,548] INFO Topic 'postgres.public.content_history'
Hello,
Is it currently possible to use a single endpoint for advertised.listeners,
which is in front of all my brokers? the flow for example
broker1 --> |
broker2 --> | VIP
broker3 --> |
I was under the impression that I can reach any broker I wanted (assuming
that broker is
1. What is it that you've tried?
2. What config changes have you made?
3. What do you expect to see?
On Fri, 25 Jun 2021 at 09:22, Anjali Sharma
wrote:
> Hi All,
>
>
> Can you please help with this?
>
> While trying mTLS with ssl.client.auth=required, on the server side in the
> certificate request the DN
Kafka doesn't have a REST proxy. Confluent does.
Also, Instaclustr offers a Kafka REST proxy.
Also, SAP has hundreds of products including SAP Cloud Platform, so I'm not
sure what PI/PO means for your case.
Unless there's something I'm unaware of, you're referring to a non-Apache
offering here. You might
Hey Upendra,
On Mon, 1 Feb 2021 at 05:32, Upendra Yadav wrote:
> Hi,
>
> I want to know the polling behaviour when a consumer is assigned with
> multiple topic-partitions.
>
> 1. In a single poll, will it get messages from multiple topic-partitions, or
> in one poll only one topic-partition's mes
Date date = new Date(record.timestamp());
> DateFormat formatter = new SimpleDateFormat("HH:mm:ss.SSS");
> formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
> String dateFormatted = formatter.format(date);
> System.out.println("Received me
Hello,
We know that using the KafkaConsumer API we can replay messages from certain
offsets. However, we are not sure if we can specify a timestamp from which to
replay messages.
Does anyone know if this is possible?
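(For the archives: yes, offsetsForTimes() does exactly this. A minimal
sketch, assuming topic "my-topic", partition 0, and a manually assigned
consumer:)

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

Properties p = new Properties();
p.put("bootstrap.servers", "localhost:9092");
p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
try (KafkaConsumer<String, String> c = new KafkaConsumer<>(p)) {
    TopicPartition tp = new TopicPartition("my-topic", 0);
    c.assign(Collections.singletonList(tp));
    long ts = System.currentTimeMillis() - 3_600_000L; // e.g. one hour ago
    // maps the timestamp to the earliest offset whose timestamp is >= ts
    Map<TopicPartition, OffsetAndTimestamp> found =
        c.offsetsForTimes(Collections.singletonMap(tp, ts));
    if (found.get(tp) != null) {
        c.seek(tp, found.get(tp).offset());
        ConsumerRecords<String, String> recs = c.poll(Duration.ofSeconds(1));
    }
}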
Regards,
Hello,
We have a question regarding transactional producer states and disk space
usage.
We did a quick and dirty test recently with 3 simple Java client producers
writing to compressed topics, with compression.type set correctly on both the
producers and the topic. We performed two rounds of tests (wit
We are using official Apache Kafka stuff. Nothing different.
On Fri, 18 Sep 2020 at 03:30, Shaohan Yin wrote:
> Hi,
>
> Are you using the official Java client or any third party clients?
>
> On Thu, 17 Sep 2020 at 22:53, M. Manna wrote:
>
f your broker?
>
> From the client of 2.5.1 I didn't see any changes that could be made to the
> client compression.
> Maybe you could check if the compression.type is set on the topic level or
> the broker side?
>
> Cheers
>
> On Thu, 17 Sep 2020 at 20:19, M
Hello,
I am trying to understand the compression.type setting and its impact on
growth. With a valid compression.type setting, the message in the topic stays
compressed and has to be decompressed by the consumer.
However, it may be possible that the compression will n
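To make the two halves concrete, a hedged sketch (values illustrative): the
producer chooses the codec it batches with, and the topic's compression.type
decides what actually gets stored:

# producer config - batches are gzip-compressed on the wire
compression.type=gzip
# topic config - 'producer' keeps whatever codec the producer used,
# while naming a specific codec makes the broker recompress to it
compression.type=producer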
Hello,
We do appreciate that release 2.7 is keeping us occupied, but this bug (or
not) is holding us back from making some design changes/improvements. It'd be
awesome if anyone could take a look and either rule it out with an
explanation or acknowledge the changes.
Regards,
Hello,
I understand that a consumer with the txn isolation level set to
"read_committed" will read up to the Last Stable Offset (LSO) and will not be
given any messages from aborted txns. The LSO also includes non-transactional
messages.
We have seen a behaviour where the LSO didn't have any data and we had to seek
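(For context, a minimal sketch of the consumer setting in question; poll()
then only returns data below the LSO and filters out aborted transactions:)

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties p = new Properties();
p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
p.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");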
Also, I realised that endOffsets-100 was a debugging step for me. I
originally had endOffset - 1, and that was polling forever. Hence my
previous comment on LSO, high watermark, and open transactions.
On Sat, 5 Sep 2020 at 12:08, M. Manna wrote:
>
> Firstly, Thanks very much. I think
fy where you want to consume on a topic
>
> When you seek to somewhere, the next poll just requests data from where
> you sought.
>
> if
>
> seek 1
>
> seek 20
>
> then
>
> poll will start from 20, because the latest
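(A minimal sketch of that behaviour, assuming a consumer already configured
against partition 0 of a topic named "test":)

TopicPartition tp = new TopicPartition("test", 0);
consumer.assign(Collections.singletonList(tp));
consumer.seek(tp, 1);   // superseded by the next seek
consumer.seek(tp, 20);  // only the last seek before poll() takes effect
// this poll requests data starting at offset 20
ConsumerRecords<String, String> recs = consumer.poll(Duration.ofSeconds(1));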
Hello,
During some tests, I can see that consumer.endOffsets() returns a valid
value, but the consumer cannot poll if seek() is called on it. In other
words, poll() hangs almost forever.
My intention is to start consuming from the last successfully written data
on that TopicPartiti
Hello,
I tried to find this information, but maybe I searched for the wrong stuff.
I am trying to identify the last message written on a TopicPartition. My
constraints are:
1) No knowledge of the last offset - so I cannot use seek(TopicPartition,
long)
2) I have to retrieve the last-writ
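(For the archives, the usual approach, sketched under the assumption that
only the single last record is needed and a consumer is already set up; note
the transactional caveat this very thread ran into:)

TopicPartition tp = new TopicPartition("my-topic", 0);
consumer.assign(Collections.singletonList(tp));
long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
if (end > 0) {
    consumer.seek(tp, end - 1);
    // caveat: with transactional producers, end - 1 can be a commit/abort
    // marker rather than a real record, so this poll may return nothing
    ConsumerRecords<String, String> last = consumer.poll(Duration.ofSeconds(1));
}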
Hi,
AFAIK, ZK is packaged with Kafka. So if you upgrade to 2.4.1 you'll get
whatever ships in 2.4.1.
It's a little different, however, if you're hosting ZK on a different host
running independently of Kafka.
What's your situation?
On Thu, 23 Jul 2020 at 21:02, Andrey Klochkov wrote:
> Hello,
> We're
Hello,
Apache Kafka is under the Apache 2.0 licence (see the fine print at the
bottom of the Kafka website); Confluent's distribution is under the Confluent
licence (from Confluent.io).
Depending on which one you use, you can get it from the appropriate site.
I hope this helps?
Regards,
On Wed, 1 Jul 2020 at 18:10, Theresa Sowinski <
reese.sowin
Hey Vinicius,
On Tue, 26 May 2020 at 10:27, Vinicius Scheidegger <
vinicius.scheideg...@gmail.com> wrote:
> In a scenario with multiple independent producers (imagine ephemeral
> dockers, that do not know the state of each other), what should be the
> approach for the messages being sent to be e
Hey Xie,
On Fri, 22 May 2020 at 08:31, Jiamei Xie wrote:
> Hi
>
> Kill all zookeeper and kafka processes. Clear the zookeeper and kafka data
> dirs. Restart zookeeper and kafka. If there are any active clients, topics
> used by clients will be auto-created.
>
> How to reproduce?
>
>
> 1. Start zookeep
Hello,
I am quite new to KSQL, so apologies if I misunderstand its concepts.
I have a list of topics that I want to search data in. I am not using stream
processing, just plain topics which have data retained for 14 days. All I
want to do is search for data in a SQL-like way, as long as it's within th
I have done this before. What Matthias said below is correct.
First, you've got to stop all apps to prevent data consumption (if that's
what you also mean by having downtime).
Then, you can go ahead and replace the binaries.
Regards,
On Tue, 12 May 2020 at 18:33, Matthias J. Sax wrote:
> I guess you
I agree with Steve.
Also, it’s worth reading Jay’s PR last year regarding confluent community
licence.
Regards,
On Sat, 9 May 2020 at 16:14, Steven Miller wrote:
> At the risk of starting a uh-huh-uhnt-uh battle, I would have to disagree.
> There are seriously good people at Confluent, many of
Hey LeiWang,
On Sat, 9 May 2020 at 09:46, wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
>
> I want to know if there's any difference between apache kafka and the open
> sourced confluent kafka ?
>
>
> Thanks,
> Lei
If you visit the Confluent website, it's pretty well summarised using
Hey Prasad (#StayAtHomeSaveLives),
On Thu, 26 Mar 2020 at 11:19, Prasad Suhas Shembekar <
ps00516...@techmahindra.com> wrote:
> Hi,
>
> I am using Apache Kafka as a Message Broker in our application. The
> producers and consumers are running as Docker containers in Kubernetes.
> Right now, the pr
don't work for
Confluent :), so a disclaimer in advance).
There is also an upcoming webinar on how Kafka is integrated in your
application/architecture.
I hope it helps.
Regards,
M. Manna
On Thu, 12 Mar 2020 at 00:51, 张祥 wrote:
> Thanks, very helpful !
>
> Peter Bukowinski 于20
Hi James,
3 consumers in a group means you have 20 partitions per consumer (given your
60 partitions and 1 consumer group), and 5 consumers means 12. There's
nothing special about these numbers, as you also noticed.
Have you tried setting fetch.max.wait.ms = 0 to see whether that makes
a difference for y
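(i.e. something like the following in the consumer config; values are
illustrative:)

# return from a fetch as soon as any data is available
fetch.max.wait.ms=0
fetch.min.bytes=1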
he steps correctly. See my
highlighted text above
>
> 1. Here when you turn on node 2 in step 4, I would like to have my cluster
> up, since one of the brokers is up. But it is not happening.
> --
> *From:* M. Manna
> *Sent:* 13 February 2
.state.log.min.isr was 2.
> This warning also leads to failure from the producer side to put data on
> the topic.
>
>
> Are there any other things I have to check?
>
>
> Thanks
>
>
> --
> *From:* M. Manna
> *Sent:* 13 February
This could be because you have set transaction.state.log.min.isr=2. Have
you tried setting this to 1?
Also, please note that if min.insync.replicas=1 and you only have 2
nodes, you only have a guarantee from 1 broker to have the messages
- but if that same broker fails then you
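For a 2-node setup, a hedged sketch of the settings involved (values
illustrative):

# broker (server.properties)
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
# topic
min.insync.replicas=1
# producer
acks=all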
Apologies, but we think we've found the information here (Jetty-based admin
server):
http://kafka.apache.org/documentation/#Additional+Upgrade+Notes
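(For anyone searching later: assuming ZooKeeper 3.5+, the Jetty admin server
can be disabled or moved in zookeeper.properties:)

# disable it entirely
admin.enableServer=false
# or move it off 8080
admin.serverPort=9090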
Sorry for spamming.
Regards,
On Thu, 6 Feb 2020 at 12:31, M. Manna wrote:
> Hey all,
>
> We have a test ecosystem whic
Hey all,
We have a test ecosystem which uses two app instances with ports 80 and 8080
engaged. Since Kafka 2.4.0 we keep having issues because ZooKeeper is
using port 8080.
We are not sure why this is the case, since port 8080 was never used
before 2.4.0. Before we dig into the code, co
r all future comms.
Regards,
M. Manna
Hey Tim
On Fri, 31 Jan 2020 at 13:06, Sullivan, Tim
wrote:
>
>
> Is there a way I can proactively check my consumers to see if
> they are consuming? Periodically some or all of my consumers stop
> consuming. The only way I am made aware of this is when my down stream
> feeds folks al
Hey Upendra,
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
The above should guide you through the reassignment of partitions/replicas.
Also, you should read about
offsets.topic.num.partitions
offsets.topic.replication.factor
I hope this helps you.
Regards,
On Thu, 30 Ja
Hey Buks,
On Mon, 27 Jan 2020 at 07:51, wrote:
>
>
> Hi, I would appreciate any help on this.
>
> Thanks a stack!
>
> Buks
>
>
>
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka-clients</artifactId>
>   <version>${kafka.version}</version>
> </dependency>
>
> <kafka.version>2.3.1</kafka.version>
>
> <parent>
>   <groupId>org.springframework.boot</groupId>
>   <artifactId>spring-boot-starter-parent</artifactId>
>   <version>2.
Pushkar,
On Sat, 25 Jan 2020 at 11:19, Pushkar Deole wrote:
> Thank you for a quick response.
>
> What would happen if I set the producer acks to 'one' and
> min.insync.replicas to 2? In this case the producer will return when only
> the leader has received the message but will not wait for the other rep
Thanks Robin - looks nice!
On Thu, 23 Jan 2020 at 09:36, Robin Moffatt wrote:
> There's a good presentation from Stephane Maarek that covers tooling,
> including UIs:
>
> https://www.confluent.io/kafka-summit-lon19/show-me-kafka-tools-increase-productivity/
>
> You'll find some projects that nee
It depends.
Kafka-webview is excellent for managing messages etc. I use it for our
preprod monitoring.
LinkedIn CruiseControl is a de facto standard for managing
performance-related thresholds (goals).
LinkedIn Burrow is good for consumer lag monitoring.
And all the above are free.
Regards,
On Thu, 23 J
Hey all,
I meant to do this a while back, so apologies for the delay.
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=145722808
The above has working instructions on how to start the debugger in Eclipse
Scala. I haven't proofread the text, but the solution works correctly. If
anyone
cit serialization / deserialization,
> communication is transparent to me.
> I think I have to set a Kafka property for this problem, but I don't
> know what it can be.
> Thank you
>
> -----Original Message-----
> From: M. Manna
> Sent: Friday, 17 January 2020 13:25
Hi,
On Fri, 17 Jan 2020 at 11:18, Marco Di Falco
wrote:
> Hello guys!
> I have a producer and consumer running in a windows shell.
> I write the message 'questo è un test' and the consumer receives:
> "questo ´┐¢ un test".
>
> What properties should I use to set up character coding in utf-
ave been committed
> to the storage media by the fsync() call.
>
At this stage the ISRs (this includes the leader) have all acknowledged.
>
> If the answer is yes, it looks good from here. If the answer is no, then
> what else does the application need to do?
>
> Sincerely,
> An
ps.
Regards,
>
> Sincerely,
> Anindya Haldar
> Oracle Responsys
>
>
> > On Jan 15, 2020, at 8:55 AM, M. Manna wrote:
> >
> > Anindya,
> >
> > On Wed, 15 Jan 2020 at 16:49, Anindya Haldar
> > wrote:
> >
> >> In our case, the mi
Anindya,
On Wed, 15 Jan 2020 at 16:49, Anindya Haldar
wrote:
> In our case, the minimum in-sync replicas is set to 2.
>
> Given that, what will be expected behavior for the scenario I outlined?
>
This means you will get confirmation when 2 of them have acknowledged. So
you will always have 2 i
for clarifying. I shall continue my endeavour to learn other
things. Apart from Confluent and ASF examples, do you recommend anything
else for starters ?
Regards,
Hope it answers some of your questions.
>
> Thanks
> Sachin
>
>
>
> On Mon, Jan 13, 2020 at 1:32 AM M. Manna wrote
Hello,
Even though I have been using Kafka for a while, it's primarily for
publish/subscribe event messaging (and I understand it reasonably well).
But I would like to do more with streams.
For my initiative, I have been going through the code written in the
"examples" folder. I would like to
Priyanka,
On Wed, 8 Jan 2020 at 20:42, cool girl wrote:
> Hi ,
>
> I am trying to learn Kafka. Is there any free API which I can use, like
> Twitter's? I created a Twitter account but it looks like it will take days
> before I can use their streaming data.
>
Welcome to Kafka. If you are seeking a RE
Hey Tavares,
On Wed, 8 Jan 2020 at 09:38, Tom Bentley wrote:
> Tavares, if you're asking about the consumer then I think you might have a
> misconception about how it works: The application calls poll() to fetch the
> latest records from the broker(s). The broker is not pushing records into
> s
edf2ee7ab@%3Cdev.kafka.apache.org%3E
>
> You will see I expressed concerns early on that "the proposal still
> requires separate processes with separate configuration".
>
> Ryanne
>
> On Thu, Jan 2, 2020 at 9:45 AM M. Manna wrote:
>
> > Hello,
> >
> >
Hi,
On Fri, 3 Jan 2020 at 19:48, Clark Sims wrote:
> Why do some people so strongly recommend cutting large messages into
> many small messages, as opposed to changing max.message.bytes?
>
> For example, Stéphane Maarek at
> https://www.quora.com/How-do-I-send-Large-messages-80-MB-in-Kafkam,
>
Hello,
Greetings of the New Year to everybody. Sorry for reviving this randomly,
as I didn't have the original thread anymore.
I was reading through this KIP and trying to follow the current vs
proposed diagrams. Once again, apologies for making mistakes in
understanding this.
Are we replacin
+1 to what John mentioned.
Master is more like a template that gets created for new repos. It's not in
use for any Kafka activities (not that we know of).
Regards,
On Wed, 25 Dec 2019 at 17:04, John Roesler wrote:
> Hi Sachin,
>
> Trunk is the basis for development. I’m not sure what master
This is really helpful. Thanks for sharing this with the community.
On Sat, 21 Dec 2019 at 19:29, Alex Woolford wrote:
> Not sure if this is helpful, Tim.
>
> I recently recorded a video that shows the gist of monitoring Kafka with
> Prometheus: https://www.youtube.com/watch?v=nk3sk1LO7Bo
>
>
Jai,
On Tue, 17 Dec 2019 at 17:33, Jai Nanda Kumar
wrote:
> Hi,
>
> How to perform a health check on a running Kafka server in AWS EC2
> server.
>
Shouldn't this be part of your liveness probe? Or are you trying to do
this ad hoc (not how folks do it anyway)?
>
> Thanks and Regards,
> A.
Robin,
On Tue, 17 Dec 2019 at 01:58, Yu Watanabe wrote:
> Robin.
>
> Thank you for the reply.
>
> I am about to run kafka on docker in development environment for first time
> and also in production.
> To get started, I searched images in docker hub that has "Official Images"
> tag to find if th
Frank,
On Thu, 12 Dec 2019 at 11:28, Frank Zhou wrote:
> Hi,
>
> I am testing kafka client on message batch and compression. I have enabled
> message batching along with compression, with batch.size set to 3M,
> linger.ms set to 5000ms and compression.type set to gzip(Attached whole
> producer c
Hi,
On Mon, 2 Dec 2019 at 14:59, Rodoljub Radivojevic <
rodoljub.radivoje...@instana.com> wrote:
> Hi everyone,
>
> I want to calculate the total amount of data per broker (sum of sizes of
> all partitions on the broker).
> How can I do that using existing metrics?
>
>
Why would you require a kafka
Rodoljub,
On Mon, 2 Dec 2019 at 14:52, Rodoljub Radivojevic <
rodoljub.radivoje...@instana.com> wrote:
> Hello,
>
> Is it possible to calculate the number of partitions for which one broker
> is a leader, using existing Kafka metrics?
>
> Regards,
> Rodoljub
>
Does the below answer your question?
Hi Tom,
On Mon, 2 Dec 2019 at 09:41, Thomas Aley wrote:
> Hi Kafka community,
>
> I am hoping to get some feedback and thoughts about broker interceptors.
>
> KIP-42 Added Producer and Consumer interceptors which have provided Kafka
> users the ability to collect client side metrics and trace th
Hi both,
It's been going around for a long time, but Kafka is officially not fully
tested and verified for Windows. The disclaimer is on the official site.
Windows servers are an easy choice because a lot of infrastructures are on
Windows and a lot of businesses depend on those infrastructures.
Hi
On Fri, 29 Nov 2019 at 11:11, Roberts Roth
wrote:
> Hello
>
> I was confused, for realtime streams, shall we use kafka or samza?
>
> We have deployed a kafka cluster at large scale in a production
> environment. Shall we re-use kafka's streaming feature, or deploy a new
> cluster of samza?
>
> Tha
Hi,
Is there any reason why you haven't performed the upgrade based on the
official docs? Or is this something you're planning to do now?
Thanks,
On Tue, 19 Nov 2019 at 19:52, Daniyar Kulakhmetov
wrote:
> Hi Kafka users,
>
> We updated our Kafka cluster from 1.1.0 version to 2.3.1.
> Message for
oker builds all the data files itself
rather than referring to previously stored files.
Try that and see how it goes.
Thanks,
> From your answer I guess the preferred way is having a replication of 3?
>
>
> -----Original Message-----
> From: M. Manna
> Sent: Saturday, 16.
Hi,
On Sat, 16 Nov 2019 at 19:21, Oliver Eckle wrote:
> Hello,
>
>
>
> having a Kafka cluster running in Kubernetes with 3 brokers and all
> replication (topic, offsets) set to 2.
This sounds strange. You have 3 brokers and replication set to 2. Is this
intentional?
>
> For whatever reason
able to do this
comfortably.
Thanks,
>
>
>
> On Wed, Nov 13, 2019 at 6:23 PM M. Manna wrote:
>
> > On Wed, 13 Nov 2019 at 12:41, Ashutosh singh wrote:
> >
> > > Hi,
> > >
> > > All of a sudden I see under replicated partition in our Kafka cluster
On Wed, 13 Nov 2019 at 12:41, Ashutosh singh wrote:
> Hi,
>
> All of a sudden I see an under-replicated partition in our Kafka cluster
> and it is not getting replicated. It seems it is getting stuck somewhere.
> The in-sync replica is missing only from one of the brokers; it seems
> there is some issue
S");
> return new String(data);
> }
>
> @Override
> public String deserialize(String topic, Headers headers, byte[] data) {
> System.out.println("SERDE WITH HEADERS");
> return new String(data);
> }
>
> @Override
simply puts a consumer wrapper around KafkaConsumer. There
is no change in behaviour otherwise. I take it that you've debugged and
confirmed that it's not calling your overridden deserialize() with headers?
If so, can you link it here for everyone's benefit?
Thanks,
> On 2019/
property over the prop file.
I think you can try the following to get your implementation working:
1) Put the SerDe classes on the classpath
2) Provide your consumer config file
3) Provide key/value deserializer props via the --consumer-property arg
See how that works for you - rough example below.
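(The deserializer class name here is made up, and its jar must be on the
tool's classpath, e.g. via the CLASSPATH env var:)

kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic \
  --consumer.config consumer.properties \
  --consumer-property value.deserializer=com.example.MyHeaderAwareDeserializer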
Thanks,
> Jorg
>
references to it in the
> documentation.
>
> Jorg
>
> On 2019/11/11 13:00:03, "M. Manna" wrote:
> > Hi,
> >
> >
> > On Mon, 11 Nov 2019 at 10:58, Jorg Heymans
> wrote:
> >
> > > Hi,
> > >
> > > I have created a cl
Hi,
On Mon, 11 Nov 2019 at 11:55, Sachin Kale wrote:
> Hi,
>
> We are working on a prototype where we write to two Kafka clusters
> (primary-secondary) and read from one of them (based on which one is
> primary) to increase availability. There is a flag which is used to
> determine which clus
Hi,
On Mon, 11 Nov 2019 at 10:58, Jorg Heymans wrote:
> Hi,
>
> I have created a class implementing Deserializer, providing an
> implementation for
>
> public String deserialize(String topic, Headers headers, byte[] data)
>
> that does some conditional processing based on headers, and then call
Hi,
On Fri, 8 Nov 2019 at 17:19, Jose Manuel Vega Monroy <
jose.mon...@williamhill.com> wrote:
> Hi there,
>
>
>
> I have a question about message order and retries.
>
>
>
> After checking official documentation, and asking your feedback, we set
> this kafka client configuration in each producer:
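(The config itself was cut off in this archive; for reference, a typical
ordering-safe producer setup looks something like the following - values
illustrative:)

# producer
enable.idempotence=true
acks=all
retries=2147483647
max.in.flight.requests.per.connection=5
# without idempotence, max.in.flight.requests.per.connection must be 1
# to avoid reordering on retry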
, for reference
https://www.lightbend.com/blog/monitor-kafka-consumer-group-latency-with-kafka-lag-exporter
https://sematext.com/blog/kafka-consumer-lag-offsets-monitoring/
I hope this helps.
Regards,
>
> Thx
>
> -----Original Message-----
> From: M. Manna
> Sent: Fri
ld advise you to increase
your number of partitions and spread the burst across them. Just like any
other tool, Kafka requires a certain level of configuration to achieve what
you want. I would recommend you increase your partitions and consumers to
spread the load.
Regards,
>
> -----Original Message-----
Hi,
> On 7 Nov 2019, at 09:18, SenthilKumar K wrote:
>
> Hello Experts, we are observing issues in partition(s) when the Kafka
> broker is down & the partition leader broker ID is set to -1.
>
> Kafka Version 2.2.0
> Total No Of Brokers: 24
> Total No Of Partitions: 48
> Replication Factor: 2
mmitted offset. Try that and see how it
goes.
>
> Regards
>
> -----Original Message-----
> From: M. Manna
> Sent: Thursday, 7 November 2019 23:35
> To: users@kafka.apache.org
> Subject: Re: Consumer Lags and receive no records anymore
>
> Consuming not
at 22:03, Oliver Eckle wrote:
> Using kafka-consumer-groups.sh --bootstrap-server localhost:9092
> --describe --group my-app ..
> put the output within the logs .. also it's pretty obvious, because no
> data will flow anymore
>
> Regards
>
> -----Original Message-----
Have you checked your Kafka consumer group status? How did you determine
that your consumers are lagging?
Thanks,
On Thu, 7 Nov 2019 at 20:55, Oliver Eckle wrote:
> Hi there,
>
>
>
> have pretty strange behaviour questioned here already:
> https://stackoverflow.com/q/58650416/7776688
>
>
>
>
How about a high watermark check?
Since consumers consume based on the HWM, presence of the same HWM should be
a good checkpoint, no?
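(A rough sketch of that check, assuming one consumer per cluster and that the
mirror started from offset 0 with no compaction - otherwise equal end offsets
don't imply equal content:)

// srcConsumer/mirrorConsumer are hypothetical consumers, one per cluster;
// partitions is the List<TopicPartition> to compare
Map<TopicPartition, Long> src = srcConsumer.endOffsets(partitions);
Map<TopicPartition, Long> dst = mirrorConsumer.endOffsets(partitions);
for (TopicPartition tp : partitions) {
    if (!src.get(tp).equals(dst.get(tp))) {
        System.out.println(tp + " differs: " + src.get(tp) + " vs " + dst.get(tp));
    }
}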
Regards,
On Mon, 4 Nov 2019 at 22:53, Guillaume Arnaud wrote:
> Hi,
>
> I would like to compare the messages of an original topic with a mirrored
> topic in another
Hi,
Each perf test is based on a set of tuning parameters, e.g. batch.size,
acks, partitions, network latency etc. Your transactions are failing because
your batch has expired (or at least, that's what the log shows). You have to
tune your request timeout and batch.size correctly to improve on the
Hi,
not sure what "tries to communicate with itself" means. Are you talking
about local network loopback?
Also, have you tried SSL debugging using openssl? What did you observe?
The exception is a handshake exception. This is quite common when your cert
validation fails. How have you set up your sig
You should also check out Becket Qin's presentation on producer performance
tuning on YouTube. Both these items should give you all the positives and
negatives of having more/fewer partitions.
Thanks,
On Sat, 26 Oct 2019 at 09:19, Manasvi Gupta wrote:
>
> https://www.confluent.io/blog/how-choose-numbe
wrote:
> Can you point me to the link where I have to check?
>
> On Thu 24 Oct, 2019, 7:54 PM M. Manna, wrote:
>
> > Have you checked the Kafka build 2.3.1 RC2 which everyone is currently
> > voting for ? It’s worth checking for your question...
> >
> > Regar
Have you checked the Kafka build 2.3.1 RC2 which everyone is currently
voting for ? It’s worth checking for your question...
Regards.
On Thu, 24 Oct 2019 at 13:31, Debraj Manna wrote:
> Hi
>
> Does Kafka work with OpenJDK 11? I have seen the below issue which is
> resolved in 2.1.
>
> https://is
ck up.
>
> Currently we have a large number of under replicated partitions as well as
> occurrences of broker failures.
>
> Thank you for the help!
>
> On Sun, Oct 20, 2019 at 5:20 PM M. Manna wrote:
>
> > It looks like the issue is fixed in later releases. And you’r
Everything has an impact. You cannot keep churning loads of messages under
the same operating conditions and expect nothing to change.
You have to find out (via load testing) an optimum operating condition
(e.g. partitions, batch.size etc.) for your producer/consumer to work
correctly. Remember that
01:13, M. Manna wrote:
> Hello,
>
> I have recently had some message loss for a consumer group under kafka
> 2.3.0.
>
> The client I am using is still in 2.2.0. Here is how the problem can be
> reproduced,
>
> 1) The messages were sent to 4 consumer groups, 3 of them were
Hi All,
https://github.com/SourceLabOrg/kafka-webview
Not sure if anyone has come across this. A very nice tool indeed, and it has
a Spring Boot baseline.
Another one is Kafka Magic viewer: http://www.kafkamagic.com/
Are they worth covering in cwiki?
Thanks,
Hello,
I have recently had some message loss for a consumer group under kafka
2.3.0.
The client I am using is still in 2.2.0. Here is how the problem can be
reproduced,
1) The messages were sent to 4 consumer groups, 3 of them were live and 1
was down
2) When the consumer group came back online,
It looks like the issue is fixed in later releases, and you're running a
very old Kafka version TBF.
Would an upgrade help? If not, and if you've got replication enabled
(RF >= 3), you could try deleting broker files and recreating them by
restarting the affected broker.
Thanks,
On Sun, 20 O
In addition to what Peter said, I would recommend that you stop and delete
all data logs (if your replication factor is set correctly). Upon restart,
they'll be recreated. This is of course the last thing to do if you
cannot determine the root cause.
The measure works well for me with my k8s
Please check your advertised.listeners and listeners config.
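(A hedged example for the usual case - broker on the host, client in a
container; host.docker.internal is a Docker Desktop convenience name:)

# server.properties on the host
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://host.docker.internal:9092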
Thanks,
On Thu, 17 Oct 2019 at 22:13, Wang, Shuo wrote:
> Hi,
>
> I have a question regarding connecting to kafka broker from docker.
>
> I have zookeeper and kafka broker running on my local machine.
> I have a docker container runni
Hello Peter,
Have you tried setting a higher value for the connection timeout?
I am running 2.3.0 with 30s for ZK sessions and 90s for ZK connections.
I haven't checked 2.3.1 yet; looks like you may have found something worth
checking before upgrading.
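(i.e. in the broker's server.properties, mirroring the 30s/90s above:)

zookeeper.session.timeout.ms=30000
zookeeper.connection.timeout.ms=90000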
Regards,
On Tue, 8 Oct 2019 at 21:41, Peter
If I get your question right, your concern isn't about auto.offset.reset -
it's the partition assignment.
A consumer group represents parallelism. It's well documented in the
official Kafka docs. Each consumer (in a consumer group) gets a fair share
of the job (i.e. # partitions for a topic subscription). Due
influence to have this kind of symptoms?
>
> Also the JMX-Metrics of Kafka didn't report any under-replicated
> partitions... But when running the kafka-topics.sh with
> --under-replicated-partitions it showed the ones from this topic.
>
>
> On 01-Oct-19 10:58 PM, M. Manna wrote:
> >> Topic: my_topic Partition: 3    Leader: 1    Replicas: 2,1,3    Isr: 1
> >> Topic: my_topic Partition: 4    Leader: 1    Replicas: 3,2,1    Isr: 1
> >> Topic: my_topic Partition: 5    Leader: 1    Replicas: 1,3,2    Isr: 1
> >> Topic: my_
1.0.1 is a very old version of Kafka. Is there any chance you would consider
a rolling upgrade?
Regards,
On Sun, 29 Sep 2019 at 22:43, Jamie wrote:
> Have you got any producers using an older version of Kafka? Does the
> broker with high CPU usage contain the leader of any topics which don't
> h