Hi,
I encountered a problem and I don't know if it's a bug.
The description is as follows (Kafka version 3.3.2 with KRaft):
1. one topic has two replicas, 0 and 1; 0 is the leader and 1 is the follower
2. at some point, the disk on broker 0 fails (Read-only file system),
but the log is not taken offline in th
Kafka itself includes Kafka Streams (
https://kafka.apache.org/31/documentation/streams/), so you can do this
processing in Kafka. There's a Filter transformation that would be a good
place to start:
https://kafka.apache.org/31/documentation/streams/developer-guide/dsl-api.html#stateless-transforma
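A minimal sketch of that filter approach, assuming hypothetical topic names
("temperature-readings", "temperature-alerts") and made-up control limits;
not anything confirmed by this thread:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Produced;

    public class LimitFilter {
        public static void main(String[] args) {
            final double upper = 30.0, lower = 10.0;  // hypothetical control limits

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, Double> readings = builder.stream(
                    "temperature-readings",            // hypothetical input topic
                    Consumed.with(Serdes.String(), Serdes.Double()));

            // Keep only readings that breach a limit, and route them to a
            // separate topic that a notification service could subscribe to.
            readings.filter((sensorId, temp) -> temp > upper || temp < lower)
                    .to("temperature-alerts", Produced.with(Serdes.String(), Serdes.Double()));

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "limit-filter-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            new KafkaStreams(builder.build(), props).start();
        }
    }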
Hi Emily,
Nope, Kafka doesn't have that capability built in; it's just a distributed
log that's great for streaming events. However, you can easily write a
program that consumes those events from Kafka and then does what you want
(see the sketch below) :)
Cheers,
Liam
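A rough sketch of the kind of program Liam describes, using the Java
consumer; the topic name, limits, and the sendAlert helper are placeholders,
not anything from this thread:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class AlertingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("group.id", "alerting-demo");            // placeholder
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("temperature-readings"));  // hypothetical topic
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        double temp = Double.parseDouble(rec.value());
                        if (temp > 30.0 || temp < 10.0) {             // hypothetical limits
                            sendAlert(rec.key(), temp);  // notification logic lives outside Kafka
                        }
                    }
                }
            }
        }

        // Placeholder: wire this to your mail or SMS provider of choice.
        static void sendAlert(String sensorId, double temp) {
            System.out.printf("ALERT: sensor %s reported %.1f%n", sensorId, temp);
        }
    }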
On Tue, 3 May 2022 at 06:30, Emily Schepisi
wrote:
Hello,
I have a question about Kafka. If I put an upper and lower control limit on
the data, and the log records an event where the upper or lower control
limit is breached, will Kafka be able to send a notification via email or
text message to the user?
Example: I'm tracking the daily temperature
Hello Team,
I am using kafka_2.11-1.1.0 in my prod environment. I just wanted to ask:
is any performance-related issue or glitch possible on the leap-year day,
Feb 29 2020, and what precautions could we take to avoid it?
Regards,
Iqbal
Hi Gagan,
If you want to read a message, you need to poll the message from the
broker. The brokers have only a very limited notion of message content. They
only know that a message has a key, a value, and some metadata, but they
are not able to interpret the contents of those message components. The
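To illustrate the polling point, a fragment assuming an already-subscribed
KafkaConsumer<String, String> named 'consumer':

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    // 'consumer' is an already-subscribed KafkaConsumer<String, String>.
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> rec : records) {
        // The broker just stores these components; interpreting them is the client's job.
        System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                rec.partition(), rec.offset(), rec.key(), rec.value());
    }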
Hi team,
Say we have a client which has pushed a message to a topic. The message has
a simple structure:
Task - Time of task
Send an email - 1530
Now say that this message is consumed by a consumer subscribed to this
topic.
Since the topic already has storage, what I intend to do is just read the
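On the push side, a producer record carrying that task/time structure might
look like this; the topic name and serializer choices are assumptions:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TaskProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key = task, value = time of task, mirroring the structure above.
                producer.send(new ProducerRecord<>("tasks", "Send an email", "1530"));
            }
        }
    }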
lon...@outlook.com
Subject: Re: [kafka question] leader of partitions is none
Hi,
I had similar problems where new topics were assigned no leader or, in some
cases, a -1 leader. In my case, it was due to a very busy ZK cluster. Once I
removed the load on the ZK cluster, I've rest
Hi,
I had similar problems where new topics were assigned no leader or, in some
cases, a -1 leader. In my case, it was due to a very busy ZK cluster. Once I
removed the load on the ZK cluster, I restarted (rolling restart) the Kafka
brokers and everything went back to normal. However, I was worried about
restarting th
Hi, I need some help with Kafka. Kafka is very nice software, and we have
been using it for a long time. But now I have encountered a problem. When I
create a topic on the Kafka server cluster, the leader of every partition is
none, like this:
[inline screenshot of the topic description output; image not rendered]
The command is:
That would be really useful. Thanks for the write-up, Guozhang. I will give
it a shot and let you know.
On Tue, Apr 7, 2015 at 10:06 AM, Guozhang Wang wrote:
> Jack,
>
> Okay I see your point now. I was originally thinking that in each run, you
> 1) first create the topic, 2) start producing to
Jack,
Okay, I see your point now. I was originally thinking that in each run you
1) first create the topic, 2) start producing to the topic, 3) start
consuming from the topic, and then 4) delete the topic and stop producers /
consumers before completing, but it sounds like you actually only create the
How about the first run then? If we use "largest" as the "auto.offset.reset"
value, what value will these consumers get? I assume it will point to the
latest position in the log. Is that true? Just so you know, we can't have a
warm-up run so that the later runs can use the offset committed by that run.
T
Did you turn on automatic offset committing? If yes, then this issue should
not happen, as later runs will just consume data from the last committed
offset.
Guozhang
On Mon, Apr 6, 2015 at 5:16 PM, Jack wrote:
> Hi Guozhang,
>
> When I switched to auto.offset.reset to smallest, it will work. Howe
Hi Guozhang,
When I switched auto.offset.reset to smallest, it worked. However, it
generates a lot of data and slows down the verification.
Thanks,
-Jack
On Mon, Apr 6, 2015 at 5:07 PM, Guozhang Wang wrote:
> Jack,
>
> Could you just change "auto.offset.reset" to smallest and
Jack,
Could you just change "auto.offset.reset" to smallest and see if this issue
goes away? It is not related to the producer end.
Guozhang
On Mon, Apr 6, 2015 at 4:14 PM, Jack wrote:
> Hi Guozhang,
>
> Thanks so much for replying, first of all.
>
> Here is the config we have:
>
> group.id ->
Hi Guozhang,
Thanks so much for replying, first of all.
Here is the config we have:
group.id -> 'some unique id'
zookeeper.connect -> 'zookeeper host'
auto.commit.enabled -> false
auto.offset.reset -> largest
consumer.timeout.ms -> -1
fetch.message.max.bytes -> 10M
So it seems like we need to
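Rendered as Java code, that configuration would look roughly like this for
the 0.8 high-level consumer (host and group values are placeholders; note
that the actual 0.8 property name is auto.commit.enable, not
auto.commit.enabled):

    import java.util.Properties;

    Properties props = new Properties();
    props.put("group.id", "some-unique-id");        // placeholder
    props.put("zookeeper.connect", "zkhost:2181");  // placeholder
    props.put("auto.commit.enable", "false");       // 0.8 property name
    props.put("auto.offset.reset", "largest");
    props.put("consumer.timeout.ms", "-1");
    props.put("fetch.message.max.bytes", String.valueOf(10 * 1024 * 1024));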
Jack,
Your theory is correct if your consumer config sets auto.offset.reset to
largest and you do not have any committed offsets from before. Could you list
your consumer configs and see if that is the case?
Guozhang
On Mon, Apr 6, 2015 at 3:15 PM, Jack wrote:
> Hi folks,
>
> I have a quick question.
Hi folks,
I have a quick question.
We are using 0.8.1 and running into this weird problem. We are using the
HighLevelConsumer for this topic, which we created with 64 partitions.
In our service, we first create a Consumer object as usual, and then we
call 'createMessageStreams' wi
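For reference, the 0.8 high-level consumer flow being described looks
roughly like this; the topic name is hypothetical, and 'props' stands for
the configuration Jack lists earlier in this thread:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    // 'props' holds the consumer configuration shown earlier in this thread.
    Properties props = new Properties();
    ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

    // Ask for one stream per partition (64 partitions, per the question).
    Map<String, Integer> topicCountMap = new HashMap<>();
    topicCountMap.put("my-topic", 64);  // hypothetical topic name

    Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(topicCountMap);
    // Each KafkaStream would then be drained by its own thread.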
Gwen,
Thanks.
1. I had a feeling about zookeeper being the potential bottleneck, but I wasn't
sure.
2. Good to know.
From: Gwen Shapira [gshap...@cloudera.com]
Sent: Monday, November 24, 2014 2:47 PM
To: users@kafka.apache.org
Subject: Re: Two
Hi Casey,
1. There's some limit based on the size of the zookeeper nodes; not sure
exactly where it is, though. We've seen 30-node clusters running in production.
2. For your scenario to work, the new broker will need to have the same
broker id as the old one - or you'll need to manually re-assign partiti
Hello,
First, is there a limit to how many Kafka brokers you can have?
Second, if a Kafka broker node fails and I start a new broker on a new node, is
it correct to assume that the cluster will copy data to that node to satisfy
the replication factor specified for a given topic? In other words
Millions of messages per day (with each message being a few bytes) is not
really 'Big Data'. Kafka has been tested at a million messages per second.
The answer to all your questions, IMO, is "it depends".
You can start with a single instance (a single-machine installation). Let
your producer send messag
Hi,
I am planning to use Apache Kafka 0.8 to handle millions of messages per day.
Now I need to plan the environment:
(i) How many Topics to be created?
(ii) How many partitions/replications to be created?
(iii) How many Brokers to be created?
(iv) How many consumer instances in consum