Hi,
Recently we enabled Kafka quota management for our Kafka clusters. We are
looking for Kafka metrics that can be used for alerting on whether a Kafka
broker throttles requests based on quota.
There are a few throttle-related metrics in Kafka, but none of them can
accurately tell whether the
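If it helps, one option is to also watch the client-side throttle metrics: the producer reports produce-throttle-time-avg (and the consumer a fetch equivalent), which at least shows whether requests are currently being throttled. A minimal Scala sketch, with the broker address and serializers as placeholders:

import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.producer.KafkaProducer

// Placeholder connection settings; adjust for your cluster.
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
val producer = new KafkaProducer[String, String](props)

// "produce-throttle-time-avg" is the producer metric reporting how long
// requests were throttled by quotas; a sustained non-zero value means the
// broker is applying the quota.
producer.metrics().asScala
  .filter { case (name, _) => name.name() == "produce-throttle-time-avg" }
  .foreach { case (name, metric) => println(s"${name.name()} = ${metric.metricValue()}") }

For broker-side alerting, the quota section of the monitoring docs lists per-user/per-client throttle-time attributes under the kafka.server Produce and Fetch MBeans, if I remember correctly.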
Hello,
Does anyone know where to find good documentation on connecting Kafka to
QRadar, apart from IBM's?
Also, is there a compatibility document that says which Kafka version works
with which QRadar version?
Thank you
Olivier
Hi,
recently our team tried to upgrade Kafka from 0.10.0.1 to 2.2.0. We did so
by following the upgrade guide at http://kafka.apache.org/documentation.html#upgrade. After
we set inter.broker.protocol.version to 2.2.0 on all brokers, we found that the
producer's produce rate became very slow and many messages took more than
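For reference, this is roughly what the upgrade guide describes for the broker configs during the two rolling bounces (a sketch as I understand it; the exact version strings depend on your starting cluster):

# first rolling bounce, while upgrading the broker code
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0

# second rolling bounce, after all brokers run 2.2.0
inter.broker.protocol.version=2.2
# log.message.format.version can stay at 0.10.0 until all clients are upgraded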
It's done. Sorry for the confusion.
The KIP table, however, showed that it had been accepted. But yes, it's better
to keep all places consistent.
Thanks,
On Fri, 21 Jun 2019 at 03:27, Jeff Widman wrote:
> The KIP linked to from the JIRA shows the KIP as still under discussion...
> if it's been
The KIP linked to from the JIRA shows the KIP as still under discussion...
if it's been voted/approved, then can you please update the wiki page?
On Thu, Jun 20, 2019 at 6:45 PM M. Manna wrote:
> Hello,
>
> We’ve been waiting for this PR for a while.
>
>
Hello,
We’ve been waiting for this PR for a while.
https://github.com/apache/kafka/pull/6771
Could this be reviewed for the new release? This is important for our project.
Thanks,
The observed behavior is expected.
> For example, if we send 2615 events to an empty topic, we expect the end of
>> the topic to be offset 2616.
This is a wrong expectation. Even if Kafka behaves that way for non-EOS,
there is no "contract" that guarantees that offsets are consecutive.
Kafka
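One way to see this concretely: with exactly-once, the transactional producer also writes commit/abort markers into the partition, and those markers occupy offsets, so the end offset can legitimately be larger than the number of application records. A minimal sketch that compares the two (broker address, topic name, and partition are placeholders):

import java.util.Properties
import java.time.Duration
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// Placeholder connection settings.
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("group.id", "offset-check")
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("isolation.level", "read_committed")

val consumer = new KafkaConsumer[String, String](props)
val tp = new TopicPartition("my-topic", 0)
consumer.assign(java.util.Collections.singletonList(tp))
consumer.seekToBeginning(java.util.Collections.singletonList(tp))

// Broker-reported end offset vs. the number of records actually readable.
val endOffset = consumer.endOffsets(java.util.Collections.singletonList(tp)).get(tp)
var count = 0L
var records = consumer.poll(Duration.ofSeconds(1))
while (!records.isEmpty) {
  count += records.count()
  records = consumer.poll(Duration.ofSeconds(1))
}
println(s"end offset = $endOffset, committed records read = $count")
consumer.close()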
> The two streams were read in separately:
> instead of together:
If you want to join two streams, reading both topics separately sounds
correct.
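A minimal join sketch with the Scala DSL, in case it helps (the topic names, the 5-minute window, and the string value format are placeholders):

import java.time.Duration
import org.apache.kafka.streams.kstream.JoinWindows
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder

val builder = new StreamsBuilder()
// Each input topic is read as its own KStream...
val stream1 = builder.stream[String, String]("topic1")
val stream2 = builder.stream[String, String]("topic2")
// ...and the two streams are then joined on key within a time window.
val joined = stream1.join(stream2)(
  (v1, v2) => s"$v1/$v2",
  JoinWindows.of(Duration.ofMinutes(5))
)
joined.to("joined-output")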
> There are twenty partitions per topic. It seems as if it is not reading
> equally fast from all topic partitions.
This should not affect the
Hi,
We are using Kafka Streams 2.2.1 and Kafka 2.2.0, and we noticed that the
end offset number is larger than the numbers of events sent to a topic if
we set *processing guarantee* as *exactly once* in a Kafka Streams app.
For example, if we send 2615 events to an empty topic, we expect the end
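For context, the processing guarantee is set roughly like this (a sketch; application id and bootstrap servers are placeholders):

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
// processing.guarantee = exactly_once
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)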
I am new to using Kafka and wondering if any of the experts out there can let me
know how I can run a standalone network unit test for the Kafka server. I see the
class SocketServerTest in the unit tests for networking, but I am not able to figure
out the commands to use it, or whether I need to write a test program.
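If it helps, a single test class can usually be run from a checkout of the Kafka sources with Gradle's test filter; the exact module and class path below are my assumption of where SocketServerTest lives:

./gradlew core:test --tests kafka.network.SocketServerTest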
Could you tell me a little more about the delays caused by the record caches and
how I can disable them?
If I could summarize my problem:
- When a new record arrives with a timestamp greater than all records sent before,
I expect *all* of the old windows to close.
- Expiry of the windows should depend only on the event time.
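In case it is useful, the record caches can be disabled by setting the cache size to zero, so updates are forwarded downstream immediately instead of being buffered (a minimal sketch; application id and bootstrap servers are placeholders):

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
// cache.max.bytes.buffering = 0 disables record caching for the whole app
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "0")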
Hi!
You might also want to set MAX_TASK_IDLE_MS_CONFIG =
"max.task.idle.ms" to a non-zero value. This will instruct Streams to
wait the configured amount of time to buffer incoming events on all
topics before choosing any records to process. In turn, this should
cause records to be processed in
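A one-line sketch of that setting (the 1000 ms value is only an illustration):

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
// max.task.idle.ms: how long a task waits for data on all input partitions
props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, "1000")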
Hi!
In addition to setting the grace period to zero (or some small
number), you should also consider the delays introduced by record
caches upstream of the suppression. If you're closely watching the
timing of records going into and coming out of the topology, this
might also spoil your
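A minimal sketch of the combination being discussed, assuming a windowed count (the topic names and the 1-minute window are placeholders, and depending on the kafka-streams-scala version the suppress operator may only be available on the underlying Java KTable):

import java.time.Duration
import org.apache.kafka.streams.kstream.{Suppressed, TimeWindows}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder

val builder = new StreamsBuilder()
builder.stream[String, String]("input") // placeholder topic
  .groupByKey
  // Zero grace period: a window closes as soon as stream time passes its end.
  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(Duration.ZERO))
  .count()
  // Emit only the final result per window, once it has closed
  // (if your Scala wrapper lacks suppress, the same call exists on the Java KTable).
  .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
  .toStream
  .map((windowedKey, count) => (windowedKey.key(), count))
  .to("counts")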
On 2019/06/04 12:58:10, "M. Manna" wrote:
> Kafka cannot be run on Windows in production. There are problems with
> memory-map allocation/release which result in a fatal shutdown. On Linux
> it's allowed, but on Windows it's prevented.
>
> You can reproduce this by setting a small log
It seems like there were multiple issues:
1. The two streams were read in separately:
val stream1: KStream[String, String] = builder.stream[String,
String](Set("topic1"))
val stream2: KStream[String, String] = builder.stream[String,
String](Set("topic2"))
instead of together:
val
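Presumably the "together" version that got cut off reads both topics with a single call, something like this (my reconstruction, not the original code):

val allEvents: KStream[String, String] = builder.stream[String,
String](Set("topic1", "topic2"))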
For reference, the cause of this turned out to be a corrupt timeindex file
on the earliest segment in the partition. Although Kafka didn't flag the
files as corrupt, they clearly weren't correct, as they had a file size
of 12 bytes instead of several MB. It was fixed by stopping Kafka, removing
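For anyone hitting the same thing, a quick way to spot such files is to scan the log directories for suspiciously small .timeindex files (a sketch; the log directory path and the 1 KB threshold are placeholders):

import java.nio.file.{Files, Paths}
import scala.jdk.CollectionConverters._

// Walk the Kafka log directory and print .timeindex files under 1 KB.
val logDir = Paths.get("/var/lib/kafka/data")
Files.walk(logDir).iterator().asScala
  .filter(p => p.toString.endsWith(".timeindex") && Files.size(p) < 1024)
  .foreach(p => println(s"$p -> ${Files.size(p)} bytes"))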
You first mention that you start 4 consumers, and later that only 4 out
of 8 consumers read data. This is a little confusing.
About additional partitions and the quote from the docs: the quote is
about broker behavior. When you create a topic, for each topic partition
replica, a broker is selected