While Kafka Streams does not support monthly windows out of the box, it
is possible to define your own custom windows.
You can find an example that defines "daily windows", including timezone
support, on GitHub:
https://github.com/confluentinc/kafka-streams-examples/blob/5.3.1-post/src/test/java/io
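The heart of a timezone-aware "daily window" is mapping each event timestamp to the start of its calendar day in a given zone. The following is a minimal, self-contained sketch of that calculation using only `java.time` (it is not the linked example; the class and method names are illustrative):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class DailyWindowStart {
    // Returns the epoch-millis start of the calendar day containing
    // `timestampMs` in time zone `zone`. A custom Windows implementation
    // would use this value as the window start.
    static long dayStartMs(long timestampMs, ZoneId zone) {
        LocalDate day = Instant.ofEpochMilli(timestampMs).atZone(zone).toLocalDate();
        return day.atStartOfDay(zone).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        ZoneId berlin = ZoneId.of("Europe/Berlin");
        // 2019-11-21T02:30:00Z is already Nov 21 in Berlin (UTC+1).
        long ts = Instant.parse("2019-11-21T02:30:00Z").toEpochMilli();
        long start = dayStartMs(ts, berlin);
        System.out.println(start);
        // Start of Nov 21 in Berlin is 2019-11-20T23:00:00Z.
        System.out.println(Instant.ofEpochMilli(start));
    }
}
```

Note how the window boundary lands at 23:00 UTC, not midnight UTC — that zone offset is exactly what plain fixed-size windows cannot express.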
It's going to be hard to find out which client it is. This is a known
issue in general, and there is a KIP that addresses it:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
The root cause for the error you see seems to be, th
On Thu, Nov 21, 2019 at 4:25 PM Peter Bukowinski wrote:
> How many partitions are on each of your brokers? That’s a key factor
> affecting shutdown and startup time.
>
The test hosts run about 384 partitions each (7 topics * 128 partitions
each * 3x replication / 7 brokers). The largest prod clu
How many partitions are on each of your brokers? That’s a key factor affecting
shutdown and startup time. Even if it is large, though, I’ve seen a notable
reduction in shutdown and startup times as I’ve moved from kafka 0.11 to 1.x to
2.x.
I’m currently doing a rolling restart of a 150-broker c
I've been looking at upgrading my cluster from 1.1.0 to 2.3.1. While
testing, I've noticed that shutting brokers down seems to take consistently
longer on 2.3.1. Specifically, the process of 'creating snapshots' seems to
take several times longer than it did on 1.1.0. On a small testing setup,
the
Hi All,
All the docs that I was able to find describe the rolling upgrade,
but I didn't find any docs that describe how to perform a non-rolling upgrade.
So if the system can afford downtime, how in this case do we upgrade Kafka
from version 0.10.0 to 2.3.x?
Is it still required to do a few restarts, or we can
Hi,
Maybe this library will be useful for someone:
https://github.com/svladykin/ReplicaMap
Future plans:
- Optimistic transactions: update multiple keys in a single TX
- Sharding: distribute the partitions across multiple clients
Sergi
A different approach would be to integrate Apache DataSketches
(https://datasketches.apache.org/), which have mathematical proofs behind them.
Using a DataSketch you can capture the unique members for any given time period in
a very small data object and be able to aggregate them (even though u
Hi Chintan,
You cannot specify time windows based on a calendar object like months.
In the following, I suppose the keys of your records are user IDs. You
could extract the months from the timestamps of the events and add
them to the key of your records. Then you can group the records by key
and
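The suggestion above can be sketched as follows: derive a "yyyy-MM" month bucket from the event timestamp and append it to the user-ID key, so that a plain group-by-key and count over the composite key yields per-month results. The names below are illustrative, not from the original thread:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class MonthKey {
    // Formats an epoch-millis timestamp as its UTC calendar month, e.g. "2019-11".
    static final DateTimeFormatter MONTH =
            DateTimeFormatter.ofPattern("yyyy-MM").withZone(ZoneOffset.UTC);

    // Builds the composite key, e.g. ("alice", <Nov 2019 ts>) -> "alice|2019-11".
    static String compositeKey(String userId, long timestampMs) {
        return userId + "|" + MONTH.format(Instant.ofEpochMilli(timestampMs));
    }

    public static void main(String[] args) {
        long ts = Instant.parse("2019-11-21T16:25:00Z").toEpochMilli();
        System.out.println(compositeKey("alice", ts));
    }
}
```

In a Streams topology the same logic would sit in a `selectKey`/`map` step before the grouping; note that this buckets by calendar month rather than by a fixed-size window, which is precisely what the built-in windows cannot do.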
Hi,
We have a use case to capture number of unique users per month. We planned
to use windowing concept for this.
For example, group events from input topic by user name and later sub group
them based on time window. However i don't see how i can sub group the
results based on particular month, s
Hello,
A final update on this. I found that there is an open transaction causing the
LSO to be stuck at offset 10794778. Similar to this stackoverflow issue:
https://stackoverflow.com/questions/56643907/manually-close-old-kafka-transaction
Despite using the same pool of transactional IDs this o
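For anyone debugging a similar stuck LSO, these are the standard Kafka settings involved (a sketch for orientation, not configuration taken from the original thread): a `read_committed` consumer stops at the LSO, and the transaction timeouts bound how long an abandoned transaction can hold it back.

```properties
# Consumer: only read up to the last stable offset (LSO); a hanging
# open transaction makes such a consumer appear stuck.
isolation.level=read_committed

# Producer: how long the coordinator waits before proactively aborting
# an in-flight transaction from this producer (default 60000).
transaction.timeout.ms=60000

# Broker: upper bound on the transaction timeout producers may request
# (default 15 minutes).
transaction.max.timeout.ms=900000

# Broker: how long before an idle transactional.id is expired
# (default 7 days).
transactional.id.expiration.ms=604800000
```

Reusing a transactional.id, as the author does, normally fences the older producer and aborts its open transaction on `initTransactions()`, which is why a stuck LSO despite ID reuse is surprising.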
Hi Experts,
I use Kafka 0.11.2
I have an issue where the Kafka logs are bombarded with the following error:
ERROR [KafkaApi-14733] Error when handling request
{replica_id=-1,max_wait_time=0,min_bytes=0,max_bytes=2147483647,topics=[{topic=my_topic,partitions=[{partition=22,fetch_offset=1297798,max