Hard to achieve.
I guess a naive approach would be to use a `flatTransform()` to
implement a filter that drops all records that are not in the desired
time range.
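A minimal sketch of that filtering rule in plain Java, assuming records carry an event timestamp in milliseconds. Inside Kafka Streams this logic would live in the transformer; here the `TimestampedRecord` type and `filter` helper are illustrative stand-ins, not Streams API:

```java
import java.util.List;
import java.util.stream.Collectors;

public class TimeRangeFilter {
    // Stand-in for a Kafka record with an event timestamp.
    record TimestampedRecord(String key, String value, long timestampMs) {}

    // Keep only timestamps in the half-open range [startMs, endMs).
    static boolean inRange(long ts, long startMs, long endMs) {
        return ts >= startMs && ts < endMs;
    }

    // Drop every record whose timestamp is outside the desired range.
    static List<TimestampedRecord> filter(List<TimestampedRecord> records,
                                          long startMs, long endMs) {
        return records.stream()
                .filter(r -> inRange(r.timestampMs(), startMs, endMs))
                .collect(Collectors.toList());
    }
}
```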
pause() and resume() are not available in Kafka Streams, but only on the
KafkaConsumer (the Spring docs you cite are also about
We are upgrading our Spring Boot applications to Spring Boot 2.6.1. Spring
Boot 2.6.1 upgrades our Kafka dependency from 2.7.1 to 3.0.0.
After upgrading I'm getting an AccessDeniedException on all my tests using
@EmbeddedKafka.
Caused by: java.nio.file.AccessDeniedException:
We share topics between different tenants. Would it be possible to
implement filtering on the Kafka side that allows a consumer to filter a
topic for a certain key?
The idea is that this consumer only gets messages with the specified key to
save network bandwidth as well as (possibly) disk io on
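For what it's worth, Kafka brokers do not filter records by key: a consumer receives every record of the partitions it is assigned, so a key filter has to run client-side and the fetch bandwidth is still spent. A sketch of such client-side filtering; the `Message` type and `forTenant` helper are illustrative stand-ins, not Kafka API:

```java
import java.util.List;
import java.util.stream.Collectors;

public class KeyFilter {
    // Stand-in for a polled ConsumerRecord.
    record Message(String key, String value) {}

    // Keep only the messages whose key matches the tenant's key;
    // everything else has already crossed the network and is dropped here.
    static List<Message> forTenant(List<Message> polled, String tenantKey) {
        return polled.stream()
                .filter(m -> tenantKey.equals(m.key()))
                .collect(Collectors.toList());
    }
}
```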
Hi Luke,
thanks for the hints. This helps a lot already.
We already use assign() as we manage offsets on the consumer side. Currently
we only have one partition and simply assign a stored offset on partition 0.
For multiple partitions, is it the correct behaviour to simply assign to
partition
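A sketch of how the single-partition pattern generalises: keep one stored offset per partition, assign all partitions, then seek each partition to its own offset. Plain ints stand in for `TopicPartition`, and `seekPlan` is a hypothetical helper; the real calls would be `consumer.assign(partitions)` followed by one `consumer.seek(tp, offset)` per partition:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiPartitionResume {
    /**
     * Computes the offset to seek to for each assigned partition,
     * given externally stored per-partition offsets.
     */
    static Map<Integer, Long> seekPlan(List<Integer> partitions,
                                       Map<Integer, Long> storedOffsets) {
        Map<Integer, Long> plan = new HashMap<>();
        for (int p : partitions) {
            // Partitions with no stored offset (e.g. newly added ones)
            // start from the beginning.
            plan.put(p, storedOffsets.getOrDefault(p, 0L));
        }
        return plan;
    }
}
```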
Dear all,
Kafka supports adding new partitions to a topic dynamically. So suppose that
initially I have a topic T with two partitions P0, P1 and a key space of three
keys K0, K1, K2. Suppose further that I am using some kind of hash partitioner
modulo 2 (number of partitions) at the producer
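The concern can be illustrated with a toy partitioner (`Math.floorMod` of `hashCode()` here is only a stand-in for Kafka's real murmur2-based default partitioner): once the modulus grows with the partition count, many existing keys map to a different partition than before, so old and new records of the same key can end up in different partitions.

```java
public class RepartitionEffect {
    // Toy hash partitioner of the form hash(key) % numPartitions.
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }
}
```

With this toy hash, for example, "K0" maps to partition 1 under modulo 2 but to partition 0 under modulo 3, so per-key ordering across the expansion is lost.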
Hi Christian,
Answering your question below:
> Let's assume we just have one topic with 10 partitions for simplicity.
We can now use the environment id as a key for the messages to make sure
the messages of each environment arrive in order while sharing the load on
the partitions.
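A sketch of that keying idea: because the key-to-partition mapping is deterministic, every message of one environment lands in the same partition (preserving per-environment order) while different environment ids spread across the 10 partitions. Again, `Math.floorMod` of `hashCode()` is only a stand-in for Kafka's default partitioner:

```java
public class EnvKeying {
    static final int NUM_PARTITIONS = 10;

    // Same environment id -> same partition, every time; different
    // environment ids spread over the available partitions.
    static int partitionForEnv(String envId) {
        return Math.floorMod(envId.hashCode(), NUM_PARTITIONS);
    }
}
```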
> Now we want
We have a single-tenant application that we deploy to a Kubernetes cluster
in many instances.
Every customer has several environments of the application. Each
application lives in a separate namespace and should be isolated from other
applications.
We plan to use Kafka to communicate inside an