Hi Guozhang,
I want to get the latest offset; the code is as follows:
consumer.assign(topicPartitionList);
consumer.seekToEnd(topicPartitionList);
long offset = consumer.position(topicPartition);
I note that the topic is marked for deletion but "delete.topic.enable" is not
set to true.
Maybe it cause t
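For reference, a minimal self-contained sketch of the pattern above (assuming the 0.10.x Java client and a placeholder broker address and topic; note that seekToEnd() is lazy, and the seek is only actually performed when position() is called):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LatestOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "offset-probe");            // hypothetical group id
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToEnd(Collections.singletonList(tp));
            // position() forces the lazy seek to happen and returns the
            // log-end offset of the assigned partition.
            long endOffset = consumer.position(tp);
            System.out.println("latest offset: " + endOffset);
        }
    }
}
```

This requires a running broker, so it is a sketch of the call sequence rather than something runnable in isolation.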
In addition, our soon-to-be-released JDBC sink connector uses the
Connect framework to do things that are kind of annoying to do
yourself:
* Convert data types
* Create tables if needed; add columns to tables if needed based on
the data in Kafka
* Support for both insert and upsert
* Configurable b
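For illustration, a sketch of what such a sink configuration might look like (property names taken from the Confluent JDBC sink connector; the topic, connection URL, and name are placeholders):

```properties
name=jdbc-sink-example
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=my-topic
# placeholder connection string
connection.url=jdbc:postgresql://localhost:5432/mydb
# create tables / add columns automatically based on the record schema
auto.create=true
auto.evolve=true
# "insert" or "upsert"
insert.mode=upsert
pk.mode=record_key
```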
Hi Mathieu,
I'm cc'ing Ewen for answering your question as well, but here are my two
cents:
1. One benefit of piping the end result from KS to KC rather than using
.foreach() in KS directly is that you get loose coupling between
data processing and data copying. For example, for the latter app
Nicolas,
The reason we do not yet have a windowed KStream-KTable join is that we do
not materialize the KStream but only the KTable according to our first join
semantics design just to keep the scope small. But it is not impossible to
add if we feel it is a common use case that non-windowed KStrea
Hi,
Kafka Streams is under heavy development, including better documentation
etc. Maybe the docs do not explain all the internals yet. We are also
working on improving the wiki documentation
(https://cwiki.apache.org/confluence/display/KAFKA/Index) -- the wiki
covers more of the internal design (some stuff mi
Hey Matthias,
Thanks for the quick reply! Unfortunately that's still a very unsatisfying
answer and I'm hoping you or someone else can shed a bit more light on the
internals here. First of all, I've read through the documentation (some parts
closer than others, so 100% possible I flat out misse
I'm trying to track down an issue with one of our consumers. There are 4
threads (in 4 separate processes) in the same consumer group, which will
run happily for a few hours before inevitably one of them crashes with the
following exception:
org.apache.kafka.clients.consumer.CommitFailedException:
Hello Yuanjia,
Could you share your code example on calling consumer.position()? Is the
partition that you are getting the offset from assigned to the consumer?
Guozhang
On Wed, Jul 20, 2016 at 11:50 PM, yuanjia8...@163.com
wrote:
> Hi,
> With kafka-clients-0.10.0.0, I use KafkaConsumer.
Kafka Users,
Are there any settings that limit the number of producers per topic per
broker? I am experimenting with a single broker with around 500 producers and
this works fine, but increasing this further to 600 or 700 producers suddenly
makes the Kafka broker stop functioning. CPU uti
Hi,
you answered your question correctly yourself (i.e., an internal
topic is created in groupBy() to repartition the data on words).
I cannot add more to explain how it works.
You might want to have a look here for more details about Kafka Streams
in general:
http://docs.confluent.io/3.0
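To make the repartitioning concrete, here is a hedged word-count sketch (using the newer StreamsBuilder API rather than the 0.10.0 one discussed in the thread; topic names and bootstrap address are placeholders):

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("text-input"); // hypothetical topic
        KTable<String, Long> counts = lines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
            // groupBy changes the record key to the word itself; because the
            // key changes, Streams creates an internal repartition topic here
            // so that all occurrences of a word land on the same task.
            .groupBy((key, word) -> word)
            .count();
        counts.toStream()
              .to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}
```

Running it requires a live broker; the point is only where the internal repartition topic appears in the topology.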
Hello Kafka Users,
(I apologize if this landed in your inbox twice, I sent it yesterday afternoon
but it never showed up in the archive so I'm sending again just in case.)
I've been floating this question around the #apache-kafka IRC channel on
Freenode for the last week or two and I still have
Hello again, Kafka users,
My end goal is to get stream-processed data into a PostgreSQL database.
I really like the architecture that Kafka Streams takes; it's "just" a
library, I can build a normal Java application around it and deal with
configuration and orchestration myself. To persist my da
Hi there,
It's enabled with the config log.cleaner.enable
Thanks
On Wed, Jul 20, 2016 at 5:29 PM, Anderson Goulart <
anderson.goul...@boxever.com> wrote:
> Hi,
>
> How can I see if log compaction is enabled? And how can I enable it? I
> didn't find it on kafka docs.
>
>
> Thanks, Anderson
>
>
>
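For reference, a minimal sketch of the two settings involved (assuming a 0.9+/0.10 broker; log.cleaner.enable defaults to true since 0.9.0.1):

```properties
# server.properties (broker side): the log cleaner thread must be running
log.cleaner.enable=true

# topic-level config: opt the topic into compaction instead of time-based deletion
cleanup.policy=compact
```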
Hi All,
Adding this setting does the trick it seems:
--offset.commit.interval.ms 5000
This defaults to 60,000.
Not sure if lowering it to 5 seconds has any adverse effects.
Cheers!
On Thu, Jul 21, 2016 at 10:50 AM, cs user wrote:
> Hi All,
>
> I've recently enabled ssl for our clus
Hi All,
I've recently enabled ssl for our cluster, and as we are using a mirror
maker I'm now starting our mirror maker processes with the --new.consumer
switch. I can see that now the mirror maker consumer group type has
switched from ZK to KF.
However I've started to notice a delay when sending
Hi Guozhang,
"I think ideally what you want is a windowed KStream-KTable join" <- It is
what I need indeed. But from what I gather in Kafka/Confluent
documentation, when using a KStream-KTable, you don't have the window
parameter. @Matthias said it's by design, it seems; maybe there's a good
reason f
Hi Folks,
I was searching for a command to list all connected (live) producers to
Kafka brokers.
Is there any such tool to get the live producers and consumers?
Regards,
Umesh