Thanks!
On Mon, Mar 15, 2021 at 8:38 PM Sophie Blee-Goldman
wrote:
> Yep, that fell off my radar. Here we go:
> https://issues.apache.org/jira/browse/KAFKA-12475
>
> On Mon, Mar 15, 2021 at 8:09 PM Guozhang Wang wrote:
>
> > Hey Sophie,
> >
> > Maybe we can first create a JIRA ticket for this?
I believe I already answered your question in another channel, but just to
follow up in this thread in case anyone else is interested in the answer:
You can override the *ConsumerPartitionAssignor.onAssignment(Assignment,
ConsumerGroupMetadata)* method to get an update on the currently
assigned partitions.
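A minimal sketch of that override (the class name and the choice of
CooperativeStickyAssignor as the base are my own assumptions, not anything
the API prescribes):

    import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
    import org.apache.kafka.clients.consumer.ConsumerPartitionAssignor.Assignment;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;

    // Hypothetical assignor that observes each new assignment for this member
    public class AssignmentTrackingAssignor extends CooperativeStickyAssignor {
        @Override
        public void onAssignment(Assignment assignment, ConsumerGroupMetadata metadata) {
            super.onAssignment(assignment, metadata);
            // assignment.partitions() lists the TopicPartitions this consumer now owns
            System.out.println("Currently assigned: " + assignment.partitions());
        }
    }

You would then register the class on the consumer via the
partition.assignment.strategy config.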
Hey Mazen,
The easiest way to approach this is probably to pass in a reference to the
associated Consumer and
then just call one of the *Consumer#committed* methods which return the
OffsetAndMetadata.
But I'm guessing your underlying question may be about how to get that
reference to the Consumer
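In case it helps, a rough sketch of the lookup itself, assuming you can hand
the Consumer reference in from wherever it is created (the wrapper class is
hypothetical):

    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public final class CommittedOffsets {
        // Returns the last committed offset (plus metadata) per partition;
        // the map value is null for partitions with no committed offset
        public static Map<TopicPartition, OffsetAndMetadata> fetch(
                Consumer<?, ?> consumer, Set<TopicPartition> partitions) {
            return consumer.committed(partitions);
        }
    }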
Hi,
I am trying to find the best pattern to solve a specific problem using
Kafka Streams. All of our current processing uses the Kafka Streams
API (using multiple joins, windows, repartitions, etc.), so I already think I
have a decent grasp of the fundamentals.
We have 2 streams of events:
- pr
Thanks Luke. I'll make sure to bump this in the kafka-site repo once the
release is finalized.
On Mon, Mar 15, 2021 at 8:58 PM Luke Chen wrote:
> Hi Sophie,
> A small doc update for 2.6.2. I think we missed it.
> https://github.com/apache/kafka/pull/10328
>
> Thanks.
> Luke
>
> On Tue, Mar 16, 2021 at 11:12 AM Guozhang Wang wrote:
Hi Sophie,
A small doc update for 2.6.2. I think we missed it.
https://github.com/apache/kafka/pull/10328
Thanks.
Luke
On Tue, Mar 16, 2021 at 11:12 AM Guozhang Wang wrote:
> Hi Sophie,
>
> > I've reviewed the javadocs / release notes / documentation, and they LGTM.
>
> +1.
>
>
> Guozhang
>
> On
Yep, that fell off my radar. Here we go:
https://issues.apache.org/jira/browse/KAFKA-12475
On Mon, Mar 15, 2021 at 8:09 PM Guozhang Wang wrote:
> Hey Sophie,
>
> Maybe we can first create a JIRA ticket for this?
>
> On Mon, Mar 15, 2021 at 3:09 PM Sophie Blee-Goldman
> wrote:
>
> > Sounds good
Hi Sophie,
I've reviewed the javadocs / release notes / documentation, and they LGTM.
+1.
Guozhang
On Fri, Mar 12, 2021 at 10:48 AM Sophie Blee-Goldman
wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 2.6.2.
>
> Apache
Hey Sophie,
Maybe we can first create a JIRA ticket for this?
On Mon, Mar 15, 2021 at 3:09 PM Sophie Blee-Goldman
wrote:
> Sounds good! I meant anyone who is interested :)
>
> Let me know if you have any questions after digging into this
>
> On Mon, Mar 15, 2021 at 2:39 PM Alex Craig wrote:
>
Hi,
We have a requirement to calculate metrics on a huge number of keys (could
be hundreds of millions, perhaps billions of keys; attempting to cache
individual keys will in many cases have almost a 0% cache hit rate). Is
Kafka Streams with RocksDB and compacted topics the right tool for a task
Sounds good! I meant anyone who is interested :)
Let me know if you have any questions after digging into this
On Mon, Mar 15, 2021 at 2:39 PM Alex Craig wrote:
> Hey Sophie, not sure if you meant me or not but I'd be happy to take a
> stab at creating a KIP for this. I want to spend some time digging into
> more of how this works first, but will then try to gather my thoughts and
> get something created.
I am running MirrorMaker 2 (Kafka 2.7), trying to migrate all topics from
one cluster to another while preserving consumer group offsets through
`sync.group.offsets.enabled=true`. My source cluster is running Kafka 0.10,
while the target cluster is running 2.6.1.
While I can see data being replicated, the data on the re
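For reference, the relevant MirrorMaker 2 settings look roughly like this
(the cluster aliases and the interval are illustrative assumptions;
sync.group.offsets.enabled is the one mentioned above):

    # mm2.properties sketch
    clusters = source, target
    source->target.enabled = true
    source->target.topics = .*
    # periodically write the translated consumer group offsets to the target
    source->target.sync.group.offsets.enabled = true
    source->target.sync.group.offsets.interval.seconds = 60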
Hey Sophie, not sure if you meant me or not but I'd be happy to take a
stab at creating a KIP for this. I want to spend some time digging into
more of how this works first, but will then try to gather my thoughts and
get something created.
Alex
On Mon, Mar 15, 2021 at 1:48 PM Sophie Blee-Goldman wrote:
This certainly does seem like a flaw in the Streams API, although of course
Streams is just in general not designed for use with remote anything (API
calls, stores, etc.).
That said, I don't see any reason why we *couldn't* have better support for
remote state stores.
Note that there's currently no
Hi,
This may be a newbie question but is it possible to control the
partitioning of a RocksDB KeyValueStore in Kafka Streams?
For example, I perhaps only want to partition based on a prefix of a key
rather than the full key. I assume something similar must be done for the
WindowStore to partition
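One possible shape for this, assuming the prefix is everything before a ':'
delimiter (the delimiter and the class are my own assumptions):

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.common.utils.Utils;
    import org.apache.kafka.streams.processor.StreamPartitioner;

    // Hypothetical partitioner: hash only the key's prefix so that all keys
    // sharing a prefix are routed to the same partition
    public class PrefixPartitioner<V> implements StreamPartitioner<String, V> {
        @Override
        public Integer partition(String topic, String key, V value, int numPartitions) {
            String prefix = key.split(":", 2)[0];
            byte[] bytes = prefix.getBytes(StandardCharsets.UTF_8);
            return Utils.toPositive(Utils.murmur2(bytes)) % numPartitions;
        }
    }

Something like this could be supplied via Produced.streamPartitioner(...)
when writing out to a topic with to().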
Hey Mazen,
There's not necessarily one rebalance per new consumer; in theory, if all
100 consumers are started up at the same time then there
may be just a single rebalance. It really depends on the timing -- for
example in the log snippet you provided, you can see that the
first member joined at 1
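One broker-side knob that influences how many rebalances you see on a mass
startup (not mentioned above, but related) is the initial rebalance delay,
which lets members that join at roughly the same time be folded into a
single rebalance:

    # server.properties; 3000 ms is the default
    group.initial.rebalance.delay.ms=3000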
Congratulations, Tom!
-Bill
On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
wrote:
> Congrats, Tom!
>
> Best,
> Bruno
>
> On 15.03.21 18:59, Mickael Maison wrote:
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> > we are excited to announce that he accepted!
Congrats, Tom!
Best,
Bruno
On 15.03.21 18:59, Mickael Maison wrote:
Hi all,
The PMC for Apache Kafka has invited Tom Bentley as a committer, and
we are excited to announce that he accepted!
Tom first contributed to Apache Kafka in June 2017 and has been
actively contributing since February 2020.
Congratulations Tom!
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Mon, 15 Mar 2021 at 18:00, Mickael Maison wrote:
> Hi all,
>
> The PMC for Apache Kafka has invited Tom Bentley as a committer, and
> we are excited to announce that he accepted!
>
> Tom first contributed to Apache Kafka in June 2017 and has been
> actively contributing since February 2020.
Hi all,
The PMC for Apache Kafka has invited Tom Bentley as a committer, and
we are excited to announce that he accepted!
Tom first contributed to Apache Kafka in June 2017 and has been
actively contributing since February 2020.
He has accumulated 52 commits and worked on a number of KIPs. Here a
Hi Bruno
Yes, cleaning up before upgrading is probably what I'm gonna do.
I was just trying to understand what's going on, as this shouldn't be
required.
Thanks for your help
Murilo
On Mon, 15 Mar 2021 at 11:16, Bruno Cadonna
wrote:
> Hi Murilo,
>
> OK, now I see why you do not get an error in the second case in your
> small environment where you cleaned up before upgrading.
Hi Murilo,
OK, now I see why you do not get an error in the second case in your
small environment where you cleaned up before upgrading. You would
restore from the earliest offset anyway, and that is defined by the
earliest offset at the broker, which always exists. Hence, no out-of-range
exception
Hi, a newbie error, but I can't find how to fix this:
I updated the retention.ms topic config with a value shorter than what I
wanted, and marked more messages for deletion than I intended.
The files are still in the topic directory, but I can't find how to tell
Kafka to move the "first va
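The usual way to put retention.ms back to the intended value is
kafka-configs.sh (the topic name and value below are placeholders):

    bin/kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type topics --entity-name my-topic \
      --alter --add-config retention.ms=604800000

Note that segments the broker has already deleted cannot be recovered this
way.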
Hi Alex,
I guess wiping out the state directory is easier code-wise, faster,
and/or at the time of development the developers did not design for
remote state stores. But I actually do not know the exact reason.
Off the top of my head, I do not know how to solve this for remote state
stores.
Hi Bruno
We have an environment variable that, when set, will call
KafkaStreams.cleanUp() and sleep.
The changelog topic is an internal KafkaStreams topic, for which I'm not
changing any policies.
It should be some default policy for a KTable in my understanding.
Thanks
Murilo
On Mon, 15 Mar 2021
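A minimal sketch of the pattern described above (the environment variable
name and the trivial topology are illustrative):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class CleanupOnStartup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input").to("output"); // trivial topology for illustration

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            // Hypothetical switch: wipe this instance's local state directory
            if ("true".equals(System.getenv("STREAMS_CLEANUP"))) {
                streams.cleanUp(); // only valid while the instance is not running
            }
            streams.start();
        }
    }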
Hi all,
I have a Kafka consumer pod running on Kubernetes. I executed the command
kubectl scale consumerName --replicas=2, and as shown in the logs below two
separate rebalancing processes were triggered, so if the number of consumer
replicas scaled = 100, one hundred separate rebalancing ar
Hi Murilo,
A couple of questions:
1. What do you mean exactly by clean up?
2. Do you have a cleanup policy specified on the changelog topics?
Best,
Bruno
On 15.03.21 15:03, Murilo Tavares wrote:
Hi Bruno
No, I haven't tested resetting the application before upgrading on my large
environment.
Bruno,
Thanks for the info! That makes sense. Of course now I have more
questions. :) Do you know why this is being done outside of the state
store API? I assume there are reasons why a "deleteAll()" type of function
wouldn't work, thereby allowing a state store to purge itself? And maybe
mor
Hi Bruno
No, I haven't tested resetting the application before upgrading on my large
environment. But I was able to reproduce it in my dev environment, which is
way smaller.
This is what I did:
- Clean up and downgrade to 2.4;
- Let it catch up;
- Upgrade to 2.7. Same errors, but it caught up after
Bruno,
I tried to explain this in 'Kafka user' language through the above-mentioned
scenario; hope I put it properly :-) and correct me if I am wrong
On Mon, Mar 15, 2021 at 7:23 PM Pushkar Deole wrote:
> This is what I understand could be the issue with external state store:
>
> kafka stream appl
This is what I understand could be the issue with an external state store:
a Kafka Streams application consumes the source topic, does processing,
stores state in a Kafka state store (which is backed by a topic), and before
producing the event on the destination topic, the application fails with
some issue. In this case,
Hi Murilo,
Did you retry the upgrade after you reset the application? Did it work?
Best,
Bruno
On 15.03.21 14:26, Murilo Tavares wrote:
Hi Bruno
Thanks for your response.
No, I did not reset the application prior to upgrading. That was simply
upgrading KafkaStreams from 2.4 to 2.7.
I was able to reproduce it on a smaller environment, and it does indeed
recover.
Hi Alex,
You are right! There is no "exactly once magic" baked into the RocksDB
store code. The point is local vs remote. When a Kafka Streams client
closes dirty under EOS, the state (i.e., the content of the state store)
needs to be wiped out and re-created from scratch from the
changelog topic
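The behavior Bruno describes is tied to the processing guarantee; a minimal
sketch of enabling it in the Streams config (the values are illustrative):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class EosConfig {
        public static Properties build() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Under exactly-once, a dirty close makes Streams wipe local state
            // and rebuild it from the changelog topic on restart
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
            return props;
        }
    }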
Hi Bruno
Thanks for your response.
No, I did not reset the application prior to upgrading. That was simply
upgrading KafkaStreams from 2.4 to 2.7.
I was able to reproduce it on a smaller environment, and it does indeed
recover.
In a large environment, though, it stays like that for hours. In this
" Another issue with 3rd party state stores could be violation of
exactly-once guarantee provided by kafka streams in the event of a failure
of streams application instance"
I've heard this before but would love to know more about how a custom state
store would be at any greater risk than RocksDB
Hi,
We have an in-house cluster of Kafka brokers and ZooKeepers.
Kafka version kafka_2.12-2.7.0
ZooKeeper version apache-zookeeper-3.6.2-bin
The topic is published to Google BigQuery.
We would like to use a schema registry as opposed to sending the schema with
the payload for each message, which adds
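Assuming Confluent's Schema Registry (one common option; the serializer
class and URL below are specific to that choice), the producer configuration
could look roughly like this:

    import java.util.Properties;

    public class RegistryProducerConfig {
        public static Properties build() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            // The Avro serializer registers the schema once and then sends only
            // a small schema id with each record instead of the full schema
            props.put("value.serializer",
                    "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "http://localhost:8081");
            return props;
        }
    }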
Hi Murilo,
No, you do not need any special procedure to upgrade from 2.4 to 2.7.
What you see in the logs is not an error but a warning. It should not
block you on startup forever. The warning says that the local states of
task 7_17 are corrupted because the offset you want to fetch of the
st
This issue is now clearly identified: when increasing partitions, a
colleague's code used the 0.10 AdminUtils to redistribute the replicas of
the existing partitions and wrote directly into the znode, so the normal
controller logic failed to handle this. The Kafka version I am running is
2.0.0