Hi All,
Any advice on how this could be achieved?
Thanks
On Mon, Sep 9, 2019 at 11:08 PM Navneeth Krishnan
wrote:
> Hi All,
>
> I have a streaming ETL job which takes data from one Kafka topic, enriches
> it, and writes it to another topic. I want to read one more topic which has
> the same key and
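For what it's worth, a minimal sketch of one common way to do this kind of
enrichment, assuming the second topic can be read as a KTable keyed the same
way as the event stream; the topic names and String serdes below are
placeholders, not the ones from the original job:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class EnrichmentTopologySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Main event stream to be enriched.
        KStream<String, String> events =
                builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));

        // Second topic with the same key, read as a changelog-style table.
        KTable<String, String> lookup =
                builder.table("lookup", Consumed.with(Serdes.String(), Serdes.String()));

        // Join on the shared key (both topics must be co-partitioned),
        // then write the enriched records to the output topic.
        events.join(lookup, (event, extra) -> event + "|" + extra)
              .to("enriched-events", Produced.with(Serdes.String(), Serdes.String()));

        // builder.build() would then be passed to new KafkaStreams(topology, props).
    }
}

Whether a KTable or a GlobalKTable is the better fit depends on how large the
second topic's state is and whether the two topics are co-partitioned.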
In my streaming topology, I am using the suppress DSL operator. As per the
documentation, it is supposed to output the final results after the window
closes, but I noticed it's not emitting anything at all. Here is the
pseudocode of my topology.
.filter((key, value) -> ...)
.flatMap((key, val
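For reference, a minimal self-contained sketch of a windowed count with
suppress(); the topic names, window size, grace period, and serdes are
placeholders and not taken from the original topology:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;
import org.apache.kafka.streams.kstream.TimeWindows;

public class SuppressSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               // An explicit grace period keeps the window-close point predictable.
               .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
               .count()
               // Hold intermediate updates and emit only the final count per window.
               .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
               .toStream()
               // Unwrap the windowed key before writing to the output topic.
               .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count.toString()))
               .to("final-counts", Produced.with(Serdes.String(), Serdes.String()));

        // builder.build() would then be passed to new KafkaStreams(topology, props).
    }
}

One common reason for seeing no output at all is that suppress() only emits
once stream time passes the window end plus the grace period, and stream time
only advances when new records arrive on the input, so with a quiet input
topic the final results never appear to be released.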
Check out the instructions in the wiki:
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
Subscribing to the mailing lists is self-service. Since you want to
contribute code, you should subscribe to the dev mailing list.
https://kafka.apache.org/contact
-Matthias
Hello Ning,
I've added you to the contributors list.
Cheers,
Guozhang
On Tue, Sep 10, 2019 at 10:27 AM Ning Liu wrote:
> Sorry, I forgot to add my profile here.
>
> Account signup
>
> You have signed up for a Jira account at:
> https://issues.apache.org/jira
> If you forget your password, you c
Hi there,
This is Ning Liu from Microsoft Azure Compute; I plan to contribute to
Kafka KIP 280. Could you please add me to the user list?
Also, I am wondering: if I am on the user list, do I automatically get write
access to the Kafka open-source repo? Right now, I want to create a branch for
KIP 280 but d
Sorry, I forgot to add my profile here.
Account signup
You have signed up for a Jira account at:
https://issues.apache.org/jira
If you forget your password, you can request another via the link below.
https://issues.apache.org/jira/secure/ForgotLoginDetails.jspa?username=Starry
Here are the detail
@Alex:
Yes, if the table state is small and static, using a GlobalKTable is a
good fit. However, I would assume that most use cases require event-time
based join semantics (otherwise your application is non-deterministic)
and that the table state is large and must be sharded. Hence, I would
expect
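For reference, a sketch of the two join styles being compared; topic names and
serdes are illustrative placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class JoinStylesSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events =
                builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));

        // Small, mostly static table: replicated in full to every instance.
        // The join is done against the current table state, so there are no
        // event-time join semantics, but no co-partitioning is required.
        GlobalKTable<String, String> global =
                builder.globalTable("small-static-table",
                        Consumed.with(Serdes.String(), Serdes.String()));
        events.join(global,
                      (key, value) -> key,  // map each stream record to the table key
                      (value, tableValue) -> value + "|" + tableValue)
              .to("enriched-a", Produced.with(Serdes.String(), Serdes.String()));

        // Large table: sharded like the stream (co-partitioning required),
        // and the join is performed with event-time semantics.
        KTable<String, String> sharded =
                builder.table("large-table", Consumed.with(Serdes.String(), Serdes.String()));
        events.join(sharded, (value, tableValue) -> value + "|" + tableValue)
              .to("enriched-b", Produced.with(Serdes.String(), Serdes.String()));
    }
}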
Hi all,
I think I found the root cause of the problem. In the code I’ve provided, I’m
feeding both input topics with data without keys; the keys are selected from
the message contents in a step that is part of the declared topology.
Given that, even if I make sure that the input topics are p
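A sketch of the pattern being described, assuming a windowed stream-stream
join (the actual topology isn't shown in the thread) and a hypothetical
comma-separated payload for the key extraction:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class SelectKeySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Both topics are fed records without keys; the key is derived
        // from the message payload inside the topology.
        KStream<String, String> left =
                builder.stream("left-topic", Consumed.with(Serdes.String(), Serdes.String()))
                       .selectKey((ignoredKey, value) -> value.split(",")[0]);
        KStream<String, String> right =
                builder.stream("right-topic", Consumed.with(Serdes.String(), Serdes.String()))
                       .selectKey((ignoredKey, value) -> value.split(",")[0]);

        // selectKey() marks both streams for repartitioning, so Streams routes
        // them through internal repartition topics before the join; how the
        // original input topics were partitioned no longer decides which
        // partition a record lands on for the join.
        left.join(right,
                  (l, r) -> l + "|" + r,
                  JoinWindows.of(Duration.ofMinutes(5)))
            .to("joined", Produced.with(Serdes.String(), Serdes.String()));
    }
}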
Hello!
We are running Kafka 2.1.1. Yesterday we accidentally corrupted the DNS record
for our controller broker. As a result, the broker was not visible from the
outside, but it was able to connect to the ZooKeeper cluster (the ZooKeeper
nodes were on remote servers) and to other Kafka brokers (also on remote
servers) from