+1. Sounds like what you want is a compacted topic. Check out this section
of the docs: https://kafka.apache.org/documentation#compaction
Keep in mind that compaction isn't free. Like a garbage collector, it comes
with some overhead and some knobs for tuning.
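As a sketch of those tuning knobs: compaction behavior on an existing topic can be adjusted per topic. The topic name and values below are placeholders, not from this thread, and the exact script flags vary by Kafka version:

```shell
# Illustrative only: enable compaction on topic "my-topic" and tune how
# eagerly the cleaner runs. min.cleanable.dirty.ratio controls how dirty
# a log must be before compaction kicks in; segment.ms bounds how long a
# segment stays active (only inactive segments are compacted).
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic --alter \
  --add-config cleanup.policy=compact,min.cleanable.dirty.ratio=0.5,segment.ms=600000
```

This requires a running broker/ZooKeeper, so treat it as a config fragment rather than something to run as-is.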
On Wed, Oct 19, 2016 at 2:46 PM, Rado
Hi,
Could anybody help answer the question below? Thanks!
Best Regards
Johnny
From: ZHU Hua B
Sent: October 19, 2016 16:22
To: 'users@kafka.apache.org'
Subject: Mirror multi-embedded consumer's configuration
Hi,
I launch Kafka mirror maker with multiple embedded consumer configurations but
fa
Additionally, on the consumer I observe strange behavior: it is
constantly rebalancing. There are no errors and each rebalance succeeds,
but as soon as one is finished the next one is started.
On Wed, Oct 19, 2016 at 4:36 PM, Timur Fayruzov
wrote:
> Hello,
>
> I run Kafka 0.8.2.2 cluster wit
Hi ,
I am using a Kafka 0.9 broker and a 0.8 consumer. The consumer had been running
fine for a long time, but today I am getting the exception below.
[ConsumerFetcherThread--5], Error for partition
[rtdp.bizlogging.iditextlogdata,93] to broker 5:class
kafka.common.NotLeaderForPartitionException
If I t
Hello,
I run Kafka 0.8.2.2 cluster with 3 nodes and recently started to observe
strange behavior on select topics. The cluster runs in-house, as do most
of the consumers. I have started some consumers in AWS and they _mostly_ work
fine. Occasionally, I end up in a state where when I run
kafka-consume
This usually means that the consumer is being kicked out of the group due
to heartbeat failure detection protocol.
Try to increase your session.timeout.ms config so that the heartbeating is
less sensitive, and see if this issue happens less frequently.
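The suggestion above amounts to a one-line consumer config change. The 60000 ms value is a placeholder; the right number depends on your processing time, and the broker-side group.max.session.timeout.ms caps what a client may request:

```shell
# Illustrative only: raise the consumer session timeout so heartbeat
# failure detection is less sensitive. 60000 ms is a placeholder value.
cat >> consumer.properties <<'EOF'
session.timeout.ms=60000
EOF
```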
Guozhang
On Wed, Oct 19, 2016 at 12:59 AM
Thanks Michael
Hopefully the upgrade story evolves as 0.10.1+ advances to maturity
Just my 2 cents
Decoupling Kafka Streams from the core Kafka changes will help, so that
the broker can be upgraded without notice and streaming apps can evolve to
newer streaming features at their own pace
Rega
You can try cleanup.policy=compact.
But be careful with a large number of keys.
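As a sketch, creating a topic with that policy from the CLI might look like the following; the topic name, partition count, and replication factor are placeholders, and the exact flags vary by Kafka version:

```shell
# Illustrative only: create a compacted topic that retains the latest
# message per key (e.g. latest state per device).
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic device-latest-state --partitions 8 --replication-factor 3 \
  --config cleanup.policy=compact
```

This needs a running cluster, so it is a config fragment rather than a runnable example.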
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On October 19, 2016 at 11:44:39 PM, Jesus Cabrera Reveles (
jesus.cabr...@encontrack.com) wrote:
Hello,
We are a company of IoT and we are trying to implement k
Hello,
We are an IoT company and we are trying to implement Kafka, but we have some
questions.
We need a topic with a certain number of partitions; each message has its own
key, relative to the device ID. And we need the topic to hold only the last
message; we don't need a history of messages with
Sorry, had a typo in my gist. Here is the correct location:
https://gist.github.com/anduill/710bb0619a80019016ac85bb5c060440
On 10/19/16, 4:33 PM, "David Garcia" wrote:
Hello everyone. I’m having a hell of a time figuring out a Kafka
performance issue in AWS. Any help is greatly apprecia
Hello everyone. I’m having a hell of a time figuring out a Kafka performance
issue in AWS. Any help is greatly appreciated!
Here is our AWS configuration:
- Zookeeper Cluster (3.4.6): 3-nodes on m4.xlarges (default
configuration) EBS volumes (sd1)
- Kafka Cluster (0.10.0):
Yeah, I did think to use that method, but as you said, it writes to a dummy
output topic, which means I'd have to put in magic code just for the tests
to pass (the actual code writes to cassandra and not to a dummy topic).
On Thu, Oct 20, 2016 at 2:00 AM, Tauzell, Dave wrote:
> For similar queu
For similar queue related tests we put the check in a loop. Check every second
until either the result is found or a timeout happens.
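The check-in-a-loop pattern described above can be sketched as a small shell helper. The function name, the marker file, and the timeouts are my own, purely for illustration:

```shell
# A minimal sketch: retry a check command every second until it
# succeeds or a timeout (in seconds) elapses.
wait_until() {
  local timeout=$1; shift
  local deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example: wait up to 10 seconds for a marker file to appear.
rm -f /tmp/result-ready
( sleep 2; touch /tmp/result-ready ) &
wait_until 10 test -f /tmp/result-ready && echo "result found"
```

In a real test the check command would query Cassandra (or whatever sink the processor writes to) instead of testing for a file.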
-Dave
-Original Message-
From: Ali Akhtar [mailto:ali.rac...@gmail.com]
Sent: Wednesday, October 19, 2016 3:38 PM
To: users@kafka.apache.org
Subject: Ho
Wanted to add that there is nothing too special about these utility functions;
they are built using a normal consumer.
Eno
> On 19 Oct 2016, at 21:59, Eno Thereska wrote:
>
> Hi Ali,
>
> Any chance you could recycle some of the code we have in
> streams/src/test/java/.../streams/integration/
Hi Ali,
Any chance you could recycle some of the code we have in
streams/src/test/java/.../streams/integration/utils? (I know we don't have it
easily accessible in Maven, for now perhaps you could copy to your directory?)
For example there is a method there
"IntegrationTestUtils.waitUntilMinVa
Please change that.
On Thu, Oct 20, 2016 at 1:53 AM, Eno Thereska
wrote:
> I'm afraid we haven't released this as a maven artefact yet :(
>
> Eno
>
> > On 18 Oct 2016, at 13:22, Ali Akhtar wrote:
> >
> > Is there a maven artifact that can be used to create instances
> > of EmbeddedSingleNodeKaf
I'm afraid we haven't released this as a maven artefact yet :(
Eno
> On 18 Oct 2016, at 13:22, Ali Akhtar wrote:
>
> Is there a maven artifact that can be used to create instances
> of EmbeddedSingleNodeKafkaCluster for unit / integration tests?
I'm using Kafka Streams, and I'm attempting to write integration tests for
a stream processor.
The processor listens to a topic, processes incoming messages, and writes
some data to Cassandra tables.
I'm attempting to write a test which produces some test data, and then
checks whether or not the
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
About auto-topic creation: If your broker configuration allows for
this, yes, it would work. However, keep in mind that the topic will be
created with default values (according to the broker config) with regard
to number of partitions and replication fac
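The broker defaults mentioned above live in the broker config. A hedged sketch of the relevant keys (values are placeholders, not recommendations):

```shell
# Illustrative server.properties fragment: these broker settings decide
# whether topics are auto-created and what defaults they get.
cat >> server.properties <<'EOF'
auto.create.topics.enable=true
num.partitions=8
default.replication.factor=3
EOF
```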
Dear Apache Enthusiast,
ApacheCon Sevilla is now less than a month out, and we need your help
getting the word out. Please tell your colleagues, your friends, and
members of related technical communities, about this event. Rates go up
November 3rd, so register today!
ApacheCon, and Apache Big Dat
Apps built with Kafka Streams 0.10.1 only work against Kafka clusters
running 0.10.1+. This explains your error message above.
Unfortunately, Kafka's current upgrade story means you need to upgrade your
cluster in this situation. Moving forward, we're planning to improve the
upgrade/compatibilit
I could successfully get the total (not average). As far as I can see, there is
no need to create the topic manually before I run the app; the topic is created
automatically if there is data and the topic does not already exist. Here is my code:
KStreamBuilder builder = new KStreamBuilder();
KStream longs = builder.stream(
+1 from myself too.
The vote passes with 9 +1 votes and no 0 or -1 votes.
+1 votes
PMC Members:
* Gwen Shapira
* Jun Rao
* Neha Narkhede
Committers:
* Ismael Juma
* Jason Gustafson
Community:
* Eno Thereska
* Manikumar Reddy
* Dana Powers
* Magnus Edenhill
0 votes
* No votes
-1 votes
* No vot
+1 (non-binding) passes librdkafka test suites
2016-10-19 15:55 GMT+02:00 Ismael Juma :
> +1 (non-binding).
>
> Verified source and Scala 2.11 binary artifacts, ran ./gradlew test with
> JDK 7u80, quick start on source artifact and Scala 2.11 binary artifacts.
>
> Thanks for managing the release!
Do you have only one partition in the topic? The way Kafka works is that all
messages are first distributed across the partitions of the topic, and the
consumers in a group are then distributed among those partitions and read them
sequentially.
If you have only one partition in the topic, all your messages will be in it
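Following that explanation, within a single consumer group the parallelism is capped by the partition count, so a common fix is to add partitions. A hedged sketch (topic name and counts are placeholders; note that adding partitions changes the key-to-partition mapping for keyed messages):

```shell
# Illustrative only: inspect the current partition count, then raise it.
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-topic --partitions 4
```

Consumers in *different* groups, by contrast, each receive every message of the partition independently, regardless of the partition count.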
Hi guys,
I am using Apache Kafka with phprd kafka. I want to know how I can use
multiple Kafka consumers from different groups on the same partition to consume
messages in parallel. Say the consumers c1, c2, c3 are consuming single partition
0; then if c1 is consuming from offset 0, c2 should start from 1
Hi.
I’ve run into a problem with Kafka consumers not leaving their consumer
group cleanly when the application restarts.
Out of the 9 topics (with a consumer for each), it seems that every time at
least 2 or 3 do not leave the group cleanly, so when the application starts
these consumers start c
+1 (non-binding).
Verified source and Scala 2.11 binary artifacts, ran ./gradlew test with
JDK 7u80, quick start on source artifact and Scala 2.11 binary artifacts.
Thanks for managing the release!
Ismael
On Sat, Oct 15, 2016 at 12:29 AM, Jason Gustafson
wrote:
> Hello Kafka users, developers
Hi,
I am running the ZooKeeper from Kafka version 0.10.0.1 on 3 machines. The
hostnames haven't been registered in DNS, so they are all defined in /etc/hosts;
there are no firewalls between them. I saw many posts online pointing to some
bugs in ZooKeeper, and there are many patches to fix them. Has the ZooKeeper in
0.10.0.1
Hi
Initially we were using the 8.2 Kafka client and server and things were working
fine. Now the Kafka server has been upgraded to 10.0 and we are getting an issue
with offset commit.
*Now the Kafka setup looks like:*
Kafka client: 8.2 version
Kafka server: 10.2 version
We are using manual offset commit and we are sto
Hi,
I launch Kafka mirror maker with multiple embedded consumer configurations but it
failed as below. What does "you asked for only one" mean? Is there an
option that controls it? Thanks!
# bin/kafka-mirror-maker.sh --consumer.config config/consumer-1.properties
--consumer.config config/consumer-2.
Hi Ian,
Thanks for the detailed explanation.
Thanks and Regards
A.SathishKumar
> Hi
> The client initially connects to the first Broker specified in
> bootstrap.servers (if that’s
> not available, it’ll connect to the next one on the list, and so on). When it
> does so, it
> is then given in
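The bootstrap behavior described above is why clients are usually given several brokers to try. A sketch of the relevant client config (hostnames are placeholders):

```shell
# Illustrative only: list multiple brokers so the client can fall back
# to the next one if the first is unavailable at startup.
cat >> client.properties <<'EOF'
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
EOF
```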
Hi,
I have a consumer which implements the new consumer API (0.9.0.1). I see the
errors below quite frequently in the consumer application logs:
ERROR [pool-4-thread-5] - o.a.k.c.c.i.ConsumerCoordinator - Error
UNKNOWN_MEMBER_ID occurred while committing offsets for group
audit.consumer.group
Can you ple