We did similar testing recently (newbie here). Assuming you used an async
publisher, did you also test with multiple partitions (1-1000) per topic as
well? More topics implies more metadata exchanged per topic every minute, and
more batches maintained and flushed per topic+partition per producer, so
higher
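For reference, the producer settings that govern this per-partition batching cost look roughly like this (plain config keys from the 0.9/0.10 Java producer; the broker address and numbers are illustrative):

```java
import java.util.Properties;

// Each topic-partition with in-flight data holds up to one batch.size buffer,
// bounded overall by buffer.memory; more partitions -> more live batches.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder address
props.put("batch.size", "16384");       // bytes buffered per partition batch
props.put("linger.ms", "5");            // wait up to 5 ms to fill a batch
props.put("buffer.memory", "33554432"); // 32 MB cap across all partitions
// Worst case with 1000 partitions: ~1000 x 16 KB = ~16 MB of active batches.
```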
Hi, I plan to use Kafka for event-based integration between services. And I
have two questions that bother me a lot:
1) What topic naming convention do you use?
2) Do you wrap your messages in some kind of envelope that contains metadata
like originating system info, timestamp, event type, uniq
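A minimal sketch of one common envelope shape (the field names here are illustrative, not a standard):

```java
import java.util.UUID;

// Illustrative event envelope; field names are an example, not a standard.
class EventEnvelope {
    final String eventId;      // unique id for dedup / tracing
    final String eventType;    // e.g. "order.created"
    final String source;       // originating system
    final long timestampMs;    // event time, epoch millis
    final String payloadJson;  // the actual business event

    EventEnvelope(String eventType, String source, long timestampMs, String payloadJson) {
        this.eventId = UUID.randomUUID().toString();
        this.eventType = eventType;
        this.source = source;
        this.timestampMs = timestampMs;
        this.payloadJson = payloadJson;
    }
}
```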
Hi all,
Bit confused about rebalance and failures
(if I understand the rebalance procedure correctly):
Suppose, in the middle of a rebalance, some consumer, C1, hits an
unclean shutdown (i.e. crashes, or kill -9), and the coordinator won't be
aware that C1 is dead until {zookeeper.session.timeou
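For reference, if you are on the 0.9+ Java consumer, the window in which a hard-killed consumer goes undetected is bounded by the group session timeout rather than the ZooKeeper one (values below are illustrative):

```
# 0.9/0.10 Java consumer config (illustrative values):
# the coordinator declares C1 dead only after no heartbeat for this long
session.timeout.ms=30000
heartbeat.interval.ms=3000
```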
Thanks for the confirmation Guozhang. I've submitted a PR along these
lines. https://github.com/apache/kafka/pull/1639
On Tue, Jul 19, 2016 at 3:50 PM, Guozhang Wang wrote:
> This is a good find. I think we should just include the api as compile
> dependency, and probably only log4j12 as test
This is a good find. I think we should just include the api as compile
dependency, and probably only log4j12 as test dependency. Similarly to
Kafka Clients and Connect:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.21</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.21</version>
    <scope>compile</scope>
</dependency>
Guozhang
On Tue, Jul 19, 2016 at 10:39 AM, Mathieu
Hello David,
Regex subscription is already added in trunk as per
https://issues.apache.org/jira/browse/KAFKA-3443 and will be available in
the 0.10.1.0 release.
If you want to try it out and experiment you can just build from current
trunk directly.
Regarding automatic checkpointing / load bala
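As a quick sketch of the trunk API (topic names made up for illustration; the subscribe call is commented out since it needs a live consumer):

```java
import java.util.regex.Pattern;

// Pattern-based subscription as added by KAFKA-3443 (0.10.1.0+).
Pattern topics = Pattern.compile("metrics-.*");
// With a live KafkaConsumer you would call:
// consumer.subscribe(topics, rebalanceListener);
boolean matched = topics.matcher("metrics-cpu").matches();
boolean skipped = topics.matcher("logs-cpu").matches();
```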
Hello Nicolas,
If this missing-match issue is mainly due to the order in which these two
streams were processed (e.g., say your corresponding changelog record arrived
a bit late compared with the record stream's record with the same key), you
can try to "hint" the Kafka Streams library to give the changel
Hi,
I have been testing Kafka in order to determine how the number of partitions
affects performance. To do that, I set up 2 Kafka nodes and 1
ZooKeeper node, and used 8 producers running on different machines to send
messages.
First, my producers were sending messages to broker
Hello Guozhang and Akshat,
We are doing the SIGTERM / shutdown hook in our Kafka Streams application. We
will write a blog post when we are done. Thanks for all your help.
Phillip
From: Guozhang Wang
Date: Tuesday, July 19, 2016 at 2:06 PM
To: "users@kafka.apache.org"
Cc: Phillip Mann
Subject: Re: De
Hi Phillip,
You need to add the shutdown hook yourself when coding your application
using the Kafka Streams library.
We are updating our web documentation to add an example of how to do
that, but generally speaking it will be similar to what the Spark example
shows you.
Guozhang
On Mon, J
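Until the docs are updated, the hook described above can be sketched roughly like this (assumes a configured `builder` and `props` already exist; this is an illustration, not the official example):

```java
// Close the Streams instance cleanly on SIGTERM / normal JVM shutdown.
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    streams.close(); // flush state and leave the consumer group cleanly
}));
```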
Hi,
Is it possible to configure Kafka Manager with multiple users with
different access controls?
https://github.com/yahoo/kafka-manager
For example :
User1 has below features
application.features=["KMClusterManagerFeature","KMTopicManagerFeature"]
User2 has below features
application.featur
You can use the ELK stack to push your logs to Kafka, and Kibana to
visualize them.
On Tue, Jul 19, 2016 at 11:04 AM, Mudit Kumar wrote:
> Hi,
>
> I want to push all my nginx logs to Kafka. Any tool/suggestions for the same?
>
> Thanks,
> Mudit
>
>
Hi,
I want to push all my nginx logs to Kafka. Any tool/suggestions for the same?
Thanks,
Mudit
Hello Kafka users,
I'm starting a new project and experimenting with kafka-streams. It's
pretty great, so far; thanks to everyone involved in the development.
I noticed that kafka-streams 0.10.0.0 has a dependency on slf4j-log4j12
(see:
https://repo1.maven.org/maven2/org/apache/kafka/kafka-stre
Acks is set to all ... and we have only one Kafka broker
On Tue, Jul 19, 2016, 9:22 AM David Garcia wrote:
> Ah ok. Another dumb question: what about acks? Are you using auto-ack?
>
> On 7/19/16, 10:00 AM, "Abhinav Solan" wrote:
>
> If I add 2 more nodes and make it a cluster .. would that help ? Have
>
Ah ok. Another dumb question: what about acks? Are you using auto-ack?
On 7/19/16, 10:00 AM, "Abhinav Solan" wrote:
If I add 2 more nodes and make it a cluster .. would that help ? Have
searched forums and all this kind of thing is not there ... If we have a
cluster then might be
If I add 2 more nodes and make it a cluster, would that help? I have
searched the forums and couldn't find anything about this. If we have a
cluster, then maybe the Kafka server has a backup option and self-heals
from this behavior ... just a theory
On Tue, Jul 19, 2016, 7:57 AM Abhinav Solan
No, I was monitoring the app at that time .. it was just sitting idle
On Tue, Jul 19, 2016, 7:32 AM David Garcia wrote:
> Is it possible that your app is thrashing (i.e. FullGC’ing too much and
> not processing messages)?
>
> -David
>
> On 7/19/16, 9:16 AM, "Abhinav Solan" wrote:
>
> Hi Every
Is it possible that your app is thrashing (i.e. FullGC’ing too much and not
processing messages)?
-David
On 7/19/16, 9:16 AM, "Abhinav Solan" wrote:
Hi Everyone, can anyone help me on this
Thanks,
Abhinav
On Mon, Jul 18, 2016, 6:19 PM Abhinav Solan wrote:
>
Hi Everyone, can anyone help me on this
Thanks,
Abhinav
On Mon, Jul 18, 2016, 6:19 PM Abhinav Solan wrote:
> Hi Everyone,
>
> Here are my settings
> Using Kafka 0.9.0.1, 1 instance (as we are testing things on a staging
> environment)
> Subscribing to 4 topics from a single Consumer application
Having multiple brokers on the same node has a couple of problems for a
production installation:
1. You'll have multiple brokers contending for disk and memory resources.
2. You could have your partitions replicated to the same node, which means
that if that node fails you would lose data.
I think you
Hi Nicolas,
you are right, it is currently not possible to get a result from a
KTable update (and this is by design). The idea is that the KStream is
enriched with the *current state* of the KTable -- thus, for each KStream
record a look-up in the KTable is done. (In this sense, a KStream-KTable
join in
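In code, the semantics described above look roughly like this (topic names and type parameters are illustrative; sketched against the 0.10.0 DSL):

```java
// Each clicks record triggers a point-in-time lookup into the campaigns
// KTable; a late KTable update does not retroactively produce a join result.
KStream<String, String> clicks = builder.stream("clicks");
KTable<String, String> campaigns = builder.table("campaigns");

KStream<String, String> enriched =
    clicks.leftJoin(campaigns, (click, campaign) ->
        click + " @ " + (campaign == null ? "unknown" : campaign));
```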
Thanks for correcting me, Tom. I got confused by the warn log message.
On Tue, Jul 19, 2016 at 5:45 PM, Tom Crayford wrote:
> Manikumar,
>
> How will that help? Increasing the number of log cleaner threads will lead
> to *less* memory for the buffer per thread, as it's divided up among
> availabl
It would be easy to write a script that scrapes kafka-topics.sh output and
computes the topic-partitions that are out of sync. However, in my
experience, underreplicated partition count and topic-partitions out of sync
are both very noisy signals. There are also the IsrExpandsPerSec and
IsrShrinksPerSec JMX
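The scraping idea can be sketched like this (the --describe line format is assumed from 0.9/0.10 output; verify against your version):

```java
// One line of `kafka-topics.sh --describe` output (format assumed):
String line = "Topic: A\tPartition: 0\tLeader: 1\tReplicas: 1,2,3\tIsr: 1";

// Count broker ids in the Replicas: and Isr: fields; the difference is the
// number of out-of-sync replicas for this topic-partition.
int replicaCount = 0, isrCount = 0;
for (String f : line.split("\t")) {
    f = f.trim();
    if (f.startsWith("Replicas:")) replicaCount = f.substring(9).trim().split(",").length;
    if (f.startsWith("Isr:"))      isrCount     = f.substring(4).trim().split(",").length;
}
int outOfSync = replicaCount - isrCount; // here: 3 - 1 = 2 replicas out of sync
```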
Hi,
Comments inline.
On Mon, Jul 18, 2016 at 3:00 PM, cs user wrote:
> sasl.mechanism.inter.broker.protocol=SSL
>
This should be GSSAPI or PLAIN.
> sasl.enabled.mechanisms=PLAIN,SSL
>
Valid values for this are PLAIN and GSSAPI (unless you add your own SASL
mechanism).
In this scenario, I b
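Putting the two corrections together, the broker fragment would look like this (PLAIN chosen for illustration; GSSAPI works the same way):

```
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN,GSSAPI
```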
Manikumar,
How will that help? Increasing the number of log cleaner threads will lead
to *less* memory for the buffer per thread, as it's divided up among
available threads.
Lawrence, I'm reasonably sure you're hitting KAFKA-3587 here, and should
upgrade to 0.10 ASAP. As far as I'm aware Kafka do
Anderson,
The metric `UnderReplicatedPartitions` gives you the number of partitions
that are out of the ISR, but doesn't expose that per topic-partition. I'd
welcome the addition of that metric to Kafka (it shouldn't be much
work in the source code, I think), and I think others would as
Hi,
I'm using Kafka 0.10.0.0 with the Confluent platform 3.0.0
I manage to join a record stream (KStream / clicks stream) with a changelog
stream (KTable / an entity like a campaign related to a click for example).
When the entity in the KTable is inserted first (and the first time of
course) in
Hi,
I am trying to monitor under-replicated partitions to create an alert
based on this metric. I want to get, per topic and partition, how many
replicas are out of sync compared to the replication factor, instead of
a boolean value.
So someone can get an alert like:
Topic A, partition 0, has 2/3
You can't: you only get a guarantee on the order within each partition, not
across partitions. Adding partitions will possibly make it a lot worse, since
items with the same key will land in other partitions. For example, with two
partitions these would be roughly the hashes in each partition:
partition-0: 0
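A toy illustration of why adding partitions moves keys (Kafka's default partitioner actually uses murmur2 over the serialized key bytes; here the key's hash is just taken to be the integer itself):

```java
// Same key, different partition count => different partition assignment.
int keyHash = 3;                  // stand-in for hash(key)
int before = keyHash % 2;         // 2 partitions -> partition 1
int after  = keyHash % 3;         // grown to 3 partitions -> partition 0
boolean moved = before != after;  // per-key ordering breaks across the resize
```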