Are there any garbage collection suggestions specific to Kafka, or are the
standard Java suggestions good?
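For what it's worth, the Kafka operations documentation includes G1-based JVM settings (originating from LinkedIn's production tuning). A sketch of applying them through the environment variables the start scripts honor; the heap sizes here are illustrative, not a recommendation for every workload:

```shell
# G1 settings along the lines of the Kafka ops docs; tune heap to your workload
export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20 \
  -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M"
bin/kafka-server-start.sh config/server.properties
```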
Hi Folks,
Canarying traffic is an excellent way of reducing the impact when
releasing a new version with a bug. Such canarying is somewhat easier
with a few queueing backends like SQS and Redis. In SQS, for example, each
application container/instance of the canary can self-regulate how much
throughput
Hi,
I'm just working through the Quickstart page.
https://kafka.apache.org/quickstart
As I'm working on Windows, I use MinGW to emulate Linux.
1) On MinGW, I get this error:
me@computer MINGW64 /c/java/kafka_2.12
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
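An aside, not from the thread: Kafka ships native Windows batch scripts under bin\windows, which sidestep MinGW path-translation quirks. The equivalent invocation would be roughly:

```
bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test
```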
> due to the DR issues?
>
> Thank you for your time
>
> Henning Røigaard-Petersen
>
> -----Original Message-----
> From: Ryanne Dolan
> Sent: 3. september 2018 20:27
> To: users@kafka.apache.org
> Subject: Re: Official Kafka Disaster Recovery is insufficient - Suggestions needed
Sorry to have misspelled your name Henning.
On Mon, Sep 3, 2018, 1:26 PM Ryanne Dolan wrote:
Hanning,
In mission-critical (and indeed GDPR-related) applications, I've ETL'd
Kafka to a secondary store e.g. HDFS, and built tooling around recovering
state back into Kafka. I've had situations where data is accidentally or
incorrectly ingested into Kafka, causing downstream systems to process
I am looking for advice on how to handle disasters not covered by the official
methods of replication, whether intra-cluster replication (via replication
factor and producer acks) or multi-cluster replication (using Confluent
Replicator).
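For context on the official methods mentioned above, the intra-cluster durability knobs might be sketched as follows (the values are illustrative, not prescriptive):

```properties
# broker side
default.replication.factor=3
min.insync.replicas=2
unclean.leader.election.enable=false

# producer side
acks=all
```

With acks=all and min.insync.replicas=2, a write is only acknowledged once a majority of the three replicas have it, which bounds data loss on a single-broker failure.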
We are looking into using Kafka not only as a broker
are looking for your suggestions.
Thanks
Sukumar N
On 2017-06-29 20:34 (+0530), Hans Jespersen <h...@confluent.io> wrote:
> Request quotas was just added to 0.11. Does that help in your use case?
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-124+-+Request+rate+qu
g response will be delayed, which
> will result in hanging the user-thread or app-servers.
So we don't want to go for applying quotas for our use case. Can you please
share some suggestions for handling this use case in our Kafka broker? For example,
before messages get appended to the log, the broker should check whether they
need to be throttled. The throttling mechanism should be either
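Since the broker doesn't expose a pre-append throttling hook, one option is to self-regulate on the client side before producing. A minimal token-bucket sketch (the class and parameter names are hypothetical, not a Kafka API):

```python
import time

class TokenBucket:
    """Client-side throttle: refills `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, n: int = 1) -> bool:
        # Refill based on elapsed time, then spend if enough tokens remain.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Burst of 15 sends against a bucket of 10: the first 10 pass, the rest must wait.
bucket = TokenBucket(rate=1, capacity=10)
results = [bucket.allow() for _ in range(15)]
```

A producing application would call `allow()` before each send and back off (or block) when it returns False, giving per-instance throughput control without broker changes.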
Martini
__
From: Vincenzo D'Amore <v.dam...@gmail.com>
Sent: Saturday, December 3, 2016 12:14 PM
To: users@kafka.apache.org
Subject: Re: Suggestions
I found what's wrong, well... finally! Given that the consumer
application should load the received data into a Solr instance:
incidentally, my version of Solr is SolrCloud, and the SolrJ client uses
ZooKeeper, a different version of ZooKeeper...
Now I have specified in my pom.xml the same ZooKeeper version, 3.4.8.
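The fix described, pinning a single ZooKeeper version so the Kafka and SolrJ clients agree, might look like this in pom.xml (standard Maven coordinates for the ZooKeeper artifact):

```xml
<!-- pin one ZooKeeper version shared by the Kafka and SolrJ dependencies -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.8</version>
</dependency>
```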
>
> then, the strange thing is that the consumer on
> the second topic stays in poll forever, *without receiving any message*.
How long is 'forever'? Did you wait more than 5 minutes?
On Fri, Dec 2, 2016 at 2:55 AM, Vincenzo D'Amore wrote:
> Hi Kafka Gurus :)
>
> I'm
Can you use the console consumer to see the messages on the other topics?
> On Dec 2, 2016, at 04:56, Vincenzo D'Amore wrote:
>
> Hi Kafka Gurus :)
>
> I'm creating process between few applications.
>
> First application create a producer and then write a message into a main
Hi Kafka Gurus :)
I'm creating a process between a few applications.
The first application creates a producer and then writes a message into a main
topic (A); within the message there is the name of a second topic (B). It then
promptly creates a second producer and writes a few messages into the new topic
(B).
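The topic-handoff flow described above can be sketched as a small message envelope; the JSON layout and field names below are assumptions for illustration, not from the thread:

```python
import json

# Hypothetical envelope: the message published to the main topic (A)
# carries the name of the follow-up topic (B) plus a payload.
def make_handoff(next_topic: str, payload: str) -> bytes:
    return json.dumps({"next_topic": next_topic, "payload": payload}).encode("utf-8")

def read_handoff(raw: bytes) -> tuple[str, str]:
    msg = json.loads(raw.decode("utf-8"))
    return msg["next_topic"], msg["payload"]

# A consumer of topic A decodes the envelope, then subscribes to topic B.
raw = make_handoff("B", "hello")
topic, payload = read_handoff(raw)
```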
To: users@kafka.apache.org
Subject: 0.10 Metrics Reporter Suggestions
Hello,
We’re looking into the functionality of Metrics Reporters for producers and
consumers in Kafka 0.10. Are there any projects that can be recommended that
seem promising; specifically involving sending metrics to either StatsD or
Graphite?
As always, thank you for your help!
Lawrence
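For reference, the StatsD half of such a reporter reduces to emitting lines of the StatsD text protocol over UDP. A minimal formatting sketch (the function name and metric prefix are hypothetical):

```python
# A metrics reporter targeting StatsD ultimately emits lines of the StatsD
# text protocol, e.g. "<prefix>.<name>:<value>|g" for a gauge.
def statsd_gauge(name: str, value: float, prefix: str = "kafka") -> str:
    safe = name.replace(" ", "_").replace(":", "-")  # keep the line parseable
    return f"{prefix}.{safe}:{value}|g"

line = statsd_gauge("producer.record-send-rate", 1234.5)
# To ship it: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#     line.encode(), ("localhost", 8125))
```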
Kafka Connect can definitely be used for this -- it's one of the reasons we
designed it with standalone mode (
http://docs.confluent.io/3.0.0/connect/userguide.html#workers). For the
specific connector, we include a very simple File connector with Kafka
which will just take each line and send it
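A minimal standalone run of that File connector might look like this; the file path and topic name are illustrative:

```properties
# file-source.properties
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/var/log/app/app.log
topic=app-logs
```

Launched with `bin/connect-standalone.sh config/connect-standalone.properties file-source.properties`, each new line appended to the file becomes a record on the topic.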
As a newbie, I just set up my first 3-node Kafka cluster, each broker on its
own host with its own ZK. Everything went fine; I could see three brokers registered
in all three ZKs under /brokers/ids, until I created a topic and hit this exception:
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181
I should add Flume is not an option for various reasons.
On Tue, May 3, 2016 at 2:49 PM, Banias H wrote:
We use Kafka (0.9.x) internally in our pipeline and now we would like to
ingest application logs sitting in local file system of servers external to
the Kafka cluster.
We could write a producer program running on the application servers to
push files to Kafka. However we wonder if we can leverage
Hi All,
When Kafka is running on a kerberized cluster or with SSL, can we add an option
security.protocol, so that the user can give PLAINTEXT, SSL, SASL_PLAINTEXT, or
SASL_SSL? This would be helpful when running the console producer and console
consumer.
./bin/kafka-console-producer.sh --broker-list --topic
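Until such a flag exists, the usual route is a client properties file passed via --producer.config; the paths, mechanism, and password below are illustrative assumptions:

```properties
# client.properties
security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
```

Used as `./bin/kafka-console-producer.sh --broker-list host:9093 --topic test --producer.config client.properties` (the console consumer takes the analogous --consumer.config).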
My question is what technology stack would be most suitable so that I can generate the
highest amount of load with the least amount of load-generating
machines/hardware. Using threads, processes, or whatever: node.js, python,
scala, ruby, java, .net, etc.
Thoughts and suggestions appreciated,
David
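Whatever the stack, it may be worth noting that Kafka ships a load generator of its own; a sketch of invoking it (topic, record counts, and bootstrap address are illustrative):

```shell
# ships with Kafka; --throughput -1 means "as fast as possible"
bin/kafka-producer-perf-test.sh --topic load-test \
  --num-records 10000000 --record-size 100 \
  --throughput -1 \
  --producer-props bootstrap.servers=broker1:9092 acks=1
```

A single instance of this tool can typically saturate a network link, so scaling out means running one instance per load-generating box rather than hand-rolling threading code.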
I could go either way. I think that if ZK is up, then Kafka's going to go
crazy trying to figure out who's the master of what, but maybe I'm not thinking
the problem through clearly.
That does raise the issue: it seems like it'd be good to have something
written down somewhere to say how
So... we had an extensive recabling exercise, during which we had to shut
down and derack and rerack a whole Kafka cluster. Then when we brought it back
up, we discovered the hard way that two hosts had their rebuild-on-reboot
flag set in Cobbler.
Everything on those hosts is gone as a
Hi
I was following the instructions for Kafka mirroring and had two
suggestions for improving the documentation at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330:
1. Move “Note that the --zkconnect argument should point to the source
cluster's ZooKeeper...” above the console output section; I missed this
last line
Thanks Daniel for the findings, please feel free to update the wiki.
Guozhang
On Tue, Jun 17, 2014 at 9:56 PM, Daniel Compton d...@danielcompton.net
wrote: