Denis,
I suggest it's better to have your HTTP requests go to Kafka
and then use Storm's KafkaSpout to process them. This way you
won't lose any events, since KafkaSpout can replay messages
if there is a failure in your topology.
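A minimal sketch of that front end, assuming the kafka-clients library is on the classpath and using the JDK's built-in HttpServer; the broker address, port, path, and topic name ("localhost:9092", 8080, "/events", "events") are placeholders, not anything from this thread:

```java
// Hypothetical sketch: forward HTTP POST bodies into a Kafka topic,
// so a downstream Storm topology can consume them via KafkaSpout.
// Assumes the kafka-clients dependency; all names are example values.
import com.sun.net.httpserver.HttpServer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.net.InetSocketAddress;
import java.util.Properties;

public class HttpToKafka {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/events", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            // Hand the event to Kafka; the broker persists it, so the
            // topology can replay it if processing fails later.
            producer.send(new ProducerRecord<>("events", new String(body)));
            exchange.sendResponseHeaders(204, -1);
            exchange.close();
        });
        server.start();
    }
}
```

The point of the indirection is durability: the HTTP handler only acknowledges after handing the event to the producer, and replay on topology failure is then Kafka's job rather than the web tier's.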
-Harsha
On Mon, Jan
Hello:
I have a three node kafka cluster with a single topic and a topic
replication factor of 3. I ran a test where I inserted a few hundred
messages into kafka. While the topology was reading these messages, I
killed one of the brokers.
My hope was that the kafka spout would simply use one of
a three node kafka cluster with a replication factor of 3 is bad design.
The replication factor should always be less than the cluster size.
Please change it to 2 and try again.
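For reference, the replication factor is fixed when the topic is created, so the topic has to be recreated to change it. A hedged example using the standard kafka-topics.sh tool; the ZooKeeper address, partition count, and topic name are placeholders:

```shell
# Recreate the topic with replication factor 2
# (localhost:2181, 3 partitions, and "my-topic" are example values)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 2 --partitions 3 --topic my-topic

# Check which brokers hold the leader and replicas for each partition
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
```

The --describe output is also a quick way to confirm, after killing a broker, whether a new leader was elected for each partition.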
-Manoj
On Mon, Jan 26, 2015 at 7:19 AM, Milad Fatenejad ick...@gmail.com wrote:
Hello:
I have a three node kafka cluster with a single
Hello:
I reran my test with a replication factor of 2 but encountered the same
issue...any other suggestions?
Thanks
Milad
On Mon, Jan 26, 2015 at 1:06 PM, Manoj Jaiswal manoj.jaiswa...@gmail.com
wrote:
three node kafka cluster with replication factor of 3 is bad design.
It should always
Hello storm-users,
I have successfully deployed to AWS EC2 instances a couple of times.
But since last Friday, whenever I try to set up a new cluster, I keep failing due
to a Ganglia setup problem.
I've looked at this report:
https://groups.google.com/forum/#!topic/storm-user/xRB1zMwT-fY
But Ben’s
Hi all,
I would like to implement a topology where the spout receives HTTP
requests. Is there a code sample that would be a good starting point for my
implementation? Is Kafka designed for this use case?
Thanks for your help.
D. Debarbieux
Hello Harsha:
I am mainly just using the default settings...
BrokerHosts zk = new ZkHosts(zkConnect);
SpoutConfig spoutConfig = new SpoutConfig(zk, kafkaTopic,
"/kafkaStorm", spoutComponentId);
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());