Thanks Taylor.
I will check it out.
Richards Peter.
When running topologies in local mode, if a worker dies the JVM crashes.
Is there a way to notify the tests of this?
--
*With Regards *
*RAHUL MITTAL*
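(Since local-mode executors run as threads inside the test JVM, one possible
way to surface such a crash to a test is a default uncaught-exception handler.
This is a minimal sketch under that assumption, not a Storm API; the class
name is hypothetical.)

    import java.util.concurrent.atomic.AtomicReference;

    public class WorkerCrashGuard {
        // Install before submitting the topology to LocalCluster; the test
        // can then assert that no thread died with an uncaught throwable.
        public static AtomicReference<Throwable> install() {
            final AtomicReference<Throwable> failure = new AtomicReference<Throwable>();
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread t, Throwable e) {
                    failure.compareAndSet(null, e); // keep the first failure for the test to inspect
                }
            });
            return failure;
        }
    }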
Hi all,
I have a small confusion. I am using Trident for my application.
I have one flow where:
spout1 -> bolt1 -> bolt2 -> bolt3
spout2 -> bolt1
spout3 -> bolt1
I have the above topology, where spout1, spout2, and spout3 are all connected to bolt1.
I want to ask
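(For reference, a minimal sketch of how that fan-in could be wired with the
core Storm API; the poster mentions Trident, but the shape is the same. The
spout and bolt class names are placeholders, and shuffle groupings are
assumed.)

    import backtype.storm.generated.StormTopology;
    import backtype.storm.topology.TopologyBuilder;

    public static StormTopology buildFanIn() {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout1", new Spout1()); // placeholder spout classes
        builder.setSpout("spout2", new Spout2());
        builder.setSpout("spout3", new Spout3());
        builder.setBolt("bolt1", new Bolt1())     // bolt1 fans in from all three spouts
               .shuffleGrouping("spout1")
               .shuffleGrouping("spout2")
               .shuffleGrouping("spout3");
        builder.setBolt("bolt2", new Bolt2()).shuffleGrouping("bolt1");
        builder.setBolt("bolt3", new Bolt3()).shuffleGrouping("bolt2");
        return builder.createTopology();
    }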
You would need to design a custom scheduler:
http://xumingming.sinaapp.com/885/twitter-storm-how-to-develop-a-pluggable-scheduler/
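(A skeleton of what such a scheduler looks like, assuming the 0.9.x
backtype.storm.scheduler API described in that article; the class name and
the colocation logic are placeholders, not a worked implementation.)

    import java.util.Map;
    import backtype.storm.scheduler.Cluster;
    import backtype.storm.scheduler.EvenScheduler;
    import backtype.storm.scheduler.IScheduler;
    import backtype.storm.scheduler.Topologies;

    public class ColocationScheduler implements IScheduler {
        public void prepare(Map conf) { }

        public void schedule(Topologies topologies, Cluster cluster) {
            // 1) place the two bolts' executors on slots of one supervisor
            //    here, e.g. via cluster.assign(...)
            // 2) let the default scheduler handle everything else
            new EvenScheduler().schedule(topologies, cluster);
        }
    }

It would then be registered on nimbus via storm.yaml (storm.scheduler set to
the class name).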
On Wed, Aug 6, 2014 at 5:08 AM, Spico Florin spicoflo...@gmail.com wrote:
Hello!
I have a use case where I need two bolts to be colocated
either on
I see that your zookeeper is listening on port 2000. Is that how you have
configured the zookeeper?
--
Kushan Maskey
817.403.7500
On Tue, Aug 5, 2014 at 11:56 AM, Sa Li sa.in.v...@gmail.com wrote:
Thank you very much, Marcelo, it indeed worked; now I can run my code
without getting errors.
Hi Andrey,
I had a lot of trouble with Storm Netty (basically multiple workers not
working) and I still haven't found out why.
It turns out that I have a lot of Cassandra communication in my code
(Astyanax).
So I am interested in the issue you faced; could you give a bit more
detail about
I had so many problems with Netty earlier. I removed the 3.2.2 version
from the Storm 0.9.2 package and it is much better.
--
Kushan Maskey
817.403.7500
On Wed, Aug 6, 2014 at 9:01 AM, Romain Leroux leroux@gmail.com wrote:
Hi Andrey,
I had a lot of trouble with Storm Netty (basically
+1 for failure testing. We have used other similar tools in the past to
simulate different situations like network cuts, high packet loss, etc. I
would love to see more of this happen, and to see the scheduler get smart
enough to detect these situations and deal with them.
- Bobby
From: P. Taylor
You can try googling "storm pluggable scheduler" and using the Google cached
version of the page. Also, the GitHub link (this one?
https://github.com/xumingming/storm-lib/blob/master/src/jvm/storm/DemoScheduler.java)
works for me.
-Nathan
On Wed, Aug 6, 2014 at 10:26 AM, Spico Florin
Hmm, I am trying to figure out what I can share to reproduce this.
I will try this with a simple topology and see if it can be reproduced. I
will also try Srinath's approach of having only one worker/slot per node
and keeping a spare. If that works, I would have a somewhat launchable
scenario
I investigated a bit and figured out from the top command that 1 or 2 of the 4
topologies were running at full speed while the others were starving for CPU.
An even stranger observation was that even after stopping the topologies that
were running properly, the other topologies did not pick up.
Has anyone seen any such
Can you set zkPort in SpoutConfig to 2181 in your topology builder and see
if that helps?
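(A sketch of that change, assuming the storm-kafka SpoutConfig fields; the
host, topic, zkRoot, and id values are placeholders drawn from this thread.)

    import storm.kafka.BrokerHosts;
    import storm.kafka.SpoutConfig;
    import storm.kafka.ZkHosts;

    public static SpoutConfig buildSpoutConfig() {
        BrokerHosts hosts = new ZkHosts("10.100.70.128:2181"); // Kafka's external zookeeper
        SpoutConfig spoutConfig = new SpoutConfig(hosts, "topictest", "/kafkastorm", "kafka-spout");
        spoutConfig.zkPort = 2181; // store spout offsets in the external zookeeper
        return spoutConfig;
    }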
--
Kushan Maskey
817.403.7500
On Wed, Aug 6, 2014 at 2:34 PM, Sa Li sa.in.v...@gmail.com wrote:
Hi, Kushan
You are completely right, I noticed this after you mentioned it,
apparently I am able to
You are running in local mode, so Storm will start an in-process zookeeper for
its own use (usually on port 2000). In distributed mode, Storm will connect to
the zookeeper quorum specified in your storm.yaml.
In local mode, you would only need the external zookeeper for Kafka and the
kafka
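(A minimal sketch of that split, under the assumption above: LocalCluster
brings up its own zookeeper for Storm, while the Kafka spout points at the
external one. The topology parameter and names are placeholders.)

    import backtype.storm.Config;
    import backtype.storm.LocalCluster;
    import backtype.storm.generated.StormTopology;

    public static void runLocally(StormTopology topology) {
        // LocalCluster starts an in-process zookeeper (often on port 2000)
        // for Storm's internal state; the Kafka spout inside the topology
        // must still be configured with Kafka's external zookeeper.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("local-test", new Config(), topology);
    }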
Thanks, Taylor, that makes sense. I checked my Kafka config; the
host.name=10.100.70.128,
and correspondingly changed the spout config to
BrokerHosts zk = new ZkHosts("10.100.70.128");
TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
it used to be localhost, actually
You have two different versions of zookeeper on the classpath (or in your
topology jar).
You need to find out where the conflicting zookeeper dependency is sneaking in
and exclude it.
If you are using Maven, 'mvn dependency:tree' and exclusions will help.
-Taylor
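(For example, the dependency plugin's filter can narrow the tree to
zookeeper, and an exclusion on whichever artifact drags in the old version
clears the conflict; the groupId/artifactId below are placeholders.)

    mvn dependency:tree -Dincludes=org.apache.zookeeper

    <dependency>
      <groupId>some.group</groupId>              <!-- placeholder: the artifact pulling in the old zookeeper -->
      <artifactId>offending-artifact</artifactId>
      <exclusions>
        <exclusion>
          <groupId>org.apache.zookeeper</groupId>
          <artifactId>zookeeper</artifactId>
        </exclusion>
      </exclusions>
    </dependency>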
On Aug 6, 2014, at 6:14 PM,
Thanks Vinay. That seemed to work fine for me, but let me re-test it.
Taylor, I'm away from lab resources right now, but once I am able to, I'll
run some tests and report back with debug logs for nimbus, supervisor, and
worker on a sample topology. In my case there were tuples being processed
at