Thanks everyone for the help. I found the issue. It turned out to be something
completely unrelated: someone had set up a JVM option to run a monitoring
application of their own on top of Storm, and every time anyone submitted a
topology, this application was also executed on the whole cluster…
Hello again,
I'd like to chip in, @Nikolaos, but you will probably not like what you will
have to do... I really think this is not Storm's fault; that would be really
weird. Also, is the JVM you use for local execution the same as the ones in
your cluster? Now... the best way to know…
Hi again, Yury.
Thanks for the help and the references. I will have a look.
My spout is quite simple. Here is the code:
public void nextTuple() {
    // pick two random node ids
    nodeIds[0] = random.nextInt(urlsNum);
    nodeIds[1] = random.nextInt(urlsNum);
    sent++;
    // the emit was cut off; assuming it sends both ids and anchors the tuple
    // with "sent" as the message id, since acking is enabled
    collector.emit("FirstNodeStream", new Values(nodeIds[0], nodeIds[1]), sent);
}
Yes, I suggest you try to spot the problem by looking at the heap dump of a
worker that throws the exception. That way you could at least be certain
about what consumes the workers' memory.
Oracle HotSpot has a number of options controlling GC logging; setting them
for worker JVMs may help in troubleshooting…
Hi Yury.
1. I am using Storm 0.9.5.
2. It is a BaseRichSpout. Yes, it has acking enabled, and I ack each tuple
at the end of the "execute" method of the bolt. I see tuples being acked in
Storm UI.
3. Yes, I observe memory usage increasing (which eventually leads to the
topology hanging) even in my dummy implementation…
Hi Nick,
Some questions:
1. Well, what version of Storm are you using? :)
2. What spout are you using? Is this spout reliable, i.e. does it use
message ids so that messages are acked/failed by downstream bolts? Do you
have ackers enabled for your topology? If it is unreliable or does not have…
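(The distinction in question 2, sketched; collector, x, and msgId are
placeholders:)

// Unreliable: no message id, so the tuple is never tracked, acked, or failed.
collector.emit(new Values(x));
// Reliable: the message id anchors the tuple so the ackers can track it.
collector.emit(new Values(x), msgId);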
Thanks for all the replies so far. I am profiling the topology in local
mode with VisualVM and I do not see this problem. I am still running into
this problem when the topology is deployed on the cluster, even with
max.spout.pending = 1.
On Wed, Jan 13, 2016 at 10:38 PM, John Yost wrote:
+1 for Andrew; I definitely agree that profiling with jvisualvm (or similar)
is something to do if you have not done it already.
On Wed, Jan 13, 2016 at 3:30 PM, Andrew Xor wrote:
Hey,
Care to give the versions of Storm and the JVM? Does this happen on cluster
execution only, or also when running the topology in local mode?
Unfortunately, probably the best way to find out what's really going on is
to profile your topology... if you can run the topology locally, this will
make things quite a lot easier…
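(A minimal local-mode sketch for Storm 0.9.x, so the topology can be
profiled with VisualVM; the topology name and builder are assumptions:)

import backtype.storm.Config;
import backtype.storm.LocalCluster;

Config conf = new Config();
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("pagerank-dummy", conf, builder.createTopology());
Thread.sleep(60_000); // keep the topology alive long enough to attach the profiler
cluster.shutdown();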
Hi Nikolaos,
Maybe try experimenting with max.spout.pending. You may have a buildup of
tuples due to a high max.spout.pending. I would also check the capacity of
each bolt in Storm UI, find which one(s) are ~1, add more executors for
those, and see how things look then; a sketch of both knobs follows below.
--John
On Wed, Jan 13, 2016 at 3:06 PM, Nikolaos…
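(Both knobs in code; the component names, classes, and numbers are
illustrative, not from this thread:)

import backtype.storm.Config;
import backtype.storm.topology.TopologyBuilder;

Config conf = new Config();
conf.setMaxSpoutPending(1000); // cap on in-flight (unacked) tuples per spout task

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("urls", new UrlSpout(), 1);   // UrlSpout is hypothetical
builder.setBolt("rank", new RankBolt(), 8)     // more executors for a bolt whose capacity is ~1
       .shuffleGrouping("urls", "FirstNodeStream");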
Hello,
I am implementing a distributed algorithm for PageRank estimation using
Storm. I have been having memory problems, so I decided to create a dummy
implementation that does not explicitly save anything in memory, to
determine whether the problem lies in my algorithm or in my Storm topology
structure. In…