Maybe you could take a thread dump and check if storm's executor threads
are stuck on a blocking call.
On Wed, Sep 24, 2014 at 2:24 AM, Tomas Mazukna tomas.mazu...@gmail.com
wrote:
config.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 60);
submit topology and after about 4 hours tick tuples
The solution is to use a Singleton to hold the pool. Guard the
initialization of the Singleton such that when the 'N' tasks on a worker try to
initialize it, only one of them succeeds and the rest of them see that the
pool is already initialized. This you would do once in the prepare()
method. There
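A minimal sketch of such a guarded Singleton holder, assuming a hypothetical ConnectionPool class (double-checked locking on a volatile field):

    import java.util.Map;

    public class PoolHolder {
        private static volatile ConnectionPool pool;

        // Every task calls this from prepare(); only the first call creates
        // the pool, the rest see the already-initialized instance.
        public static ConnectionPool getPool(Map stormConf) {
            if (pool == null) {
                synchronized (PoolHolder.class) {
                    if (pool == null) {
                        pool = new ConnectionPool(stormConf); // hypothetical pool class
                    }
                }
            }
            return pool;
        }
    }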
Yes, the defaults mentioned in the blog worked for me too.
On Wed, Aug 27, 2014 at 2:49 AM, Kushan Maskey
kushan.mas...@mmillerassociates.com wrote:
These changes did help a lot. Now I don't see any failed acks for almost
70k data load. Thanks a lot for your help.
--
Kushan Maskey
I would suspect that at some point the rate at which the spouts emitted
exceeded the rate at which the bolts could process. Maybe you could look at
configuring the buffers (if you haven't yet done that). Do your records get
processed at a constant rate?
On Tue, Aug 26, 2014 at 4:12 AM, Kushan
/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions
[2] https://github.com/aphyr/jepsen
On Aug 5, 2014, at 8:39 PM, Srinath C srinat...@gmail.com wrote:
I have seen this behaviour too using 0.9.2-incubating.
The failover works better when there is a redundant node available. Maybe 1
slot per node is the best approach.
Eager to know if there are any steps to further diagnose.
On Wed, Aug 6, 2014 at 5:43 AM, Vinay Pothnis vinay.poth...@gmail.com
You could use tick tuples https://coderwall.com/p/l2vl-w to do that. The
bolt can be configured to receive periodic ticks which you can use to do
the batch insert.
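A rough sketch of such a bolt's execute(), assuming Storm 0.9.x constants; batch.add() and flushBatch() are placeholders for your own buffering and insert logic:

    private static boolean isTickTuple(Tuple tuple) {
        return Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
                && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
    }

    public void execute(Tuple tuple) {
        if (isTickTuple(tuple)) {
            flushBatch();          // do the batch insert on every tick
        } else {
            batch.add(tuple);      // just buffer normal tuples
        }
        collector.ack(tuple);
    }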
On Fri, Jul 11, 2014 at 1:35 PM, 이승진 sweetest...@navercorp.com wrote:
Hi all,
Assume that there are one spout and one bolt.
Yes, it can be applied to specific bolts not to all bolts in the topology.
You have to specify the property on the Config object passed to the
topology builders.
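As a sketch (assuming Storm 0.9.x), another way to scope the tick frequency to a single bolt is to override getComponentConfiguration() in that bolt instead of setting it globally:

    @Override
    public Map<String, Object> getComponentConfiguration() {
        // Deliver a tick tuple to this bolt (only) every 60 seconds
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 60);
        return conf;
    }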
On Fri, Jul 11, 2014 at 1:59 PM, 이승진 sweetest...@navercorp.com wrote:
great, I think I have to try this right away.
so the tick
On Wed, Jul 9, 2014 at 11:34 AM, Preetam Rao rao.pree...@gmail.com wrote:
Hi
Appreciate any pointers on the following which are causing us problems in
production.
1. Is there a way we can restrict multiple instances of a given spout to be
allocated on different hosts? Our spouts start a
Is this happening when there are a lot of tuples emitted? I suspect it's
because of buffers getting filled.
Check the capacity of the bolt to which these tuples are getting
transferred (in the metrics page on storm ui).
Once you confirm that, try increasing the buffers for the executors.
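A sketch of the executor buffer settings on the topology Config; the values below are only illustrative starting points, not something tuned for this workload:

    Config conf = new Config();
    // These sizes back the internal per-executor queues
    conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
    conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);
    conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE, 32);
    conf.put(Config.TOPOLOGY_RECEIVER_BUFFER_SIZE, 8);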
On
Hi Nima,
Use the reliable message processing
https://github.com/nathanmarz/storm/wiki/Guaranteeing-message-processing
mechanism
to ensure that there is no data loss. You would need support for
transactional semantics from the tuple source where spout can commit/abort
a read (kestrel, kafka,
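A very rough sketch of the spout side, with illustrative names for the source client; the message id passed to emit() is what comes back in ack()/fail():

    public void nextTuple() {
        Message msg = source.poll();               // hypothetical source client
        if (msg != null) {
            // The message id ties later ack/fail callbacks back to this read
            collector.emit(new Values(msg.getBody()), msg.getId());
        }
    }

    public void ack(Object msgId) {
        source.commit(msgId);                      // safe to forget the message
    }

    public void fail(Object msgId) {
        source.requeue(msgId);                     // have it re-delivered
    }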
Hi Shaikh,
You may want to configure your internal buffers. See this blog -
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
Increase the parallelism of your spouts and bolts depending on the
system configuration. This is not simple and will require
who produce that!
DSK | vdc | busy 91% | read 0 | write 927 | KiB/r 0 | KiB/w 4 | MBr/s 0.00 | MBw/s 0.75 | avq 1.00 | avio 4.88 ms
Regards,
Andres
On 28/05/2014, at 04:01, Srinath C srinat...@gmail.com wrote:
Apart from
That's not true. The tuples are ack'd as soon as all the tuples in the tuple
tree are ack'd.
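In a downstream bolt this looks roughly like the sketch below: the emitted tuple is anchored to the input so it joins the same tuple tree, and the input is ack'd once it has been handled (transform() is a placeholder):

    public void execute(Tuple input) {
        // Anchoring on 'input' adds the new tuple to the same tuple tree
        collector.emit(input, new Values(transform(input)));
        collector.ack(input);
    }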
On Fri, May 30, 2014 at 6:58 AM, Phil Burress philburress...@gmail.com
wrote:
Stupid question here perhaps... but I've noticed that the Spout in a
Topology doesn't get ack'd until
Apart from the autopurge options, also set the number of transactions after
which a snapshot is taken (snapCount). This number should be set depending
on the rate of updates to zookeeper.
On Wed, May 28, 2014 at 12:32 AM, Danijel Schiavuzzi dani...@schiavuzzi.com
wrote:
You have to configure
#10 - 5 points
On Sat, May 17, 2014 at 6:18 AM, Jason Jackson jasonj...@gmail.com wrote:
#10 - 5 points.
On Fri, May 16, 2014 at 1:34 PM, Brian Enochson brian.enoch...@gmail.com
wrote:
#10 - 3 Points.
#1 - 1 Point
#2 - 1 Point
Thanks,
Brian
On Thu, May 15, 2014 at
No, this is not possible.
On Tue, May 13, 2014 at 1:50 PM, Amikam Snir amikams...@gmail.com wrote:
Hi all,
Is there any way to change topology at runtime?
For example:
1. Adding new spout instance and wiring it?
2. Changing the number of defined spouts/bolts?
On Mon, May 12, 2014 at 6:57 AM, Srinath C srinat...@gmail.com wrote:
Hi,
I'm facing a strange issue running a topology on version
0.9.1-incubating with Netty as transport.
The topology has two worker processes on the same worker machine.
To summarize the behavior, on one
Adding new spouts/bolts and rewiring the topology cannot be done using
rebalance.
Only the number of executors can be changed.
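The upper bound for that is fixed at submit time by the number of tasks; a sketch (ProcessBolt and the component names are made up):

    TopologyBuilder builder = new TopologyBuilder();
    // 4 executors initially, 8 tasks: rebalance can later raise the executor
    // count up to 8, but the components and their wiring stay as declared here.
    builder.setBolt("process", new ProcessBolt(), 4)
           .setNumTasks(8)
           .shuffleGrouping("reader");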
On Tue, May 13, 2014 at 3:13 PM, Xing Yong xyong...@gmail.com wrote:
Using the storm client command line tools, see the command 'rebalance'.
2014-05-13 16:20 GMT+08:00
Hi,
I'm facing a strange issue running a topology on version
0.9.1-incubating with Netty as transport.
The topology has two worker processes on the same worker machine.
To summarize the behavior, on one of the worker processes:
- one of the bolts is not getting executed: The bolt
an executor consumes from that queue.
On Mon, May 12, 2014 at 8:29 PM, Srinath C srinat...@gmail.com wrote:
Hi Padma,
See inline for my replies
On Mon, May 12, 2014 at 4:39 PM, padma priya chitturi
padmapriy...@gmail.com wrote:
Hi,
Few questions on your issue:
1. As soon as you
Hi Jon,
Can you share the exceptions related to zookeeper? Are you doing some
heavy network activity during prepare?
On my topology I see one connection established to zookeeper from every
worker process and the supervisor.
And as far as I know there are some writes every few seconds
Hi Weide,
I haven't found any material on this matter. But as far as I could figure
out, the strategy seems to try to evenly divide the total number of
executors among the worker processes. If a particular spout/bolt has
multiple executors, it tries to span them across all the worker processes.
Once you lose the zookeeper quorum, I have seen that the workers keep
throwing exceptions that they are not able to connect to the zookeeper. But
I haven't seen them die because of this. I have even seen them recover once
the quorum is restored.
But if the worker process gets killed, they don't
There is no ack() or fail() on a BaseRichBolt. I'm not sure I understand.
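With a BaseRichBolt the acking is done explicitly through the OutputCollector inside execute(); a minimal sketch (process() is a placeholder):

    public void execute(Tuple input) {
        try {
            process(input);            // your processing logic
            collector.ack(input);      // mark the tuple as fully handled
        } catch (Exception e) {
            collector.fail(input);     // ask the spout to replay it
        }
    }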
On Fri, Apr 18, 2014 at 11:24 PM, 羅以豪 sealleo2...@gmail.com wrote:
Hi guys, I've been stuck with this problem for days.
I run storm 0.9.0.1 and I also follow some rules from the official guide for
message handling, such
not quite sure why your solution would remedy
the problem (seems like there are more tuples in flight in the system),
but it's great that you could provide a working setup.
Michael
On Tue, Apr 15, 2014 at 11:33 PM, Srinath C srinat...@gmail.com wrote:
Hi Michael,
I experimented a bit
Hi Otis,
The types of spouts and bolts and the inter-connectivity between them are
fixed. The number of tasks for the spouts and bolts is configurable but is also
fixed for the lifetime of the topology. The topology should be re-deployed
if it changes. However, the number of threads that work with
chunk is grabbed. So the whole jar does need to be stored in RAM.
On Tue, Apr 8, 2014 at 6:14 PM, Srinath C srinat...@gmail.com wrote:
Thanks for the reply Jason.
Supervisor doesn't need it in the classpath. But to provide the classpath
to the worker processes it must be transferring the jar
Hi,
I'm trying to figure out a reasonable amount of heap to grant to the
supervisor process on the storm worker machines. What are the factors that
must be considered? I'm thinking - size of the topology jar and number of
slots should be considered. But would like to hear if anyone was able to
Vinay,
The exception is raised from *KryoTupleSerializer*, so one of the
values in your tuple directly or indirectly references instances of
*ByteArrayLongString.* This is a class from the RabbitMQ client library.
One of the possibilities could be that you are adding all client
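One possible workaround, assuming the downstream bolts only need the payload, is to emit plain values extracted from the delivery instead of the RabbitMQ client objects themselves (names below follow the RabbitMQ Java client, but treat this as a sketch):

    // In the spout, when a delivery arrives:
    byte[] body = delivery.getBody();                            // plain byte[] serializes fine
    String routingKey = delivery.getEnvelope().getRoutingKey();
    long deliveryTag = delivery.getEnvelope().getDeliveryTag();
    collector.emit(new Values(routingKey, body), deliveryTag);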
/documentation/Concepts.html
Search there for Stream groupings
On Thu, Mar 20, 2014 at 10:11 AM, Srinath C srinat...@gmail.com wrote:
Anyone?
On Wed, Mar 19, 2014 at 6:35 AM, Srinath C srinat...@gmail.com wrote:
Hi,
Can anyone point me to some notes on how storm decides to distribute
These configs are interesting but undocumented on the wiki.
Thanks for the info.
On Fri, Mar 21, 2014 at 10:37 PM, Drew Goya d...@gradientx.com wrote:
Take a look at topology.optimize and storm.scheduler
I had the same issue and I found that setting topology.optimize to false
and
Anyone?
On Wed, Mar 19, 2014 at 12:03 PM, Srinath C srinat...@gmail.com wrote:
Hi,
I'm facing an issue with acknowledgement of tuples.
My topology has a spout (BaseRichSpout) that picks a message from RabbitMQ
and emits it with the deliveryTag as the message Id. The tuple
Yi, I haven't seen this error, but I think you definitely need a better
instance than t1.micro.
I was able to successfully get it up and running with m1.large after
failing to bring it up on m1.small.
But you could try m1.medium and see if that works for you.
On Fri, Mar 21, 2014 at 3:14 AM,
Hi,
I'm facing an issue with acknowledgement of tuples.
My topology has a spout (BaseRichSpout) that picks a message from RabbitMQ
and emits it with the deliveryTag as the message Id. The tuple is then
received by a bolt (BaseRichBolt) which processes the tuple. There are
other spouts and
You can have Bolt1 do:
public void execute(Tuple input) {
    // "JmsSpoutTuple" is the field name declared by the JMS spout
    Class1 class1Instance = (Class1) input.getValueByField("JmsSpoutTuple");
    // emit 2 values - class1Key1 and class1Instance
    collector.emit(new Values(class1Instance.class1Key1, class1Instance));
}
Then do field grouping for class1Key1 into
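The wiring could then look roughly like this (Bolt2 and the component names are assumptions; Bolt1 must declare class1Key1 in its declareOutputFields):

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("jmsSpout", new JmsSpout());
    builder.setBolt("bolt1", new Bolt1()).shuffleGrouping("jmsSpout");
    // All tuples sharing the same class1Key1 go to the same Bolt2 task
    builder.setBolt("bolt2", new Bolt2())
           .fieldsGrouping("bolt1", new Fields("class1Key1"));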
in the ack/fail
methods of the spout (so that you know what was acked/failed)
On Tue, Mar 18, 2014 at 6:01 PM, Srinath C srinat...@gmail.com wrote:
Hi,
I was unable to figure out if the messageId of a tuple emitted from a
spout should be globally unique? Or does storm identify a tuple
Hi,
I was unable to figure out if the messageId of a tuple emitted from a
spout should be globally unique? Or does storm identify a tuple with a
combination of spout name, spout task Id and messageId?
Thanks,
Srinath.
Hi,
Can anyone point me to some notes on how storm decides to distribute the
tasks among its workers. The behavior I am seeing is that all tasks of a
particular type are being grouped into one worker process.
To add more details to my use-case, I have a spout that is sourcing
tuples from a