Usually in such cases you should start by looking at the logs: supervisor
and worker.
On Wed, Dec 3, 2014 at 6:09 PM, clay teahouse clayteaho...@gmail.com
wrote:
Hello All,
I have this configuration:
spout -> Bolt A (emits tuples) -> Bolt B
Bolt A emits tuples successfully, but Bolt B stops
What's your topology's overall latency? Looks like you have really slow bolts.
On Wed, Dec 3, 2014 at 9:02 PM, Mike Thomsen mikerthom...@gmail.com wrote:
I've found that our topology will fetch a large block of messages from
kafka and wait about 14-15 minutes before going back to kafka for more
Hi All,
We use Storm 0.8.2. Our throughput is 400K messages per sec.
From the Storm UI we calculate that the total latency of all spouts and bolts is
~3.5 min to process 10 min of data, but in reality it takes 13 min!
Obviously it creates huge backlogs.
We don't have time-out failures at all. We don't
Hi
Sounds to me like you need an offline ETL process (MR/Shark) to get the
processed data into the DB.
Storm fits the use cases where you have a continuous data stream and need
processing with low latency.
On 1 Dec 2014 04:26, Stadin, Benjamin
benjamin.sta...@heidelberg-mobil.com wrote:
Hi all,
Hi
Check that you don't have blocking code, for example writing to the DB without
a timeout, or similar cases where you wait a long or unlimited time for an
operation to finish.
Secondly, I recommend you use ackers (with the max spout pending parameter);
this way you can control the stream...
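For reference, the knobs mentioned above in a minimal config sketch (assuming Storm 0.8.x's `backtype.storm.Config`; the values are illustrative starting points, not recommendations):

```java
import backtype.storm.Config;

// Sketch: enable ackers and cap in-flight tuples so that slow
// downstream work throttles the spout instead of building a backlog.
Config conf = new Config();
conf.setNumAckers(2);            // acker executors; must be > 0 for acking
conf.setMaxSpoutPending(1000);   // max un-acked tuples per spout task
conf.setMessageTimeoutSecs(30);  // un-acked tuples fail after this long
```

With max spout pending set, the spout stops emitting once the limit of un-acked tuples is reached, which is what gives you back-pressure.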
On 1 Dec 2014 05:54, 이승진
hi,
It's a matter of fine-tuning and depends on your topology; there is no single
golden number. Start from 1K and monitor it. If you see that your topology
supports higher throughput, increase it... and if not, decrease it.
Vladi
On Sat, Nov 15, 2014 at 9:27 AM, Nilesh Chhapru
Hi,
Maybe you can override the spout's ack/fail methods and take it from there, in
case you use ackers.
Vladi
On Fri, Nov 14, 2014 at 4:35 PM, Vadim Smirnov v2smir...@mail.ru wrote:
Unfortunately, it is all about creating metrics and reading them outside. I
want to read the metric value from the spout code.
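A minimal, self-contained sketch of the "override ack/fail" idea: keep counters inside the spout so the spout code itself can read the values. The class and method names below are mine for illustration; in a real Storm spout you would put these updates in the overridden `ack(Object msgId)` and `fail(Object msgId)` of `BaseRichSpout`.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch (not the Storm API itself): counters updated from ack/fail,
// readable from anywhere inside the spout, e.g. from nextTuple().
class CountingSpout {
    private final AtomicLong acked = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    public void ack(Object msgId)  { acked.incrementAndGet(); }
    public void fail(Object msgId) { failed.incrementAndGet(); }

    // Metric values available from the spout code itself:
    public long ackedCount()  { return acked.get(); }
    public long failedCount() { return failed.get(); }
}
```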
Hi
It doesn't make sense to call both on the same tuple
Vladi
On Wed, Nov 12, 2014 at 11:19 PM, William Oberman ober...@civicscience.com
wrote:
A coworker and I are debating this code:
--
try {
    ...
} catch (Exception e) {
    collector.fail(tuple);
}
collector.ack(tuple);
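The usual pattern is to call exactly one of ack/fail per tuple: ack inside the try on success, fail in the catch on error. A sketch of that shape, with a stand-in collector class (mine, for illustration) in place of Storm's real `OutputCollector`:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Storm's OutputCollector, just to make the pattern concrete.
class Collector {
    final List<String> calls = new ArrayList<>();
    void ack(Object tuple)  { calls.add("ack"); }
    void fail(Object tuple) { calls.add("fail"); }
}

class Bolt {
    // Exactly one of ack/fail per tuple: ack on success, fail on error.
    static void execute(Object tuple, Collector collector, Runnable work) {
        try {
            work.run();
            collector.ack(tuple);        // success path only
        } catch (Exception e) {
            collector.fail(tuple);       // tuple will be replayed upstream
        }
    }
}
```

In the quoted code, a failed tuple gets both fail() and then ack(), so the failure is effectively swallowed.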
Hi,
I know this scenario is a bit extreme, but possible.
If for some reason a tuple fails constantly, will Storm send it over again
and again?
Is there some threshold to stop it?
Thank you,
Vladi
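As far as I know, Storm 0.8.x itself has no built-in retry cap: a failed tuple is replayed for as long as the spout keeps re-emitting it. A common workaround (a sketch; the class and names are mine) is to count failures per message id in the spout's fail() and drop the tuple after a threshold:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: cap replays by counting failures per message id.
// Call shouldRetry(msgId) from the spout's fail() before re-emitting.
class RetryPolicy {
    private final int maxRetries;
    private final Map<Object, Integer> failures = new HashMap<>();

    RetryPolicy(int maxRetries) { this.maxRetries = maxRetries; }

    // Returns true if the tuple should be re-emitted, false if dropped.
    boolean shouldRetry(Object msgId) {
        int n = failures.merge(msgId, 1, Integer::sum);
        if (n > maxRetries) {
            failures.remove(msgId);   // give up; log or dead-letter it
            return false;
        }
        return true;
    }
}
```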
Hi All,
We use the Hector client to write to Cassandra; it's our last bolt.
It has relatively high latency: 5-7 ms (in the Storm UI). When we measure the
writes between Cassandra servers, it shows 0.5 ms.
So we suspect something is wrong with Hector.
Does someone have a similar issue?
Any ideas ?
Hi ALL,
In our topology, for some bolts (the bolts writing to DB/Cassandra) the
Execute Latency is twice as high as the Process Latency. Does it mean that
acks in these bolts take time equal to the bolt processing?
What can be the reason for this? How can we reduce it?
Thank you,
Vladi
Hi All,
Is there any way to monitor or to see the status of the 0mq and disruptor
queues?
Is there a way to know whether the queues are full or empty, for example?
Thank you in advance,
Vladi
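One option in later Storm versions (the metrics API arrived around 0.9, so this likely does not apply to 0.8.2) is to register a metrics consumer; Storm's built-in per-executor metrics include send/receive queue population and capacity, which tell you how full the disruptor queues are. A config sketch:

```java
import backtype.storm.Config;
import backtype.storm.metric.LoggingMetricsConsumer;

// Assumes Storm 0.9+ metrics API (not available in 0.8.2).
// Built-in queue metrics are then written to the metrics log,
// where you can watch queue population vs. capacity.
Config conf = new Config();
conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
```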
Hi All,
We're experiencing very strange topology behavior: after a few days,
sometimes hours (it looks like during peak load), the spouts get stuck.
The data stops streaming and we lose a lot of data.
We read the data from Kafka (we use the Kafka spout). Storm version 0.8.2.
Does someone have something
the execute method. This was
causing the messages to remain in pending state, and the spout was not
emitting as the pending messages were not completed. Wondering if you have
similar issues in your code.
On Tue, Oct 21, 2014 at 1:50 PM, Vladi Feigin vladi...@gmail.com wrote:
Consider that out of 1, half of them failed for different
reasons; then looking in sigmund will still give you errors, but you
would not be able to pinpoint them to a specific tuple id.
--
*From:* Vladi Feigin vladi...@gmail.com
*Sent:* Monday, October 13