Hi all,
I'm having trouble figuring out how to make Storm log to a file. From my
understanding, the logback cluster.xml should specify the location of the
output logs. However, looking at this file, the output path is given as
${storm.log.dir}. How can I find out what this value is?
After installing storm, look at storm/logback/cluster.xml under the
RollingFileAppender.
The log files are written to the storm/logs folder by default.
You can create a symbolic link before starting Storm. For example, if Storm is
unzipped in /opt/storm and you would like to log to /mylogs:
ln -s -f /mylogs /opt/storm/logs
Where is the storm/logs folder located? I don't see a logs folder under
storm, where storm is the unzipped Storm folder.
I also looked in ~/.m2/repository/org/apache/storm/ and do not see a logs
folder.
On Mon, Oct 27, 2014 at 12:35 AM, Itai Frenkel i...@forter.com wrote:
Hello again,
I have no progress so far, trying other topologies like the exclamation
example.
On 10/24/2014 at 09:47 PM, Benjamin Peter wrote:
Submitting the topology works but it does not seem to do anything.
Emitted 0
UI Screenshots:
Overview
http://goo.gl/pgtHcH
(Google Drive link)
With the current implementation of CombinerAggregatorCombineImpl, you
can modify and return the first value if you want to avoid the small
overhead of creating new instances (it adds up!).
It would be 100% safe only if this behaviour were documented and contractual
for a CombinerAggregator.
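To illustrate the pattern being discussed, here is a sketch using a simplified stand-in for Trident's CombinerAggregator interface (the real one lives in storm.trident.operation and takes a TridentTuple in init(); the interface and MutableSum class below are invented for the example so it is self-contained):

```java
// Simplified stand-in for Trident's CombinerAggregator interface.
interface CombinerAggregator<T> {
    T init(long input);
    T combine(T val1, T val2);
    T zero();
}

// Mutable accumulator: combine() modifies and returns the first value,
// avoiding a new allocation on every combine call.
class MutableSum implements CombinerAggregator<long[]> {
    public long[] init(long input) { return new long[] { input }; }
    public long[] combine(long[] val1, long[] val2) {
        val1[0] += val2[0];   // mutate val1 in place...
        return val1;          // ...and return it, as described above
    }
    public long[] zero() { return new long[] { 0 }; }
}
```

Note that this only stays safe as long as the caller never reuses the second argument after combine() returns, which is exactly the undocumented assumption the thread warns about.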
What is the best way to capture start/end times of functions/aggregators in
Trident? I am interested in capturing elapsed times for each, but would prefer
not having to pass the info around in tuples, or write incrementally to a
database. Wondering if anyone else has done this and how.
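One lightweight way to capture elapsed times without threading them through tuples or a database is to wrap each operation in a timing decorator that delegates the real work and accumulates counters on the side. A sketch (the Timed class and Op interface below are illustrative, not part of the Storm/Trident API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical wrapper: times each call to a wrapped operation and keeps
// running totals that can later be reported through a metric.
class Timed<T, R> {
    interface Op<T, R> { R apply(T input); }

    private final Op<T, R> delegate;
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong calls = new AtomicLong();

    Timed(Op<T, R> delegate) { this.delegate = delegate; }

    R apply(T input) {
        long start = System.nanoTime();
        try {
            return delegate.apply(input);
        } finally {
            totalNanos.addAndGet(System.nanoTime() - start);
            calls.incrementAndGet();
        }
    }

    long averageNanos() {
        long n = calls.get();
        return n == 0 ? 0 : totalNanos.get() / n;
    }
}
```

In a real topology the same idea can be applied by subclassing the function or aggregator and reporting the counters through a registered metric instead of reading them directly.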
I've got a couple of specific questions about tuple ordering and failing
tuples. Given a topology like so - spout S outputting to both B1 and B2, B2
outputs to B3:
  / B1
S
  \ B2 - B3
If a tuple is emitted to both B1 and B2, and it is explicitly failed at B1
before the same tuple is processed by
Hi Daniel,
Have you tried using this project
https://github.com/miguno/wirbelsturm ?
here are notes on AWS and EC2
https://github.com/miguno/wirbelsturm/blob/master/docs/AWS.md
I find this tool very convenient for easy management of a Storm cluster,
both locally and on AWS.
Hope this helps.
It seems to be a bug in Storm unless someone confirms otherwise.
How can I file a bug for Storm?
On 25 Oct 2014 07:51, Devang Shah devangsha...@gmail.com wrote:
You are correct, Taylor. Sorry, I missed mentioning all the details.
We have topology.max.spout.pending set to 1000 and we have not
Hi,
I know this scenario is a bit extreme, but possible.
If a tuple fails constantly, will Storm send it over again and again?
Is there some threshold to stop it?
Thank you,
Vladi
Hi all,
I have a bolt that registers a MultiCountMetric, with the time bucket set to 10
seconds. The metrics aren’t being sent until the bolt’s stream experiences a
lull. For example, if I emit 10 tuples into the stream, and the bolt takes 8
seconds to process them, then the metrics will be
Storm 0.9.2-incubating upgraded a lot of library dependencies, so it's
possible that there was a conflict somewhere. In general, it is good
practice to build your application against the version of Storm that you
plan to run.
On Mon, Oct 27, 2014 at 3:07 PM, Benjamin Peter
As Nathan said, it is up to the spout.
Most of the spouts I’ve worked on/with do not track the number of times a
specific tuple fails. With failures due to timeouts, you probably don’t want to
stop replaying them (it would lead to data loss by circumventing Storm’s
guaranteed delivery
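A spout that does want to bound replays has to do its own bookkeeping, since Storm only reports the fail. A minimal sketch of such bookkeeping (the RetryTracker class is illustrative, not part of the Storm API):

```java
import java.util.HashMap;
import java.util.Map;

// Tracks per-message failure counts so a spout's fail() callback can
// decide whether to replay a tuple or drop it (accepting data loss).
class RetryTracker {
    private final int maxRetries;
    private final Map<Object, Integer> failures = new HashMap<>();

    RetryTracker(int maxRetries) { this.maxRetries = maxRetries; }

    /** Called from the spout's fail(msgId); true means replay the tuple. */
    boolean shouldReplay(Object msgId) {
        int count = failures.merge(msgId, 1, Integer::sum);
        if (count > maxRetries) {
            failures.remove(msgId);  // give up: drop and stop tracking
            return false;
        }
        return true;
    }

    /** Called from the spout's ack(msgId) to forget a delivered tuple. */
    void onAck(Object msgId) { failures.remove(msgId); }
}
```

As the thread notes, dropping after a retry cap trades Storm's delivery guarantee for bounded work, so it only makes sense where data loss is acceptable.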
The metrics system operates on the same thread as the bolt, so this is the
expected behavior. We did it this way so that metrics would be very fast:
updating, reading, and emitting metrics require no locking or synchronization
whatsoever.
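The single-threaded design described above can be sketched as follows (a simplified stand-in loosely modeled on MultiCountMetric's getValueAndReset semantics, not the real Storm metrics API): because update and snapshot both run on the executor thread, a plain HashMap with no synchronization is enough.

```java
import java.util.HashMap;
import java.util.Map;

// All methods are called from the single executor thread that also runs
// the bolt, so no locks or volatile fields are needed.
class ThreadLocalCounts {
    private Map<String, Long> counts = new HashMap<>();

    void incr(String key) {
        counts.merge(key, 1L, Long::sum);
    }

    // Snapshot-and-reset, invoked on the same thread at each metrics tick.
    Map<String, Long> getValueAndReset() {
        Map<String, Long> snapshot = counts;
        counts = new HashMap<>();
        return snapshot;
    }
}
```

The flip side of this design is what the earlier message in this digest observed: the snapshot only happens when the executor thread gets around to the metrics tick, so a busy bolt delays emission.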
On Mon, Oct 27, 2014 at 11:42 AM, Jake Dodd j...@ontopic.io wrote:
Hi all,
Can you please share some links with detailed explanations of the following
and how they are implemented:
Acking framework
Hooks
Grouping
Metrics
Trident internals
Heartbeats among nimbus, supervisors, and workers
Thanks,
Tarkesh