Hmm...yes that's a better idea.
On Tue, May 10, 2016 at 3:12 PM, Matthias J. Sax wrote:
I am not sure if NimbusClient works well with LocalCluster. My
suggestion was based on the assumption, that you run in a real cluster.
There would be LocalCluster.killTopology(); maybe you should use this
method instead of NimbusClient.kill().
Using LocalCluster, I usually use the following pattern:
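(The pattern itself is cut off in the archive. A minimal sketch of the usual
LocalCluster lifecycle — topology name taken from this thread, everything
else assumed — might look like this:)

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.utils.Utils;

public class LocalRun {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... setSpout()/setBolt() calls go here ...

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("myStorm", new Config(), builder.createTopology());
        Utils.sleep(60_000);               // let the topology run for a while
        cluster.killTopology("myStorm");   // kill locally instead of via NimbusClient
        cluster.shutdown();
    }
}
```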
Turns out, using nimbus.seeds was sufficient.
import org.apache.storm.utils.NimbusClient;
import org.apache.storm.utils.Utils;

Map conf = Utils.readStormConfig();
conf.put("nimbus.seeds", "localhost");
NimbusClient cc = NimbusClient.getConfiguredClient(conf);
My bad.
The parameter is called "nimbus.seeds" (former "nimbus.host") and not
"nimbus.leader".
And I guess "build/libs" is not your working directory. (See your IDE's
run configuration settings.)
If in doubt, include a System.out.println(new File(".").getAbsolutePath());
(or similar) in your bolt.
@Spico: The code as promised:
http://nrecursions.blogspot.in/2016/05/more-concepts-of-apache-storm-you-need.html#morecreativetopologystructures
@Matthias: Still no luck. I tried this in the bolt code:
Map conf = Utils.readStormConfig();
conf.put("nimbus.leader", "localhost");
Also tried alteri
Utils.readStormConfig() tries to read "./storm.yaml" from local disk
(ie, on the supervisor machine that executes the bolt) -- as it uses the
working directory, I guess it does not find the file, and thus the value
"nimbus.host" is not set.
Make sure that storm.yaml is found by the worker, or set nimbus.host
@Spico: Will share.
The streams implementation is working beautifully.
Only the topology killing is failing.
Tried:

Map conf = Utils.readStormConfig();
NimbusClient cc = NimbusClient.getConfiguredClient(conf);
Nimbus.Client client = cc.getClient();
client.killTopology("myStorm");

I get these
Hi!
You're welcome, Navin. I'm also interested in the solution. Can you please
share your remarks (and some code :)) after the implementation?
Thanks.
Regards,
Florin
On Mon, May 9, 2016 at 7:20 AM, Navin Ipe
wrote:
@Matthias: That's genius! I didn't know streams and allGroupings could be
used like that.
Just as Storm introduced tick tuples, it would have been nice if Storm had a
native technique for doing all this, but the ideas you've come up with are
extremely good. Am going to try implementing them right away.
Alternative is to use a control message on a separate stream that goes to
all bolt tasks using all grouping.
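As a sketch, the wiring for such a control stream might look like the
following (the stream id "signals", the component names, and the stub
spout/bolt bodies are all assumptions, not from this thread):

```java
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;

public class ControlStreamWiring {

    // Minimal stand-ins for the spout/bolt discussed in the thread.
    static class MySpout extends BaseRichSpout {
        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) { }
        public void nextTuple() { /* emit data; emit a "signals" tuple at end of input */ }
        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("value"));                 // default data stream
            d.declareStream("signals", new Fields("cmd"));  // separate control stream
        }
    }

    static class DbBolt extends BaseBasicBolt {
        public void execute(Tuple t, BasicOutputCollector c) {
            if ("signals".equals(t.getSourceStreamId())) {
                // control tuple: every task sees it because of allGrouping
            }
        }
        public void declareOutputFields(OutputFieldsDeclarer d) { }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new MySpout());
        builder.setBolt("DbBolt", new DbBolt(), 4)
               .shuffleGrouping("spout")            // regular data tuples
               .allGrouping("spout", "signals");    // control tuples go to all 4 tasks
    }
}
```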
On May 8, 2016 3:20 PM, "Matthias J. Sax" wrote:
To synchronize this, use an additional "shut down bolt" that uses a
parallelism of one. The "shut down bolt" must be notified by all parallel
DbBolts after they have performed the flush. Once all notifications are
received, there are no in-flight messages, and thus the "shut down bolt"
can kill the topology safely.
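A hedged sketch of such a "shut down bolt" (the component id "DbBolt", the
topology name "myStorm", and the nimbus.seeds workaround are taken from this
thread; the notification counting is an assumption):

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.utils.NimbusClient;
import org.apache.storm.utils.Utils;

// Run with parallelism 1; wire it so every DbBolt task sends exactly one
// notification tuple after it has flushed.
public class ShutDownBolt extends BaseRichBolt {
    private OutputCollector collector;
    private int expected;   // number of upstream DbBolt tasks
    private int received;   // flush notifications seen so far

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.expected = context.getComponentTasks("DbBolt").size();
        this.received = 0;
    }

    @Override
    public void execute(Tuple tuple) {
        collector.ack(tuple);
        if (++received == expected) {
            // every DbBolt has flushed -> no in-flight messages remain
            try {
                Map conf = Utils.readStormConfig();
                conf.put("nimbus.seeds", "localhost");   // as found earlier in the thread
                NimbusClient.getConfiguredClient(conf)
                            .getClient().killTopology("myStorm");
            } catch (Exception e) {
                throw new RuntimeException("failed to kill topology", e);
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}
```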
Hi!
There is the solution of sending a poison-pill message from the spout.
One bolt will receive your poison pill and will kill the topology via the
Storm Nimbus API. One potential issue with this approach is that, due to
your topology structure regarding the parallelism of your bolts and the time
You can get the number of bolt instances from TopologyContext that is
provided in Bolt.prepare()
Furthermore, you could put a loop into your topology, ie, a bolt reads
its own output; if you broadcast (ie, allGrouping) this
feedback-loop-stream, you can let bolt instances talk to each other.
buil
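Matthias's TopologyContext tip can be sketched like this (the class name is
made up; the two TopologyContext calls are the point of the example):

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class ParallelismAwareBolt extends BaseRichBolt {
    private int numTasks;   // how many instances of this bolt exist
    private int taskIndex;  // which one this is (0 .. numTasks-1)

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        // all task ids assigned to this component = its parallelism
        numTasks = context.getComponentTasks(context.getThisComponentId()).size();
        // this task's position within that list
        taskIndex = context.getThisTaskIndex();
    }

    @Override
    public void execute(Tuple tuple) { /* ... */ }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}
```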
@Matthias: I agree about the batch processor, but my superior took the
decision to use Storm, and he visualizes more complexity later for which he
needs Storm.
I had considered the "end of stream" tuple earlier (my idea was to emit 10
consecutive nulls), but then the question was how do I know how
You might want to check out Storm Signals.
https://github.com/ptgoetz/storm-signals
It might give you what you're looking for.
On Sat, May 7, 2016, 11:59 AM Matthias J. Sax wrote:
As you mentioned already: Storm is designed to run topologies forever ;)
If you have finite data, why do you not use a batch processor???
As a workaround, you can embed "control messages" in your stream (or use
an additional stream for them).
If you want a topology to shut down itself, you could
Hi,
I know Storm is designed to run forever. I also know about Trident's
technique of aggregation. But shouldn't Storm have a way to let bolts know
that a certain bunch of processing has been completed?
Consider this topology:
Spout-->Bolt-A-->Bolt-B
| |--->Bo