We have several topologies that should run continuously without halting.
1. How can we release the next version of a topology without halting it?
In some cases, we run two copies of the same topology in a cluster, so that when
deploying the next version of the topology jar we can restart them one by one
without halting the entire service.
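A rolling upgrade along those lines can be sketched with the standard `storm` CLI. Everything here is illustrative: the topology names (`cdr-topo-a`, `cdr-topo-b`), the jar path, and the main class are made up for the example.

```shell
# Two copies of the topology are running: cdr-topo-a and cdr-topo-b.
# Upgrade one at a time so the other keeps processing.

# 1. Kill the first copy, waiting 30s so in-flight tuples can drain.
storm kill cdr-topo-a -w 30

# 2. Resubmit it from the new jar.
storm jar target/topology-0.9.5.jar com.example.CdrTopology cdr-topo-a

# 3. Repeat for the second copy once the first is healthy again.
storm kill cdr-topo-b -w 30
storm jar target/topology-0.9.5.jar com.example.CdrTopology cdr-topo-b
```

The `-w` wait time should be at least the topology's message timeout so acked tuples are not cut off mid-flight.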
When I tried to clone the tag v0.9.3, it was actually 0.9.5-SNAPSHOT. So I
downloaded the 0.9.3 source, manually applied the change, and built and deployed
it. But now that 0.9.5 is out, we should be using 0.9.5 rather than the patch.
Thanks for all the help.
From: Srividhya Shanmugam [mailto:srividhyas
The Apache Storm community is pleased to announce the release of Apache Storm
version 0.9.5.
Storm is a distributed, fault-tolerant, and high-performance realtime
computation system that provides strong guarantees on the processing of data.
You can read more about Storm on the project website:
One correction: my message size is about 3 KB each.
I did another round for comparison. I disabled acking altogether, and the
throughput is still only slightly better, at 12k tuples/s. So I used Kafka's
console consumer from one of the cluster nodes (a different one from where the
partition is located) in order
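For reference, a raw-throughput check with Kafka's console consumer (the 0.8-era ZooKeeper-based CLI; the ZooKeeper host and topic name below are placeholders, not taken from the thread) might look like:

```shell
# Consume the topic directly, bypassing Storm entirely, to measure the
# broker-to-consumer throughput the spout could achieve at best.
bin/kafka-console-consumer.sh \
  --zookeeper zkhost:2181 \
  --topic cdr-events \
  --from-beginning > /dev/null
```

Redirecting to /dev/null and timing the run gives an upper bound to compare the topology's 12k tuples/s against.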
Try the code below; it will print the count per batch.
TridentTopology topology = new TridentTopology();
topology.newStream("cdrevent", new CSVSpout("testdata.csv", ',', false))
        .partitionBy(new Fields("field_1"))
        .groupBy(new Fields("field_1"))
        .aggregate(new Fields("field_1"), new Count(), new Fields("count"));
Hi All,
I would like to know how many tuples are processed in a batch... is there a
way to do so?
Maybe something I can code in the execute() method and print to the log?
Regards,
Nitin Kumar Sharma.
Hi,
You are looping within "nextTuple()" to emit a tuple for each line of the
whole file. This is bad practice because the spout is prevented from
processing "acks" while "nextTuple()" is executing. I guess that is the
reason why your tuples time out and fail.
You should return from "nextTuple()" after emitting a single tuple.
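As a sketch of that advice (the class name, package, and file name are made up for illustration, and this assumes the 0.9.x `backtype.storm` API used elsewhere in this thread): open the file once in `open()`, then emit at most one line per `nextTuple()` call so the executor can interleave ack handling.

```java
package tuc.LSH.storm.spouts; // hypothetical package, matching the thread

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Map;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class LineSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private BufferedReader reader;
    private long msgId = 0;

    @Override
    public void open(Map conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector;
        try {
            this.reader = new BufferedReader(new FileReader("testdata.csv"));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void nextTuple() {
        try {
            String line = reader.readLine();
            if (line != null) {
                // Emit exactly one tuple per call, anchored with a message id
                // so a failed tuple can be identified later in fail().
                collector.emit(new Values(line), msgId++);
            }
            // When the file is exhausted we simply emit nothing; Storm keeps
            // calling nextTuple(), which is fine for a one-shot file read.
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("line"));
    }
}
```

Returning after one emit keeps the spout responsive to acks and fails, which is exactly what the loop-in-`nextTuple()` version prevents.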
I am using a spout to read data from a text file and send the lines to a bolt.
However, I am losing too many values; they fail.
I am a newbie to Storm and I don't know where to look to debug this issue. Any
ideas?
My Storm Spout code is:
package tuc.LSH.storm.spouts;
import backtype.storm.spout.Spou
On 04/06/2015 14:47, Spico Florin wrote:
I hope that these help.
Yes thank you.
It seems that the policy is implemented as code (e.g. the name
"special-supervisor" is hard-coded).
Is there no framework by which some bolt X can declare that it must run
on a supervisor with metadata Y - or is Ja
Hi!
I had the same case as the one you mentioned. What I did:
1. Create a scheduler class (see the attached file)
2. On the Nimbus node, in the $STORM_HOME/conf/storm.yaml add the following
lines
storm.scheduler: "NetworkScheduler"
supervisor.scheduler.meta:
  name: "special-supervisor"
3. On t
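The attached scheduler file is not reproduced in the archive. A minimal sketch of what such a class might look like, assuming the standard `backtype.storm.scheduler.IScheduler` interface (the package name is hypothetical, and the assignment logic is only outlined):

```java
package com.example.scheduler; // hypothetical package

import java.util.Map;

import backtype.storm.scheduler.Cluster;
import backtype.storm.scheduler.EvenScheduler;
import backtype.storm.scheduler.IScheduler;
import backtype.storm.scheduler.SupervisorDetails;
import backtype.storm.scheduler.Topologies;

public class NetworkScheduler implements IScheduler {

    @Override
    public void prepare(Map conf) {
        // No configuration needed for this sketch.
    }

    @Override
    public void schedule(Topologies topologies, Cluster cluster) {
        // Find the supervisor whose storm.yaml declared
        //   supervisor.scheduler.meta:
        //     name: "special-supervisor"
        for (SupervisorDetails supervisor : cluster.getSupervisors().values()) {
            Map meta = (Map) supervisor.getSchedulerMeta();
            if (meta != null && "special-supervisor".equals(meta.get("name"))) {
                // ... assign the executors of the pinned component to this
                // supervisor's free worker slots via cluster.assign(...) ...
            }
        }
        // Fall back to the default even scheduler for everything else.
        new EvenScheduler().schedule(topologies, cluster);
    }
}
```

As noted upthread, the policy (the "special-supervisor" name) ends up hard-coded in the scheduler class; the supervisor metadata itself comes from each supervisor's storm.yaml.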
+1 for dropping 1.6 support in 0.10.0.
Mark
On 2 June 2015 at 01:10, Javier Gonzalez wrote:
> No objection here. I work at a big company where upgrades move fast like
> glaciers ;) and even we are up to java7.
>
> Regards,
> JG
>
> On Mon, Jun 1, 2015 at 2:37 PM, P. Taylor Goetz wrote:
>
>> C
Yes, sure. This is the pastebin link: http://pastebin.com/fSzmcceC
I had the same problem with the Python implementation (splitSentence.py), so I
don't think it is an implementation problem.
From: 임정택
Sent: Wednesday, 3 June 2015, 22:32
To: user@storm.apache.