Nikos,
Thanks for your reply.
I tried many values of max.spout.pending, from low to high, but I am not
getting throughput above 7000/s.
I also tried tuning the internal buffers, but saw no improvement.
Have you run any benchmark against the Storm KafkaSpout? What throughput did you get?
Could you share the code so that I can run the same test?
Thanks, I got it. I restarted again and the right number was picked up,
but from the code.
Thanks a lot.
On Sun, Apr 17, 2016 at 4:34 PM, Matthias J. Sax wrote:
> Do you use LocalCluster? For this, storm.yaml is ignored... If you want
> to set JVM arguments, you need to do this
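The quoted reply is cut off above, but since LocalCluster runs the whole topology inside the current JVM, JVM options would have to go on the command that starts that JVM rather than into storm.yaml. A sketch of what that might look like (the jar path and class name are hypothetical):

```shell
# LocalCluster hosts the topology in-process, so JVM arguments are passed
# to the JVM that launches it, not configured on the cluster nodes:
java -Xmx2g -Xms512m -cp target/my-topology.jar com.example.MyLocalTopology
```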
Hello!
I would like to know how Storm manages its internal buffer queues when
using collector.emit(streamId, valuesToEmit).
For example, consider the topology:
1. Spout -> ProcessingBolt
2. spout.collector.emit(streamId1, tupleValues1)
   spout.collector.emit(streamId2, tupleValues2)
Q. How the
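For context, here is a sketch of the two-stream spout described above (stream names, field names, and the class name are illustrative; package names assume Storm 1.x, while older releases use backtype.storm instead of org.apache.storm). As I understand Storm's internals, streams are a logical routing concept: tuples emitted to either stream from the same task pass through that executor's single send queue, so per-stream emits do not get separate physical buffers.

```java
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

import java.util.Map;

public class TwoStreamSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;

    @Override
    public void open(Map conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // One emit per stream; both tuples share the same executor
        // send queue despite having different stream ids.
        collector.emit("streamId1", new Values("tupleValues1"));
        collector.emit("streamId2", new Values("tupleValues2"));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Every stream emitted to must be declared up front.
        declarer.declareStream("streamId1", new Fields("value"));
        declarer.declareStream("streamId2", new Fields("value"));
    }
}
```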
I restarted, but it still does not work.
On Sun, Apr 17, 2016 at 3:21 AM, Erik Weathers
wrote:
> No, I don't think there's any facility for the storm daemons to re-read
> their storm.yaml config. Just restart it...
>
> On Sat, Apr 16, 2016 at 6:10 PM, sam mohel
What type of spout is it? How many spout tasks do you have?
max.spout.pending seems pretty high, so it's possible the tuples could be
timing out in the queue, and if the spout isn't reliable, or if acking is
disabled, they will be discarded.
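To make that concrete: max.spout.pending only throttles a spout that emits tuples with message ids, and only when ackers are running, since the cap counts un-acked tuples in flight. A hedged storm.yaml sketch (the values are illustrative, not recommendations):

```yaml
# Illustrative values only -- tune for your own topology.
topology.max.spout.pending: 1000    # cap on un-acked tuples per spout task
topology.message.timeout.secs: 30   # tuples un-acked after this are failed and replayed
topology.acker.executors: 1         # acking must be enabled for the cap to apply
```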
On Friday, April 15, 2016, wrote:
Which "storm.yaml" file did you change? Each supervisor has its own
copy of it (at least usually; or do you have NFS so that storm.yaml is
shared across all nodes in the cluster?), and you need to change it on all
of them.
Not sure why the value of your Config object did not get picked up...
Did