> -Xss indicates the stack size at each thread level. Increasing it means
> that each thread effectively has more memory for its stack, and hence the
> overall memory requirement increases. I am not really sure how increasing
> the stack size allows for more thread creation. Will test this part and
> confirm back
>
> Thanks
> Kashyap
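To confirm the suspicion above: each Java thread reserves roughly -Xss worth of memory for its stack, so for a fixed memory budget a *larger* -Xss allows *fewer* threads, not more. A back-of-envelope sketch (the 4 GB budget here is an assumption purely for illustration):

```java
// Rough estimate: with a fixed memory budget available for thread stacks,
// the number of threads you can create is about budget / stackSize.
// So shrinking -Xss is what lets you create more threads.
public class StackBudget {
    static long maxThreads(long stackBudgetBytes, long xssBytes) {
        return stackBudgetBytes / xssBytes;
    }

    public static void main(String[] args) {
        long budget = 4L * 1024 * 1024 * 1024;              // assume ~4 GB left for stacks
        System.out.println(maxThreads(budget, 1024 * 1024)); // -Xss1m  -> 4096 threads
        System.out.println(maxThreads(budget, 256 * 1024));  // -Xss256k -> 16384 threads
    }
}
```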
> On Jan 7, 2016 02:58, "Anishek Agarwal" wrote:
>
>> Yeah, I have encountered the same error, and as you said it's because
>> there is a
encounter this error in your use case?
>
> Thanks
> Kashyap
> On Jan 6, 2016 02:32, "Anishek Agarwal" wrote:
>
>> Hey Kashyap,
>>
>> There seem to be a lot of threads created per bolt thread
>> within Storm for processing. For example, we have a
Hey Kashyap,
There seem to be a lot of threads created per bolt thread within Storm for
processing. For example, we have approx 100 parallelism per worker (with
all bolts in a topology) and we had to specify -Xmx to 4 GB -- internally
it looked like the process had about 3.5-4K threads.
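For what it's worth, the live-thread count of a worker can be checked without jstack; a trivial stdlib check (the number printed will of course differ per process):

```java
// Counts the live threads visible to this JVM, the same set jstack would
// dump. Run inside the worker (or any JVM) to sanity-check thread counts.
public class ThreadCount {
    public static void main(String[] args) {
        System.out.println(Thread.getAllStackTraces().size());
    }
}
```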
Hello,
I would like to understand how the tuples emitted by the executors move to
the worker's transfer queue. From what I understand,
topology.executor.send.buffer.size: 1024
is the number of tuples that, once in the executor's send queue, will be
transferred to the worker's send queue as a
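The batching idea being asked about can be sketched with a stdlib queue. This is not Storm's actual implementation (Storm uses Disruptor ring buffers internally); it is only a minimal illustration of draining up to buffer-size tuples from an executor's send queue in one batch, with made-up names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: an executor's send queue is drained in batches of up to
// topology.executor.send.buffer.size tuples; each batch is handed to the
// worker's transfer queue as one unit.
public class BatchForwarder {
    static <T> List<T> drainBatch(BlockingQueue<T> sendQueue, int bufferSize) {
        List<T> batch = new ArrayList<>(bufferSize);
        sendQueue.drainTo(batch, bufferSize); // take at most bufferSize tuples
        return batch;
    }

    public static void main(String[] args) {
        BlockingQueue<String> sendQueue = new ArrayBlockingQueue<>(2048);
        for (int i = 0; i < 1500; i++) sendQueue.add("tuple-" + i);
        List<String> batch = drainBatch(sendQueue, 1024);
        System.out.println(batch.size());     // 1024 tuples forwarded in one batch
        System.out.println(sendQueue.size()); // 476 left for the next drain
    }
}
```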
Hello,
I have a Kafka topic with keyed messages being sent into it by a producer.
I want to write a topology where I get the key as well as the value in the
Storm topology. I have configured the Kafka spout to use the following
scheme:
*new KeyValueSchemeAsMultiScheme(new StringKeyValueScheme())*
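As far as I can tell from the storm-kafka source (worth double-checking against your version), StringKeyValueScheme packs each keyed message into a single-entry map that arrives as the first tuple field, so a bolt reads it back with something like `tuple.getValue(0)`. A stdlib-only sketch of that shape, with names assumed:

```java
import java.util.Map;
import static java.util.Collections.singletonMap;

// Illustration only: mimics how a key/value scheme might pack a keyed
// Kafka message into one tuple field holding a single-entry map.
public class KeyValueSchemeDemo {
    static Map<String, String> deserializeKeyAndValue(String key, String value) {
        return singletonMap(key, value);
    }

    public static void main(String[] args) {
        Map<String, String> kv = deserializeKeyAndValue("user-42", "clicked");
        System.out.println(kv); // {user-42=clicked}
    }
}
```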
From what I have found, don't remove/add appenders in cluster.xml. Not sure
why it works for a few hours and then stops. Any specific way you are
registering log metrics? Is it possible to put some code out?
On Tue, Sep 29, 2015 at 9:14 PM, Stepan Urban wrote:
> Hi,
> I am using LoggingMet
Hello,
I have added a custom appender to /opt/storm/logback/cluster.xml with these
values (the surrounding XML tags were lost in the archive):

${storm.log.dir}/anishek.log
${storm.log.dir}/anishek.log.%i
1
100
100MB
%d{-MM-dd'T'HH:
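Based on the values above, this looks like a standard logback RollingFileAppender with a fixed-window rolling policy and a size-based trigger. A hedged reconstruction; the appender name, the class names, and which index is min vs max are assumptions on my part, and the encoder pattern stays truncated as it was in the message:

```xml
<appender name="ANISHEK" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${storm.log.dir}/anishek.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <fileNamePattern>${storm.log.dir}/anishek.log.%i</fileNamePattern>
    <minIndex>1</minIndex>
    <maxIndex>100</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>100MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
    <pattern>%d{-MM-dd'T'HH:</pattern>
  </encoder>
</appender>
```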
Just to understand: you have a producer in Python sending messages to
Kafka? If yes, I think each of the values "test", "val1", "val2" forms a
separate message and hence would come into Storm as a separate tuple. If
you want to send them as a single value, maybe send them in an array:
producer.send_
Hello,
The way Storm behaves when I am using the isolation scheduler seems strange
to me. I have 6 machines, and I have configured two topologies (a, b) in
storm.yaml to use 2 machines each, so 2 machines will be used by all the
other topologies.
I have "a" already running and along with it anot
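For reference, the setup described above would look roughly like this in storm.yaml. The machine counts come from the message; the scheduler class path is an assumption that depends on the Storm version (backtype.storm.* in 0.x, org.apache.storm.* in 1.x):

```yaml
# Enable the isolation scheduler (class prefix varies by Storm version).
storm.scheduler: "backtype.storm.scheduler.IsolationScheduler"
# Dedicate 2 machines each to topologies "a" and "b"; the remaining
# machines serve all other topologies.
isolation.scheduler.machines:
    "a": 2
    "b": 2
```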
As you mentioned, if processing of a single tuple at the 5th bolt itself is
going to take long, increasing the parallelism won't help, unless you can
break the operation the 5th bolt does for a tuple so that you export small
files in each operation; then you can add one additional bolt between the
4th and 5
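The splitting suggestion can be sketched with plain collections. This is a hypothetical illustration (names made up): an intermediate bolt would partition the work into chunks and emit one tuple per chunk, so the heavy downstream bolt's parallelism actually gets used:

```java
import java.util.ArrayList;
import java.util.List;

// Partition a batch of records into fixed-size chunks; in a topology each
// chunk would be emitted as its own tuple to the heavy export bolt.
public class ChunkSplitter {
    static List<List<Integer>> split(List<Integer> records, int chunkSize) {
        List<List<Integer>> chunks = new ArrayList<>();
        for (int i = 0; i < records.size(); i += chunkSize) {
            chunks.add(records.subList(i, Math.min(i + chunkSize, records.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 10; i++) records.add(i);
        System.out.println(split(records, 4).size()); // 3 chunks: 4 + 4 + 2
    }
}
```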
Hello,
From http://stackoverflow.com/questions/24413088/storm-max-spout-pending
it looks like max spout pending is the maximum number of messages that are
within the topology at any point in time.
From
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
the variou
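The throttling behavior described above can be sketched as a simple gate; this is not Storm's real code, just the idea that a spout only emits while the number of un-acked tuples in flight stays below topology.max.spout.pending, and acks free up slots:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of max-spout-pending throttling: tryEmit() reserves a
// slot only while in-flight tuples are under the cap; ack() releases one.
public class PendingGate {
    final AtomicInteger pending = new AtomicInteger();
    final int maxPending;

    PendingGate(int maxPending) { this.maxPending = maxPending; }

    boolean tryEmit() {
        while (true) {
            int p = pending.get();
            if (p >= maxPending) return false;          // cap reached: hold the spout
            if (pending.compareAndSet(p, p + 1)) return true;
        }
    }

    void ack() { pending.decrementAndGet(); }           // tuple left the topology

    public static void main(String[] args) {
        PendingGate gate = new PendingGate(2);
        System.out.println(gate.tryEmit()); // true
        System.out.println(gate.tryEmit()); // true
        System.out.println(gate.tryEmit()); // false: 2 tuples already in flight
        gate.ack();
        System.out.println(gate.tryEmit()); // true again after an ack
    }
}
```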