Have you tried using option 2 with a virtual environment? I think that
should allow you to use Python C extensions, although your cluster nodes
will need to have the same architecture and Python version...
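To check that a worker node matches the environment where the virtualenv was built, here is a minimal sketch (the function name is illustrative, not part of any Storm API):

```python
import platform
import sys

def env_fingerprint():
    """Return the interpreter/architecture pair that must match across
    all supervisor nodes for C extensions in a shared virtualenv to load."""
    return {
        "python": "%d.%d" % sys.version_info[:2],
        "arch": platform.machine(),
    }

# Compare this dict across nodes before deploying the topology.
print(env_fingerprint())
```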
On 17 December 2015 at 15:06, gzc <18810513...@126.com> wrote:
> I write spouts and bolts mai
Hi,
Nimbus thinks that the worker where that bolt/spout runs is stalled or has
died and kills it, rebalancing the cluster to recover from it.
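One common mitigation for that is to split large payload processing into chunks, so the component returns to its read loop often enough to keep heartbeating before the worker timeout fires. A minimal sketch (function name and chunk size are illustrative):

```python
def process_in_chunks(items, chunk_size=100):
    """Split a large payload into chunks so the component can yield back
    to its read loop between chunks, instead of blocking long enough
    for Nimbus to declare the worker dead."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]
```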
We are hitting exactly that behavior with a spout I implemented that is
affected by the size of the data it handles, and I'm now rewriting it so it
doesn't depend on ou
ow. I
>> have not used storm multi-lang though to be honest.
>>
>> Regards.
>>
>> On Fri, May 29, 2015 at 2:33 PM, Carlos Perelló Marín <
>> car...@serverdensity.com> wrote:
>>
>>> Found the problem... I'm not serializing the json object so
f various
>> lengths around 75KB.
>>
>> Thank you for your time!
>>
>> +
>> Jeff Maass
>> linkedin.com/in/jeffmaass
>> stackoverflow.com/users/373418/maassql
>> +
>>
>>
>> On Thu, May 28, 2015 at 2
Hi,
While working with Apache Storm 0.9.4 with Python + multilang, I found that
one tuple was hanging the topology. It took me a while to figure out what
was going on and why it stopped processing payloads, until I found that the
hung bolt was blocked waiting for input on its stdin (it hangs calling
e
If the data is not too big, I guess you could use ZooKeeper; it's already
there and it's supposed to support exactly the use case you want to cover...
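Worth noting that "not too big" has a concrete meaning here: a ZooKeeper znode payload is limited to about 1 MB by default (jute.maxbuffer). A minimal sketch of the size check before handing the payload to a client such as kazoo (fits_in_znode is an illustrative name):

```python
import json

ZNODE_LIMIT = 1024 * 1024  # ZooKeeper's default jute.maxbuffer (~1 MB)

def fits_in_znode(state):
    """Serialize shared state and check it fits in a single znode before
    writing it through a ZooKeeper client (e.g. kazoo's create/set)."""
    payload = json.dumps(state).encode("utf-8")
    return len(payload) <= ZNODE_LIMIT, payload
```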
On 21 May 2015 at 19:13, applyhhj wrote:
> Very clear answer, I guess I need to find other ways. Thank you very
> much!!
>
> 2015-05-21
> --
ked messages is
> greater than the bolt's one.
>
> I need to study about Storm more but I am wondering what I am doing wrong
> here. I am using KafkaSpout.
>
> On Fri, Apr 17, 2015 at 2:22 AM, Carlos Perelló Marín <
> car...@serverdensity.com> wrote:
>
&
Hi Jae,
I think the setting you want is Config.TOPOLOGY_MAX_SPOUT_PENDING. Once you
set that config to a value, it caps the number of messages emitted by your
spout that are still being processed in your topology.
So if you set that to 100 in your previous scenario, the last bolt in
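For reference, in a plain config map that setting is the topology.max.spout.pending key; a sketch (the value 100 matches the example above):

```python
conf = {
    # Maximum tuples emitted by a spout task that may be pending
    # (emitted but not yet fully acked or failed) at once; only
    # enforced for tuples emitted with a message id.
    "topology.max.spout.pending": 100,
}
```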
Hi,
I'm having some problems with a Storm cluster where Nimbus rebalances the
topology too often because it thinks that some workers are down.
My setup is using apache-storm 0.9.3:
2 storm nodes running storm-supervisor (storm-1 and storm-2)
1 server running storm-nimbus and storm-ui
a
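If the workers are only slow rather than dead, the relevant timeouts can be loosened in storm.yaml. A hedged sketch of the usual knobs, shown here as a config map (values are illustrative; check defaults.yaml for your Storm version before changing them):

```python
conf = {
    # How long a supervisor waits for worker heartbeats before restarting it.
    "supervisor.worker.timeout.secs": 30,
    # How long Nimbus waits for task heartbeats before reassigning tasks.
    "nimbus.task.timeout.secs": 30,
    # How long Nimbus waits before considering a supervisor node gone.
    "nimbus.supervisor.timeout.secs": 60,
}
```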