What is your Jedis pool configuration?
My guess is that there may be more than one Jedis pool in your topology.
On July 22, 2014, at 1:22 PM, 이승진 wrote:
>
> java.lang.RuntimeException: redis.clients.jedis.exceptions.JedisException:
> Could not return the resource to the pool
> at
> backtype.storm.utils.Di
Should I implement this List as thread-safe or not?
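If the List really is shared across concurrently running tasks, the JDK already provides thread-safe options; a minimal sketch (the helper class name `ThreadSafeLists` is illustrative, not from the thread):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative helper showing two standard thread-safe List choices in the JDK.
public class ThreadSafeLists {

    // Wraps a plain ArrayList so every method call is synchronized;
    // iteration still requires external synchronization on the wrapper.
    public static List<String> synchronizedList() {
        return Collections.synchronizedList(new ArrayList<String>());
    }

    // Copy-on-write list: reads are lock-free, each write copies the backing
    // array, so it suits read-heavy workloads with few writes.
    public static List<String> copyOnWriteList() {
        return new CopyOnWriteArrayList<String>();
    }
}
```

If each task has its own List instance (one per executor), a plain ArrayList is fine; thread safety only matters when an instance is actually shared between threads.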
2014-07-31
唐思成
> on the same machine with access to the same deployment folder.
> If not, make sure you have deleted all traces from both machines.
>
> Itai
>
> From: 唐思成
> Sent: Friday, July 18, 2014 9:41 AM
> To: user
> Subject: Re: Re: storm upgrade issue
>
pretty sure that I run the same version on the worker node as I do on the master node
2014-07-18
唐思成
From: Harsha
Sent: 2014-07-18 13:21:08
To: user
Cc:
Subject: Re: storm upgrade issue
Does your worker node also have the same Storm version installed? Make sure your
older STORM_HOME is not in
think very thoroughly. I
want to put this implementation into production, so there is still a lot of work to
be done; any suggestions and ideas are truly welcome.
2014-07-18
唐思成
From: Sam Goodwin
Sent: 2014-07-17 05:45:03
To: user@storm.incubator.apache.org
Cc:
Subject: Re: How to implement d
The steps I took are listed below:
1. kill -9 all Storm processes
2. remove the storm directory in ZooKeeper
3. change the Storm local dir
4. start nimbus and the UI (this is fine)
5. start the supervisor on a worker node (the nimbus goes down)
2014-07-18
唐思成
From: Itai Frenkel
Sent: 2014-07-18 00:16:21
To:
Hi all:
I tried to upgrade Storm from 0.9.1 to 0.9.2-incubating, and when the worker-node
supervisor starts up, the nimbus process goes down. Here is what nimbus.log
says:
Before the upgrade, I had already changed storm.local.dir to a new location and removed
the storm node in ZooKeeper using zkCli.sh; however
唐思成
From: Josh J
Sent: 2014-07-16 21:41:02
To: user
Cc:
Subject: Scaling Storm Trident by adding additional nodes (processes)
Hi,
I have read over the docs here and this StackOverflow answer, but I'm still
not clear on how to scale by adding additional physical machines and processes.
thx, I will try
2014-07-16
唐思成
From: David DIDIER
Sent: 2014-07-16 17:15:32
To: user
Cc:
Subject: Re: how to run a trident topology with a local drpc
I've had the same problem. Here's how I solved it:
ILocalDRPC drpcServer = new LocalDRPC();
TridentTopology topo
https://gist.github.com/mrflip/5958028#provisionings
Max-pending (TOPOLOGY_MAX_SPOUT_PENDING) sets the number of tuple trees live in
the system at any one time.
maybe this is useful for you
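As a concrete illustration, the setting goes into the topology configuration; the value 1000 below is just an example, not from the thread:

```
# Limits the number of tuple trees in flight at once;
# tune per topology (1000 is an illustrative value, not a recommendation)
topology.max.spout.pending: 1000
```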
2014-07-15
唐思成
From: Raphael Hsieh
Sent: 2014-07-15 05:54:28
To: user
Cc:
Subject: Re: Max
for (int i = 0; i < t.size(); i++) {
    sb.append(t.getString(i));
}
return sb.toString();
    }
}
However, the HashSet is stored in memory; when the data grows very large,
I think it will cause an OOM.
So is there a scalable solution?
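One common memory-bounded alternative to an exact HashSet is a Bloom filter: approximate membership with no false negatives and a tunable false-positive rate. A minimal sketch (the class and method names here are illustrative, not from the thread):

```java
import java.util.BitSet;

// Minimal Bloom filter: a fixed-memory alternative to a growing HashSet for
// deduplication. It may report false positives, but never false negatives.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public SimpleBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derives the i-th bit index from the item's hash code and a seed.
    private int index(String item, int seed) {
        int h = item.hashCode() * 31 + seed * 0x9e3779b9;
        return Math.floorMod(h, size);
    }

    public void add(String item) {
        for (int i = 0; i < hashes; i++) {
            bits.set(index(item, i));
        }
    }

    // True if the item was possibly added; false if it definitely was not.
    public boolean mightContain(String item) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(index(item, i))) {
                return false;
            }
        }
        return true;
    }
}
```

For production use, a tested implementation such as Guava's BloomFilter, or an external store like Redis for exact deduplication, would be the usual choices; this sketch just shows why the memory footprint stays constant.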
2014-07-14
唐思成
The UI has a metric called latency, which is how long a bolt takes to process a tuple.
On July 13, 2014, at 5:49 PM, Vladi Feigin wrote:
> Hi All,
>
> What's the recommended way to measure the average time a tuple spends in
> the topology until it is fully processed?
> We use Storm version 0.8.2 and have the topolog
I tried to build a trident topology as the official Trident tutorial describes:
http://storm.incubator.apache.org/documentation/Trident-tutorial.html
The code is simple, but I don't have a cluster, so I want to run this topology
on a local cluster with a local DRPC, but I don't know how. Any ideas?
I think it is ZeroMQ that has consumed so much memory.
2014-07-10
唐思成
From: Vladi Feigin
Sent: 2014-07-09 19:36:33
To: user
Cc:
Subject: Storm topology consumes 100% of memory
Hi,
Our topology consumes almost 100% of memory on the physical machines where it runs.
We have a heavy load (5K events per
Which version did you upgrade from?
I tried to upgrade from 0.9.1 to 0.9.2, and I encountered the same problem.
-- Original message --
From: "이승진";;
Sent: Tuesday, July 1, 2014, 2:10 PM
To: "user";
主题: RE: storm 0.9.2-incubating, nimbus is not launching.
Finally found the reason why.
kill -9 pid
2014-07-08
唐思成
From: jeff saremi
Sent: 2014-07-08 10:43:25
To: user@storm.incubator.apache.org
Cc:
Subject: How to bring a node offline?
What is the sequence or command to make a node unavailable, say for maintenance?
This is one of the supervisor nodes, of course.
thanks
$mk_timer$fn__2112.invoke(timer.clj:41)
[na:0.9.1-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:619) [na:na]
2014-07-07 19:38:45 b.s.util [INFO] Halting process: ("Error when processing an
event")
2014-07-08
唐思成
, you lose a worker node in your
cluster. It is always good practice to run nimbus and the supervisor under a
monitoring application.
2014-07-07
唐思成
From: jeff saremi
Sent: 2014-07-06 23:20:38
To: d...@storm.incubator.apache.org; user@storm.incubator.apache.org
Cc:
Subject: The role of supervisor in
>
> http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
>
> http://www.michael-noll.com/blog/2012/10/16/understanding-the-parallelism-of-a-storm-topology/
>
>
>
>
>
> On 2014-07-05 12:12 唐思成 wrote:
>
Everyone, do you have some best practices or patterns for tuning the Storm
cluster, like how to set the number of workers, the numbers of spouts and bolts,
etc.?
Any advice is welcome.
2014-07-05
唐思成
I think you should change the scope of storm-kafka; the Storm server only
provides storm-core.
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>0.9.2-incubating</version>
    <scope>compile</scope>
</dependency>
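By contrast, storm-core is supplied by the cluster at runtime, so in a Maven build it is typically marked `provided` so it is not bundled into the topology jar (a sketch of the usual convention, not quoted from the thread):

```
<!-- storm-core is already on the server's classpath, so keep it out of the fat jar -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.2-incubating</version>
    <scope>provided</scope>
</dependency>
```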
2014-07-01
唐思成
From: Josh J
Sent: 2014-07-01 16:06:26
To: user
Cc:
Subject: java.lang.ClassNotFoundException
Thanks for the advice. But what I really want is to find out why the supervisor is
down, so I think the Storm logs may have something I can use to track down
this problem.
2014-06-30
唐思成
From: 唐思成
Sent: 2014-06-30 12:13:46
To: user
Cc:
Subject: Re: Re: Is there a way to log the unexpected
Thanks for the advice. But what I really want is to find out why the supervisor is
down.
2014-06-30
唐思成
From: Deepak Sharma
Sent: 2014-06-30 11:47:21
To: user
Cc:
Subject: Re: Is there a way to log the unexpected shutdown of a worker node?
You can write a cronjob to check the status of your
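A sketch of what such a cron-based check might look like; the paths and log file are hypothetical, and in Storm 0.9.x the supervisor daemon class is backtype.storm.daemon.supervisor:

```
# Hypothetical crontab entry: every minute, restart the supervisor if its process is gone
* * * * * pgrep -f backtype.storm.daemon.supervisor > /dev/null || /opt/storm/bin/storm supervisor >> /var/log/storm/supervisor-restart.log 2>&1 &
```

In practice a process supervisor such as supervisord or monit is more robust than cron for keeping daemons alive.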
Hi everyone:
I have a cluster with one master node and two worker nodes. A topology had been running
on this cluster for quite a while; then a worker node shut down silently and
never restarted. I looked at nimbus.log and supervisor.log but found nothing
useful. Does anybody have an idea?
2014-06-30
唐思成
to the nimbus. But my problem here is that the topology had been running for hours
when this exception happened, so I think the connection may not be the reason.
Does anyone know why?
唐思成