Re: Storm trident, multiple workers nothing happens

2014-06-18 Thread Michael Rose
In a single worker, you don't incur serialization or network overhead.

Michael Rose (@Xorlev https://twitter.com/xorlev)
Senior Platform Engineer, FullContact http://www.fullcontact.com/
mich...@fullcontact.com


On Tue, Jun 17, 2014 at 11:09 PM, Romain Leroux leroux@gmail.com
wrote:

 Still, Netty performance was clearly better for me (when using only 1
 worker, since multiple workers didn't work with Netty)


 2014-06-18 1:33 GMT+09:00 Danijel Schiavuzzi dani...@schiavuzzi.com:

 I had a similar problem: with Netty, my Trident transactional topology was
 getting stuck after several occurrences of the Kafka spout restarting due to
 Kafka SocketTimeouts (I can reproduce this bug by blocking access to Kafka
 from the Supervisor machines with iptables; only a few tries are needed to
 reproduce it). I reverted to ZMQ and now it works flawlessly.

 I'll prepare a reproducible test case and file a JIRA bug report ASAP.
  On Jun 17, 2014 3:51 PM, Romain Leroux leroux@gmail.com wrote:

 As I read in other threads, here too simply switching back to ZeroMQ
 solved the issue ...


 2014-06-13 21:58 GMT+09:00 Romain Leroux leroux@gmail.com:

 After tuning a trident topology (kafka-storm-cassandra) to run on 1
 worker (so on 1 server), it works really well.

 I tried to deploy it using 2 workers on 1 server or 2 workers on 2
 servers.
 The result is the same, nothing happens, no tuples are emitted and no
 messages in the logs.

 A quick profiling showed me that:

 77% of CPU time is in main-SendThread(a.zookeeper.hostname:2181)
 org.apache.zookeeper.ClientCnxn$SendThread.run()
 sun.nio.ch.SelectorImpl.select()

 The rest mainly comes from two "New I/O" threads:
 org.jboss.netty.channel.socket.nio.SelectorUtil.select()
 sun.nio.ch.SelectorImpl.select()

 Therefore I am wondering if the problem could come from one of the
 following:

 - The Zookeeper cluster version is 3.4.6, which differs from the 3.3.x
 used by Storm 0.9.1-incubating?
 But that is strange, because there is absolutely no problem when using
 the same settings with only 1 worker.

 - The communication layer is Netty, which may not work well with my
 hardware? (Is this possible?)
 With only 1 worker, Netty seems not to be much involved (no
 inter-worker communication).
 Maybe change to ZeroMQ?

 Has anyone faced a similar issue? Any pointers? Or anything in
 particular to monitor / profile?






Not able to get all the messages from zookeeper

2014-06-18 Thread M.Tarkeshwar Rao
Hi all,

We are facing an issue in reading data from ZooKeeper. Whenever there is a
failure in processing on a bolt, Storm writes it to ZooKeeper (by raising
ReportedFailedException). We are using Trident, with some code changes in the
Trident code base.

We have batches of 7 tuples (filenames); we process the data of a file in
bolts.

We have written a read API for ZooKeeper which parses the znode data and
returns the list of failures.

In some cases we get 0 failures even though there were failures in the
processing.

We tested our read API extensively; there are no issues in it.

There are no issues on a local cluster, but we face this problem on a
multi-node cluster.

Any inputs ?

Regards
Tarkeshwar


Can I not use tuple timeout?

2014-06-18 Thread 이승진
Dear all,
 
AFAIK, the spout 'assumes' processing of a tuple has failed when it does not
receive an ack within the timeout interval.

But I found that even if the spout's fail method is called, that tuple is
still running through the topology, i.e. other bolts are still processing it.

So actually the failed tuple is not failed, but just delayed.

I know I can configure a bigger timeout value, but I want to know if there's
a way to avoid using the spout's timeout.

Sincerely
 






Re: Can I not use tuple timeout?

2014-06-18 Thread jamesweb3
Hi,
You are correct. You can set a longer timeout via the config. However, if you
don't want the ack feature at all, you can set the number of ackers to 0. That
way your topology runs in unreliable mode; in other words, none of your
tuples will be tracked (so they are never failed or replayed).
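A minimal sketch of both options, using Storm's raw configuration keys (these correspond to Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS and Config.TOPOLOGY_ACKER_EXECUTORS in 0.9.x; with the backtype.storm.Config helper you would call setMessageTimeoutSecs() and setNumAckers() instead — verify the keys against your Storm version):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the two options above; the key strings are Storm's
 *  standard config keys, but check them against your Storm version. */
public class TimeoutConfig {
    public static Map<String, Object> build(int timeoutSecs, boolean reliable) {
        Map<String, Object> conf = new HashMap<String, Object>();
        // Option 1: lengthen the tuple timeout (the default is 30 seconds).
        conf.put("topology.message.timeout.secs", timeoutSecs);
        if (!reliable) {
            // Option 2: zero ackers -- tuples are never tracked, so the
            // spout's fail() is never called and nothing is replayed.
            conf.put("topology.acker.executors", 0);
        }
        return conf;
    }
}
```

Note that with zero ackers you lose replay on genuine failures too, not just the timeout behavior.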

Best regards,
James Fu




Re: How to update a running Storm Topology

2014-06-18 Thread Nathan Leung
If you kill a topology in the UI, you will notice that it sometimes takes
a while for it to clear and go away. If you try to reload the topology
during this time, you will get the same exception. You should loop, checking
the Nimbus for this topology after you kill it, and only reload after you
detect that it has gone away.
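A sketch of that wait loop, with the Nimbus lookup abstracted behind a small interface so it compiles standalone (in a real client, isPresent() would call client.getClient().getClusterInfo() and scan get_topologies(), as in the quoted code below):

```java
import java.util.concurrent.TimeUnit;

/** Kill-then-wait sketch. The Nimbus lookup is abstracted so this is
 *  self-contained; a real implementation would back isPresent() with
 *  getClusterInfo() from the Storm 0.9.x Thrift client. */
public class TopologyWaiter {
    public interface PresenceCheck {
        boolean isPresent(String name) throws Exception;
    }

    /** Polls until the topology disappears or maxAttempts is exhausted.
     *  Returns true when the name is gone and resubmission is safe. */
    public static boolean awaitRemoval(PresenceCheck check, String name,
                                       int maxAttempts, long sleepMs)
            throws Exception {
        for (int i = 0; i < maxAttempts; i++) {
            if (!check.isPresent(name)) {
                return true;            // gone -- safe to resubmit
            }
            TimeUnit.MILLISECONDS.sleep(sleepMs);
        }
        return false;                   // still registered; give up
    }
}
```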
On Jun 18, 2014 3:25 AM, Nishu nishuta...@gmail.com wrote:

 Hi,

 I need to update a running topology on some user action.
 So I have used following way in the topology : Killing the running
 topology and again starting it

     NimbusClient client = NimbusClient.getConfiguredClient(conf);
     try {
         ClusterSummary summary = client.getClient().getClusterInfo();
         for (TopologySummary s : summary.get_topologies()) {
             if (s.get_name().equals(name)) {
                 client.killTopology(name);
                 return true;
             }
         }
         return false;
     }
     StormSubmitter.submitTopology(topologyName, conf,
             builder.createTopology());



 What happens here is: the topology is killed by the client, but when
 StormSubmitter submits the same topology, it gets an exception for a
 duplicate topology name, like below:

 Exception in thread "main" java.lang.RuntimeException: Topology with name
 `test_topology` already exists on cluster
 at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:89)
 at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:58)
 at com.cts.TestTopology.main(TestTopology.java:131)



 What should I do to overcome this problem? Or is there another way to
 update a running topology?

 Thanks,
 Nishu Tayal



Re: what does each field of storm UI mean?

2014-06-18 Thread Derek Dagit

Adrian,

If you hover over the title of the field, a pop-up should appear that
explains what it means.

--
Derek

On 6/17/14, 21:05, 이승진 wrote:

Dear storm users

I want to see the performance of each bolt and decide the degree of parallelism.
In the Storm UI there are several fields which are confusing, so I would be
glad if you could explain them.
Capacity (last 10m) - is this the average capacity per second of a single
executor over the last 10 minutes? For example, if Capacity is 1.2, does that
mean a single executor processed 1.2 messages per second on average?
Execute latency and Process latency - are these average values, or the values
for the last processed message? What is the difference between them? And what
is the difference between them and Capacity?

Sincerely,
Adrian SJ Lee





scaling trident spout

2014-06-18 Thread aka.fe2s
I'm implementing my own ITridentSpout and would like to make it scalable.
When I set parallelism hint to 3 with pipelining for my spout, I see the
following

[Thread-16-$spoutcoord-spout0] MyEmitter - initializeTransaction() txid 1
prevMetadata null currMetadata null generated metadata 9002537a
[Thread-26-spout0] MyEmitter - emitBatch() txId 1 attempt id 0
coordinatorMeta 9002537a this.hashCode() 45c8d2d9 emitted e5f7a9d3
[Thread-30-spout0] MyEmitter - emitBatch txId 1 attempt id 0
coordinatorMeta 9002537a this.hashCode() 38ac85a emitted 1f08bdab
[Thread-28-spout0] MyEmitter - emitBatch txId 1 attempt id 0
coordinatorMeta 9002537a this.hashCode() 58ed567b emitted ee35006c

It creates 3 emitter instances and 3 threads, as expected. But the coordinator
propagates the same tx id 1 to _all_ emitters, which doesn't make much sense,
since we should emit tuples for a given tx id only once. I was expecting to
get the following:

[Thread-16-$spoutcoord-spout0] MyEmitter - initializeTransaction() txid 1 ..
[Thread-26-spout0] MyEmitter - emitBatch() txId 1 ...
[Thread-16-$spoutcoord-spout0] MyEmitter - initializeTransaction() txid 2 ..
[Thread-30-spout0] MyEmitter - emitBatch txId 2 ...
[Thread-16-$spoutcoord-spout0] MyEmitter - initializeTransaction() txid 3 ..
[Thread-28-spout0] MyEmitter - emitBatch txId 3 ...

Is there any way to achieve this?


Re: using thrift api

2014-06-18 Thread Nathan Leung
A rough overview, since I don't know if I can share code:

1) Create a Thrift connection
2) Get a Nimbus.Client object (I will call this 'client')
3) Call client.getTopology(topology id) - returns a StormTopology, which I
will call 'topology'
4) Iterate the Map<String, SpoutSpec> returned by topology.get_spouts();
ignore any whose key starts with __ as these are system spouts
5) Iterate the Map<String, Bolt> returned by topology.get_bolts();
ignore any whose key starts with __
6) For each Bolt, call bolt.get_common().get_inputs().keySet(), which returns
a Set<GlobalStreamId>
7) For each 'input' in the Set<GlobalStreamId>, call input.get_componentId().
 Flag the component (spout or bolt) that is writing to this bolt.
8) At the end, you know that any bolt that is writing to another bolt has
been flagged and is not a leaf bolt.
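The flagging steps can be sketched as a pure function. Here boltInputs is assumed to map each non-system bolt id to the upstream component ids gathered from its GlobalStreamIds (i.e. input.get_componentId() for each key of bolt.get_common().get_inputs()); the class and parameter names are for illustration, not Storm API:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Sketch of the leaf-bolt detection described above: a bolt is a
 *  leaf iff no other bolt lists it as an input. */
public class LeafBolts {
    public static Set<String> findLeaves(Map<String, Set<String>> boltInputs) {
        Set<String> flagged = new HashSet<String>(); // components feeding some bolt
        for (Set<String> inputs : boltInputs.values()) {
            for (String componentId : inputs) {
                if (!componentId.startsWith("__")) { // skip system components
                    flagged.add(componentId);
                }
            }
        }
        Set<String> leaves = new HashSet<String>(boltInputs.keySet());
        leaves.removeAll(flagged); // a bolt feeding another bolt is not a leaf
        return leaves;
    }
}
```

For the Hello/World/World2 example in the quoted message, boltInputs would be {World -> {Hello}, World2 -> {World}}, leaving World2 as the only leaf.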


On Tue, Jun 17, 2014 at 6:31 PM, Babar Ismail baba...@microsoft.com wrote:

  Is there a way to figure out which bolt is at the end of the chain using
 the thrift api?



 For instance, I have this topology:

 TopologyBuilder builder = *new* TopologyBuilder();

 builder.setSpout(Hello, *new* HelloWorldSpout(), 1);

 builder.setBolt(World, *new* HelloWorldBolt(), 1).shuffleGrouping(
 Hello);

 builder.setBolt(World2, *new* HelloWorldBolt(), 1).shuffleGrouping(
 World);



 Is there a way to know that world2 is at the end of the chain?



Re: Trident Binned Aggregation Design Help

2014-06-18 Thread Adam Lewis
Do you have any insight into how DRPC plays into this? The groupBy bolt
boundary makes perfect sense, and I understand how that maps to some
collection of bolts that would process different groupings (depending on
parallelism). What stumps me are the cases where adding multiple DRPC
spouts to the topology seems to result in the whole thing being duplicated
for each spout. I can see that some extra tracking mechanisms get stood up to
track DRPC requests and match requests with responses, but I'm still not sure
why that wouldn't scale linearly with the number of DRPC spouts.




On Tue, Jun 17, 2014 at 8:05 PM, P. Taylor Goetz ptgo...@gmail.com wrote:

 Not at the moment but I will be adding that functionality (trident state)
 to the storm-hbase project very soon. Currently it only supports MapState.

 -Taylor

 On Jun 17, 2014, at 6:09 PM, Andrew Serff and...@serff.net wrote:

 I'm currently using Redis, but I'm by no means tied to it.  Are there any
 example of using either to do that?

 Andrew


 On Tue, Jun 17, 2014 at 3:51 PM, P. Taylor Goetz ptgo...@gmail.com
 wrote:

 Andrew/Adam,

 Partitioning operations like groupBy() form the bolt boundaries in
 Trident topologies, so the more you have, the more bolts you will have and
 thus, potentially, more network transfer.

 What backing store are you using for persistence? If you are using
 something with counter support, like HBase or Cassandra, you could leverage
 that in combination with Trident's exactly-once semantics to let it handle
 the counting, and potentially greatly reduce the complexity of your
 topology.

 -Taylor

 On Jun 17, 2014, at 5:15 PM, Adam Lewis m...@adamlewis.com wrote:

 I, too, am eagerly awaiting a reply from the list on this topic. I hit
 up against max topology size limits doing something similar with Trident.
 There are definitely linear changes to a Trident topology that result in
 quadratic growth of the compiled Storm topology size, such as adding DRPC
 spouts. Sadly, the compilation process from Trident to plain Storm remains
 somewhat opaque to me, and I haven't had time to dig deeper. My workaround
 has been to limit myself to one DRPC spout per topology and to
 programmatically build multiple topologies for the variations (which
 results in a lot of structural and functional duplication of deployed
 topologies, but at least no code duplication).

 Trident presents a seemingly nice abstraction, but from my point of view
 it is a leaky one if I need to understand the compilation process to know
 why adding a single DRPC spout doubles the topology size.



 On Tue, Jun 17, 2014 at 4:33 PM, Andrew Serff and...@serff.net wrote:

 Is there no one out there who can help with this? If I use this
 paradigm, my topology ends up having around 170 bolts. Then I add a DRPC
 stream and I have around 50 spouts. All of this adds up to a topology that I
 can't even submit because it's too large (and I've already bumped the Trident
 max to 50 MB...). It seems like I'm thinking about this wrong, but I
 haven't been able to come up with another way to do it. I don't really see
 how using vanilla Storm would help; maybe someone can offer some guidance?

 Thanks
 Andrew


 On Mon, Jun 9, 2014 at 11:45 AM, Andrew Serff and...@serff.net wrote:

 Hello,

 I'm new to using Trident and had a few questions about the best way to
 do things in this framework.  I'm trying to build a real-time streaming
 aggregation system and Trident seems to have a very easy framework to allow
 me to do that.  I have a basic setup working, but as I am adding more
 counters, the performance becomes very slow and eventually I start having
 many failures.  At the basic level here is what I want to do:

 Have an incoming stream that is using a KafkaSpout to read data from.
 I take the Kafka stream, parse it and output multiple fields.
 I then want many different counters for those fields in the data.

 For example, say it was the twitter stream.  I may want to count:
 - A counter for each username I come across.  So how many times I have
 received a tweet from each user
 - A counter for each hashtag so you know how many tweets mention a
 hashtag
 - Binned counters based on date for each tweet (i.e. how many tweets in
 2014, June 2014, June 08 2014, etc).

 The list could continue, but this can add up to hundreds of counters
 running in real time.  Right now I have something like the following:

 TridentTopology topology = new TridentTopology();
 KafkaSpout spout = new KafkaSpout(kafkaConfig);
 Stream stream = topology.newStream("messages", spout).shuffle()
         .each(new Fields("str"), new FieldEmitter(),
                 new Fields("username", "hashtag"));

 stream.groupBy(new Fields("username"))
         .persistentAggregate(stateFactory, new Fields("username"),
                 new Count(), new Fields("count"))
         .parallelismHint(6);
 stream.groupBy(new Fields("hashtag"))
         .persistentAggregate(stateFactory, new Fields("hashtag"),
                 new Count(), new Fields("count"))
 

v0.9.2-incubating and .ser files

2014-06-18 Thread Andrew Montalenti
I built the v0.9.2-incubating rc-3 locally and once verifying that it
worked for our topology, pushed it into our cluster. So far, so good.

One thing for the community to be aware of. If you try to upgrade an
existing v0.9.1-incubating or 0.8 cluster to v0.9.2-incubating, you may hit
exceptions upon nimbus/supervisor startup about stormcode.ser/stormconf.ser.

The issue is that the new cluster will try to re-submit the topologies that
were already running before the upgrade. These will fail because Storm's
Clojure version has been upgraded from 1.4 to 1.5, so the serialization
formats and IDs have changed. This would basically happen whenever the serial
version IDs change for any classes that happen to be in these .ser files
(stormconf.ser and stormcode.ser, as defined in Storm's internal config
https://github.com/apache/incubator-storm/blob/master/storm-core/src/clj/backtype/storm/config.clj#L143-L153
).

The solution is to clear out the storm data directories on your worker
nodes/nimbus nodes and restart the cluster.
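A hedged sketch of that cleanup. The directory names under storm.local.dir (nimbus/, supervisor/, workers/) match 0.9.x defaults, but the path itself is an assumption; read storm.local.dir from storm.yaml on your nodes, and stop the daemons before running this:

```shell
#!/bin/sh
# Clears Storm's on-disk state so a restarted cluster won't try to
# deserialize stormconf.ser/stormcode.ser written by an older version.
# /var/storm is only an example default -- check storm.yaml.
clear_storm_state() {
    dir="$1"
    rm -rf "$dir/nimbus" "$dir/supervisor" "$dir/workers"
}

clear_storm_state "${STORM_LOCAL_DIR:-/var/storm}"
```

Run it on every nimbus and supervisor node, then restart the daemons; any topologies that were running must be resubmitted afterwards.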

I have some open source tooling that submits topologies to the Nimbus using
StormSubmitter. This upgrade also made me realize that, due to the use of
serialized Java files
https://github.com/apache/incubator-storm/blob/master/storm-core/src/jvm/backtype/storm/utils/Utils.java#L73-L97,
it is very important that the StormSubmitter class used for submitting and the
running Storm cluster be precisely the same version / classpath. I describe
this in more detail in the GH issue here:

https://github.com/Parsely/streamparse/issues/27

I wonder if maybe it's worth it to consider using a less finicky
serialization format within Storm itself. Would that change be welcome as a
pull request?

It would make it easier to script Storm clusters without consideration for
client/server Storm version mismatches, which I presume was the original
reasoning behind putting Storm functionality behind a Thrift API anyway.
And it would prevent crashed topologies during minor Storm version upgrades.


error building storm on mac

2014-06-18 Thread Sa Li
Dear all,

I tried to install Storm on Mac by following this link:
http://ptgoetz.github.io/blog/2013/11/26/building-storm-on-osx-mavericks/

but I get this error:
lein sub install
Reading project from storm-console-logging
Created 
/workspace/tools/storm/storm-console-logging/target/storm-console-logging-0.9.1-incubating-SNAPSHOT.jar
Wrote /workspace/tools/storm/storm-console-logging/pom.xml
Installed jar and pom into local repo.
Reading project from storm-core
java.lang.Exception: Error loading storm-core/project.clj
 at leiningen.core.project$read$fn__4553.invoke (project.clj:827)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke (protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke (protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$partial_task$fn__4230.doInvoke (main.clj:234)
clojure.lang.RestFn.applyTo (RestFn.java:139)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$apply_task.invoke (main.clj:281)
leiningen.core.main$resolve_and_apply.invoke (main.clj:287)
leiningen.core.main$_main$fn__4295.invoke (main.clj:357)
leiningen.core.main$_main.doInvoke (main.clj:344)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:624)
clojure.main$main_opt.invoke (main.clj:315)
clojure.main$main.doInvoke (main.clj:420)
clojure.lang.RestFn.invoke (RestFn.java:457)
clojure.lang.Var.invoke (Var.java:394)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.Var.applyTo (Var.java:700)
clojure.main.main (main.java:37)
Caused by: clojure.lang.Compiler$CompilerException: 
java.lang.IllegalArgumentException: Duplicate keys: :javac-options, 
compiling:(/workspace/tools/storm/storm-core/project.clj:17:62)
 at clojure.lang.Compiler.load (Compiler.java:7142)
clojure.lang.Compiler.loadFile (Compiler.java:7086)
clojure.lang.RT$3.invoke (RT.java:318)
leiningen.core.project$read$fn__4553.invoke (project.clj:825)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke (protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke (protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$partial_task$fn__4230.doInvoke (main.clj:234)
clojure.lang.RestFn.applyTo (RestFn.java:139)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$apply_task.invoke (main.clj:281)
leiningen.core.main$resolve_and_apply.invoke (main.clj:287)
leiningen.core.main$_main$fn__4295.invoke (main.clj:357)
leiningen.core.main$_main.doInvoke (main.clj:344)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.Var.invoke (Var.java:383)

Re: error building storm on mac

2014-06-18 Thread Harsha
Alec,

  That link talks about an older version of Storm. You can get the
latest code from github.com/apache/incubator-storm. Storm switched
to Maven for building; you can run mvn clean package under the
latest Storm dir to build.

-Harsha.






Re: error building storm on mac

2014-06-18 Thread Sa Li
Thanks, Harsha. I assume I could download a release version on my Mac,
say storm-0.9.0.1, which contains the jars in its root directory, so I
would not have to build. Is that correct?

cheers

Alec
 

Re: error building storm on mac

2014-06-18 Thread Harsha
Yes, you can grab the release packages and install them.





On Wed, Jun 18, 2014, at 04:38 PM, Sa Li wrote:

  Thanks, Harsha, I assume I could download the release version on my
  mac, say storm-0.9.0.1 which contains the jars in root directory,
  therefore I do not have to build, is it correct?

  cheers

  Alec

On Jun 18, 2014 4:32 PM, Harsha [1]st...@harsha.io wrote:

Alec,
  That link talks about older version of storm. You can get the
latest code from here [2]github.com/apache/incubator-storm. Storm
switched maven for building , you can run mvn clean package under
latest storm dir to build .
-Harsha.


On Wed, Jun 18, 2014, at 03:13 PM, Sa Li wrote:

Dear all



I tried to install Storm on my Mac by following this link:

[3]http://ptgoetz.github.io/blog/2013/11/26/building-storm-on-osx-mavericks/



but I get this error:

lein sub install
Reading project from storm-console-logging
Created
/workspace/tools/storm/storm-console-logging/target/storm-console-logging-0.9.1-incubating-SNAPSHOT.jar
Wrote /workspace/tools/storm/storm-console-logging/pom.xml
Installed jar and pom into local repo.
Reading project from storm-core
java.lang.Exception: Error loading storm-core/project.clj
 at leiningen.core.project$read$fn__4553.invoke (project.clj:827)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke (protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke (protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$partial_task$fn__4230.doInvoke (main.clj:234)
clojure.lang.RestFn.applyTo (RestFn.java:139)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$apply_task.invoke (main.clj:281)
leiningen.core.main$resolve_and_apply.invoke (main.clj:287)
leiningen.core.main$_main$fn__4295.invoke (main.clj:357)
leiningen.core.main$_main.doInvoke (main.clj:344)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:624)
clojure.main$main_opt.invoke (main.clj:315)
clojure.main$main.doInvoke (main.clj:420)
clojure.lang.RestFn.invoke (RestFn.java:457)
clojure.lang.Var.invoke (Var.java:394)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.Var.applyTo (Var.java:700)
clojure.main.main (main.java:37)
Caused by: clojure.lang.Compiler$CompilerException:
java.lang.IllegalArgumentException: Duplicate keys: :javac-options,
compiling:(/workspace/tools/storm/storm-core/project.clj:17:62)
 at clojure.lang.Compiler.load (Compiler.java:7142)
clojure.lang.Compiler.loadFile (Compiler.java:7086)
clojure.lang.RT$3.invoke (RT.java:318)
leiningen.core.project$read$fn__4553.invoke (project.clj:825)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke (protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke (protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo 
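
The root cause in the trace above is the "Duplicate keys: :javac-options" error: a Clojure map literal, such as the project map in storm-core/project.clj, rejects a key that appears twice, so the fix is to delete one of the duplicated :javac-options entries. As a rough Java analogy (the values here are hypothetical, not the actual project.clj contents), Map.of fails the same way on a duplicated key:

```java
import java.util.Map;

public class DuplicateKeyDemo {
    public static void main(String[] args) {
        try {
            // Like a Clojure map literal with :javac-options listed twice,
            // Map.of rejects duplicate keys with IllegalArgumentException.
            Map<String, String> opts = Map.of(
                    "javac-options", "-target 1.6",
                    "javac-options", "-source 1.6");
            System.out.println(opts);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Removing the duplicate line from project.clj (keeping a single :javac-options entry) should let lein sub install get past storm-core.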

getting tuple by messageId

2014-06-18 Thread 이승진
Hi all,
I'm using BaseRichSpout, and to my knowledge I have to implement the
replay logic myself to guarantee message processing.
Looking at the fail method of BaseRichSpout, public void
fail(java.lang.Object msgId), it only gives me the messageId. Of course
I set the collector in the open method, so I think that if I make the
fail method do collector.emit(failed tuple), I can guarantee
exactly-once message processing. Is there any way to get a tuple in the
tuple tree by its id, or do I have to fetch it again from some source?
thanks
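
For the question above: Storm's fail callback only hands back the msgId, not the tuple, so the usual pattern is for the spout itself to cache each emitted tuple keyed by its msgId, drop it on ack, and look it up again on fail to re-emit. A minimal sketch of that cache (illustrative names, not part of the Storm API; note that replaying on fail gives at-least-once rather than exactly-once processing):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of spout-side replay state (hypothetical helper, not Storm API):
// remember every emitted tuple by msgId so fail(msgId) can look it up
// and re-emit, instead of re-fetching it from the source.
public class PendingTuples<T> {
    private final Map<Object, T> pending = new ConcurrentHashMap<>();

    // call right after collector.emit(values, msgId)
    public void track(Object msgId, T tuple) {
        pending.put(msgId, tuple);
    }

    // ack(msgId): the tuple tree completed, so forget the tuple
    public T ack(Object msgId) {
        return pending.remove(msgId);
    }

    // fail(msgId): return the original tuple so the spout can re-emit it
    public T replay(Object msgId) {
        return pending.get(msgId);
    }
}
```

In open() you would create one of these alongside the collector, call track() at every emit, and in fail() re-emit the result of replay(msgId) with the same msgId.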