Re: Cassandra bolt

2014-09-25 Thread Harsha
Did you try [1]https://github.com/ptgoetz/storm-cassandra?





On Thu, Sep 25, 2014, at 11:20 AM, Strulovitch, Zack wrote:

I've updated to 0.9.2 from the pre-Apache version 0.9.0.1, which
broke my Cassandra bolt implemented using this code:
[2]https://github.com/tjake/stormscraper
According to some posts, this is due to a Netty conflict. Could
anyone please suggest a reliable alternative Cassandra bolt
implementation?
Thank you in advance,
Zack



References

1. https://github.com/ptgoetz/storm-cassandra
2. https://github.com/tjake/stormscraper
3. https://github.com/tjake/stormscraper


Re: metrics consumer logging stormUI data

2014-09-22 Thread Harsha
Here is what I see in metrics.log:

2014-09-22 09:44:31,321 731751411404271 localhost:6703 19:split __transfer-count {default=2680}
2014-09-22 09:44:31,321 731751411404271 localhost:6703 19:split __execute-latency {spout:default=0.0}
2014-09-22 09:44:31,321 731751411404271 localhost:6703 19:split __fail-count{}
2014-09-22 09:44:31,321 731751411404271 localhost:6703 19:split __emit-count {default=2680}
2014-09-22 09:44:31,321 731751411404271 localhost:6703 19:split __execute-count {spout:default=420}
2014-09-22 09:44:31,352 731791411404271 localhost:6703 22:split __ack-count {spout:default=420}
2014-09-22 09:44:31,352 731791411404271 localhost:6703 22:split __sendqueue {write_pos=2679, capacity=1024, read_pos=2679, population=0}

I do see all the UI-related counts coming in metrics.log.
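For anyone post-processing these lines: the fields are whitespace-separated (date, time, a combined uptime/epoch field, host:port, taskId:component, metric name, value map). A minimal sketch of pulling out the metric name in plain Java, assuming this layout holds:

```java
public class MetricsLineParser {
    // Extracts the metric name from one LoggingMetricsConsumer line.
    // Assumed layout: date time uptime+epoch host:port taskId:component
    // metricName {valueMap} -- the name is the last token before the map.
    static String metricName(String line) {
        int brace = line.indexOf('{');
        String head = (brace >= 0 ? line.substring(0, brace) : line).trim();
        String[] parts = head.split("\\s+");
        return parts[parts.length - 1];
    }

    public static void main(String[] args) {
        String line = "2014-09-22 09:44:31,321 731751411404271 "
                + "localhost:6703 19:split __transfer-count {default=2680}";
        System.out.println(metricName(line)); // prints __transfer-count
    }
}
```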



-Harsha





On Mon, Sep 22, 2014, at 10:41 AM, Raphael Hsieh wrote:

Hi Harsha,
Did you have to bind the metrics consumer to the default
StormUI metrics at all? Or do those automagically get included?

Thanks!

On Mon, Sep 22, 2014 at 10:33 AM, Otis Gospodnetic
<[1]otis.gospodne...@gmail.com> wrote:

Hi Gezim,

On Fri, Sep 19, 2014 at 7:27 PM, Gezim Musliaj
<[2]gmusl...@gmail.com> wrote:

Hey Otis, I just registered at Sematext and I can say that
this is what I have been looking for. I have just one question:
what about the delays between SPM and the Storm cluster (if
they exist), and what's the worst case? I ask because these
metrics are not calculated locally, but sent over an internet
connection.


The worst case is that somebody unplugs your servers from the
network, but if that happens you have bigger problems to deal
with.  In all seriousness, Storm (local) => SPM
(remote/cloud/saas) is not really a problem -- lots of people
successfully use SPM for monitoring Storm, Hadoop, Kafka, and
other types of systems.

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log
Management
Solr & Elasticsearch Support * [3]http://sematext.com/



Thanks !


On Sat, Sep 20, 2014 at 1:15 AM, Otis Gospodnetic
<[4]otis.gospodne...@gmail.com> wrote:

Raphael,

Not sure if this is what you are after, but [5]SPM will collect
and graph all Storm metrics, let you do alerting and anomaly
detection on them, etc.  If you want to graph custom metrics
(e.g. something from your bolts), you can send them in as
[6]custom metrics and again graph them, alert on them, do
anomaly detection on them, stick them on dashboards, etc.  If
you want to emit events from your bolts, you can [7]send events
to SPM, too, or you can send them to [8]Logsene... can be handy
for correlation with alerts and performance graphs when
troubleshooting.  Here are some Storm metrics graphs:
[9]http://blog.sematext.com/2014/01/30/announcement-apache-storm-monitoring-in-spm/

I hope this helps.

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log
Management
Solr & Elasticsearch Support * [10]http://sematext.com/


On Fri, Sep 19, 2014 at 6:12 PM, Raphael Hsieh
<[11]raffihs...@gmail.com> wrote:

Hi,
Using Storm/Trident, how do I register a metrics consumer to
log the data I get in the StormUI ?
I want to look at historical data of my topology, for example
the execute latency of the topology over time, as this would
give me good insight as to where things might be going wrong
when the system breaks.

I have been following the steps outlined in the BigData
CookBook here:
[12]http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to

However, I don't want to create my own metrics; I just want
to log the metrics already built in to Storm. It is unclear to
me how I am supposed to go about doing that.

Thanks

--
Raphael Hsieh











--
Raphael Hsieh

References

1. mailto:otis.gospodne...@gmail.com
2. mailto:gmusl...@gmail.com
3. http://sematext.com/
4. mailto:otis.gospodne...@gmail.com
5. http://sematext.com/spm/
6. https://sematext.atlassian.net/wiki/display/PUBSPM/Custom+Metrics
7. https://sematext.atlassian.net/wiki/display/PUBSPM/Events+Integration
8. http://www.sematext.com/logsene/
9. 
http://blog.sematext.com/2014/01/30/announcement-apache-storm-monitoring-in-spm/
  10. http://sematext.com/
  11. mailto:raffihs...@gmail.com
  12. http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to


Re: metrics consumer logging stormUI data

2014-09-22 Thread Harsha
Hi Raphael,

I tested it with the WordCountTopology under examples:

conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 2);

I do see the metrics added to logs/metrics.log. metrics.log
should be present by default under the storm/logs dir.

-Harsha









On Mon, Sep 22, 2014, at 09:24 AM, Raphael Hsieh wrote:

Thanks Harsha and Otis for your prompt responses.
I'm looking to somehow log these metrics to use for an in-house
monitoring system. I don't want to get user provided metrics
just yet.

From what I've gathered from the big data cookbook, I just
want to create a metrics consumer to read these metrics and
print them out to a log file. In order to do this I have added
to my config:

config.registerMetricsConsumer(LoggingMetricsConsumer.class, 2);

which should create a LoggingMetricsConsumer with a parallelism
of 2 (I believe). I was led to believe that these logs would
be put in a file called "metrics.log". However, after adding
this to my topology I have been unable to find such a log.
If someone could explain what I might be missing, that would
be great.

Thanks!

On Fri, Sep 19, 2014 at 4:27 PM, Gezim Musliaj
<[1]gmusl...@gmail.com> wrote:

Hey Otis, I just registered at Sematext and I can say that
this is what I have been looking for. I have just one question:
what about the delays between SPM and the Storm cluster (if
they exist), and what's the worst case? I ask because these
metrics are not calculated locally, but sent over an internet
connection.

Thanks !


On Sat, Sep 20, 2014 at 1:15 AM, Otis Gospodnetic
<[2]otis.gospodne...@gmail.com> wrote:

Raphael,

Not sure if this is what you are after, but [3]SPM will collect
and graph all Storm metrics, let you do alerting and anomaly
detection on them, etc.  If you want to graph custom metrics
(e.g. something from your bolts), you can send them in as
[4]custom metrics and again graph them, alert on them, do
anomaly detection on them, stick them on dashboards, etc.  If
you want to emit events from your bolts, you can [5]send events
to SPM, too, or you can send them to [6]Logsene... can be handy
for correlation with alerts and performance graphs when
troubleshooting.  Here are some Storm metrics graphs:
[7]http://blog.sematext.com/2014/01/30/announcement-apache-storm-monitoring-in-spm/

I hope this helps.

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log
Management
Solr & Elasticsearch Support * [8]http://sematext.com/


On Fri, Sep 19, 2014 at 6:12 PM, Raphael Hsieh
<[9]raffihs...@gmail.com> wrote:

Hi,
Using Storm/Trident, how do I register a metrics consumer to
log the data I get in the StormUI ?
I want to look at historical data of my topology, for example
the execute latency of the topology over time, as this would
give me good insight as to where things might be going wrong
when the system breaks.

I have been following the steps outlined in the BigData
CookBook here:
[10]http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to

However, I don't want to create my own metrics; I just want
to log the metrics already built in to Storm. It is unclear to
me how I am supposed to go about doing that.

Thanks

--
Raphael Hsieh








--
Raphael Hsieh

References

1. mailto:gmusl...@gmail.com
2. mailto:otis.gospodne...@gmail.com
3. http://sematext.com/spm/
4. https://sematext.atlassian.net/wiki/display/PUBSPM/Custom+Metrics
5. https://sematext.atlassian.net/wiki/display/PUBSPM/Events+Integration
6. http://www.sematext.com/logsene/
7. 
http://blog.sematext.com/2014/01/30/announcement-apache-storm-monitoring-in-spm/
8. http://sematext.com/
9. mailto:raffihs...@gmail.com
  10. http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to


Re: metrics consumer logging stormUI data

2014-09-19 Thread Harsha
You can add the following to storm.yaml to enable the
LoggingMetricsConsumer:

topology.metrics.consumer.register:
  - class: "backtype.storm.metric.LoggingMetricsConsumer"
    parallelism.hint: 1

Storm UI doesn't display user-provided metrics, and it doesn't
keep historical data about the metrics either; if the cluster
is restarted, topology stats will be reset.

You can find a bit more info on this page:

[1]http://blog.relateiq.com/monitoring-storm/



-Harsha



On Fri, Sep 19, 2014, at 03:12 PM, Raphael Hsieh wrote:

Hi,
Using Storm/Trident, how do I register a metrics consumer to
log the data I get in the StormUI ?
I want to look at historical data of my topology, for example
the execute latency of the topology over time, as this would
give me good insight as to where things might be going wrong
when the system breaks.

I have been following the steps outlined in the BigData
CookBook
here: [2]http://www.bigdata-cookbook.com/post/72320512609/storm
-metrics-how-to

However I am not wanting to create my own metrics, instead I
just want to log the metrics that already exist built in to
Storm. It is unclear to me how I am supposed to go about doing
that.

Thanks

--
Raphael Hsieh

References

1. http://blog.relateiq.com/monitoring-storm/
2. http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to


Re: Trying to run test Storm App on Windows but getting problems with POM file

2014-09-16 Thread Harsha
Hi,

Did you change storm/pom.xml? (The archiver stripped the XML
tags; the dependency should read:)

<groupId>org.apache.storm</groupId>
<artifactId>storm</artifactId>
<version>0.9.3-incubating-SNAPSHOT</version>

And also, are you running mvn install from the top-level dir,
not from storm-starter?

-Harsha



On Tue, Sep 16, 2014, at 03:12 PM, Gezim Musliaj wrote:

I have been following these instructions:

If you are using the latest development version of Storm, e.g.
by having cloned the Storm git repository, then you must first
perform a local build of Storm itself. Otherwise you will run
into Maven errors such as "Could not resolve dependencies for
project org.apache.storm:storm-starter:-SNAPSHOT".
# Must be run from the top-level directory of the Storm code repository
$ mvn clean install -DskipTests=true

This command will build Storm locally and install its jar files
to your user's $HOME/.m2/repository/. When you run the Maven
command to build and run storm-starter (see below), Maven will
then be able to find the corresponding version of Storm in this
local Maven repository at $HOME/.m2/repository.
From [1]https://github.com/apache/incubator-storm/tree/master/examples/storm-starter


On Wed, Sep 17, 2014 at 12:02 AM, Nick Beenham
<[2]nick.been...@gmail.com> wrote:

I think you'll need to build and install in your local maven
repo, i dont think 0.9.3 is in maven central.


On Tue, Sep 16, 2014 at 4:47 PM, Gezim Musliaj
<[3]gmusl...@gmail.com> wrote:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building storm-starter 0.9.3-incubating-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.apache.storm:storm-core:jar:0.9.3-incubating is missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.193 s
[INFO] Finished at: 2014-09-16T22:39:58+02:00
[INFO] Final Memory: 8M/113M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project storm-starter: Could not resolve dependencies for project org.apache.storm:storm-starter:jar:0.9.3-incubating-SNAPSHOT: Failure to find org.apache.storm:storm-core:jar:0.9.3-incubating in [4]http://repo1.maven.org/maven2/ was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] [6]http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException


=
I have tried the solution provided by
"[7]http://mail-archives.apache.org/mod_mbox/storm-user/201404.mbox/%3CCALFqTqR7HeZ=k2CdrTbq_NTW52YpPOkAsKa_HZrJGF+QRH2pDg@mail.gmail.com%3E"
by adding the given rows and by changing the version to 0.9.3
(the solution email used 0.9.1).

Thanks in advance!

References

1. https://github.com/apache/incubator-storm/tree/master/examples/storm-starter
2. mailto:nick.been...@gmail.com
3. mailto:gmusl...@gmail.com
4. http://repo1.maven.org/maven2/
5. http://repo1.maven.org/maven2/
6. http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
7. 
http://mail-archives.apache.org/mod_mbox/storm-user/201404.mbox/%3CCALFqTqR7HeZ=k2cdrtbq_ntw52yppokaska_hzrjgf+qrh2...@mail.gmail.com%3E


Re: muliple-nodes kafka cluster

2014-09-16 Thread Harsha
Hi Alec,

A single-node Kafka cluster is not recommended for anything
beyond development. I highly recommend using a multi-node
cluster and creating a partitioned topic with replication. This
not only lets you take in more data at faster rates, it also
keeps your cluster running if there is a node failure: since
the topic is replicated, there wouldn't be huge data loss.

"If I am using multiple nodes, is the tradeoff the connection
time among different nodes?"

The Kafka producer API sends a message to a broker either
round-robin or based on a partition function.

Please go through the Kafka docs
here [1]http://kafka.apache.org/documentation.html for the
simple consumer and also for how replication works among
multiple nodes.
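The round-robin-or-partition-function choice can be modelled in a few lines of plain Java. This is a simplified illustration of the idea, not Kafka's actual partitioner code:

```java
public class PartitionDemo {
    // Simplified model: keyed messages hash to a fixed partition, so all
    // messages for a key land on the same broker; keyless messages are
    // spread round-robin across partitions.
    static int counter = 0;

    static int partitionFor(String key, int numPartitions) {
        if (key == null) {
            return counter++ % numPartitions; // round-robin
        }
        // |hashCode % n| is always < n, so Math.abs is safe here
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", 3);
        int p2 = partitionFor("user-42", 3);
        System.out.println(p1 == p2); // same key -> same partition
    }
}
```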



-Harsha





On Tue, Sep 16, 2014, at 02:06 PM, Sa Li wrote:

Hi, All

I have been using a Kafka cluster on a single server with three
brokers, but I am thinking of building a larger Kafka cluster,
say 4 nodes (servers) with 3 brokers on each node, so 12
brokers in total. Would that be better than a single-node
cluster? Or is a single node fair enough? Since the web API may
push a million rows into the Kafka cluster every day, I am kind
of worried whether the cluster can take that much data without
losing any. If I am using multiple nodes, is the tradeoff the
connection time among different nodes?


thanks

Alec

References

1. http://kafka.apache.org/documentation.html


Re: Storm 0.9.2-incubating - num workers and num executors switched?

2014-09-16 Thread Harsha
Hi Jing,

It's a UI bug, fixed in
trunk: [1]https://issues.apache.org/jira/browse/STORM-369

-Harsha





On Tue, Sep 16, 2014, at 12:45 PM, Tao, Jing wrote:

We recently upgraded to Storm 0.9.2-incubating and found that
on the UI, Num workers and Num executors are switched.

Example:

In older version (0.9.0.1):

[screenshot: image001.png]

In new version (0.9.2-incubating):

[screenshot: image002.png]

Is this a UI bug?  Or did something change in Storm core
functionality?

Thanks,

Jing

  Email had 2 attachments:
  * image001.png
  13k (image/png)
  * image002.png
  14k (image/png)

References

1. https://issues.apache.org/jira/browse/STORM-369


Re: How Do Workers Connect To Nimbus

2014-09-08 Thread Harsha
Stephen,

I am not able to reach that IP. But you shouldn't modify
default.yaml; just change storm.yaml under conf.

"Will the storm.yaml be the same on my worker and nimbus
machine?"

It should be the same on both machines. Make sure your
zookeeper is also running on that ip. And check the logs under
your storm installation; they should be under the logs dir.

-Harsha





On Mon, Sep 8, 2014, at 05:23 PM, Stephen Hartzell wrote:

All,

  I implemented the suggestions given by Parth and Harsha. I am
now using the default.yaml, but I changed
storm.zookeeper.servers to the nimbus machine's ip address,
54.68.149.181. I also changed nimbus.host to 54.68.149.181 and
opened up port 6627. Now the UI web page gives the following
error: org.apache.thrift7.transport.TTransportException:
java.net.ConnectException: Connection refused

You should be able to see the error by going to the web page
yourself at [1]http://54.68.149.181:8080. I am only using this
account to test whether I can even get storm to work, so these
machines are only for testing. Perhaps someone could tell me
what the storm.yaml file should look like for this setup?

-Thanks, Stephen
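Pulling together the advice elsewhere in this thread, a minimal storm.yaml for this two-machine setup (zookeeper and nimbus both on 54.68.149.181, and /home/storm instead of /tmp/storm as Harsha suggested) might look like this on both machines. A sketch, not a tested configuration:

```yaml
storm.zookeeper.servers:
  - "54.68.149.181"
nimbus.host: "54.68.149.181"
storm.local.dir: "/home/storm"
```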


On Mon, Sep 8, 2014 at 7:41 PM, Stephen Hartzell
<[2]hartzell.step...@gmail.com> wrote:

I'm getting kind of confused by the storm.yaml file. Should I
be using the default.yaml and just modify the zookeeper and
nimbus ip, or should I use a brand-new storm.yaml?

My nimbus machine has the ip address 54.68.149.181. My
zookeeper is on the nimbus machine. What should the storm.yaml
look like on my worker and nimbus machines? Will the storm.yaml
be the same on both? I am not trying to do anything fancy; I am
just trying to get a very basic cluster up and running.

-Thanks, Stephen


On Mon, Sep 8, 2014 at 7:00 PM, Stephen Hartzell
<[3]hartzell.step...@gmail.com> wrote:

  All, thanks so much for your help. I cannot tell you how much
I appreciate it. I'm going to try out your suggestions and keep
banging my head against the wall :D. I've spent an enormous
amount of time trying to get this to work. I'll let you know
what happens after I implement your suggestions. It would be
really cool if someone had a tutorial that detailed this part
(I'll make it myself if I ever get this to work!). It seems
like getting a two-machine cluster set up on AWS would be a
VERY common use case. I've read and watched everything I can on
the topic and nothing got it working for me!


On Mon, Sep 8, 2014 at 6:54 PM, Parth Brahmbhatt
<[4]pbrahmbh...@hortonworks.com> wrote:

The worker connects to the thrift port, not the ui port. You
need to open port 6627, or whatever value is set in storm.yaml
for the property “nimbus.thrift.port”.

Based on the configuration you have posted so far, it seems
your nimbus host has nimbus, ui, and supervisor working because
you actually have zookeeper running locally on that host. As
Harsha pointed out, you need to change it to the public ip
instead of the loopback interface.

Thanks
Parth


On Sep 8, 2014, at 3:42 PM, Harsha <[5]st...@harsha.io> wrote:

storm.zookeeper.servers:
 - "127.0.0.1"
nimbus.host: "127.0.0.1" (127.0.0.1 binds a loopback interface;
instead use your public ip or 0.0.0.0)
storm.local.dir: /tmp/storm (I recommend moving this to a
different folder, probably /home/storm; /tmp/storm will get
deleted if your machine is restarted)

Make sure your zookeeper is also listening on 0.0.0.0 or the
public ip, not 127.0.0.1.

"No, I cannot ping my host which has a public ip address of
54.68.149.181"
You are not able to reach this ip from the worker node, but you
are able to access the UI using it?
-Harsha

On Mon, Sep 8, 2014, at 03:34 PM, Stephen Hartzell wrote:

Harsha,

  The storm.yaml on the host machine looks like this:

storm.zookeeper.servers:
 - "127.0.0.1"


nimbus.host: "127.0.0.1"

storm.local.dir: /tmp/storm


  The storm.yaml on the worker machine looks like this:

storm.zookeeper.servers:
 - "54.68.149.181"


nimbus.host: "54.68.149.181"

storm.local.dir: /tmp/storm

No, I cannot ping my host which has a public ip address of
54.68.149.181 although I can connect to the UI web page when it
is hosted. I don't know how I would go about connecting to
zookeeper on the nimbus host.
-Thanks, Stephen


On Mon, Sep 8, 2014 at 6:28 PM, Harsha <[6]st...@harsha.io>
wrote:

Are there any errors in the worker machine's supervisor logs?
Are you using the same storm.yaml for both machines, and are
you able to ping your nimbus host or connect to zookeeper on
the nimbus host?
-Harsha
-Harsha


On Mon, Sep 8, 2014, at 03:24 PM, Stephen Hartzell wrote:

Harsha,

  Thanks so much for getting back with me. I will check the
logs, but I don't seem to get any error messages. I have a
nimbus AWS machine with zookeepe

Re: How Do Workers Connect To Nimbus

2014-09-08 Thread Harsha
storm.zookeeper.servers:
 - "127.0.0.1"
nimbus.host: "127.0.0.1" (127.0.0.1 binds a loopback interface;
instead use your public ip or 0.0.0.0)
storm.local.dir: /tmp/storm (I recommend moving this to a
different folder, probably /home/storm; /tmp/storm will get
deleted if your machine is restarted)

Make sure your zookeeper is also listening on 0.0.0.0 or the
public ip, not 127.0.0.1.

"No, I cannot ping my host which has a public ip address of
54.68.149.181"
You are not able to reach this ip from the worker node, but you
are able to access the UI using it?
-Harsha

On Mon, Sep 8, 2014, at 03:34 PM, Stephen Hartzell wrote:

Harsha,

  The storm.yaml on the host machine looks like this:

storm.zookeeper.servers:
 - "127.0.0.1"


nimbus.host: "127.0.0.1"

storm.local.dir: /tmp/storm


  The storm.yaml on the worker machine looks like this:

storm.zookeeper.servers:
 - "54.68.149.181"


nimbus.host: "54.68.149.181"

storm.local.dir: /tmp/storm

No, I cannot ping my host which has a public ip address of
54.68.149.181 although I can connect to the UI web page when it
is hosted. I don't know how I would go about connecting to
zookeeper on the nimbus host.
-Thanks, Stephen


On Mon, Sep 8, 2014 at 6:28 PM, Harsha <[1]st...@harsha.io>
wrote:

Are there any errors in the worker machine's supervisor logs?
Are you using the same storm.yaml for both machines, and are
you able to ping your nimbus host or connect to zookeeper on
the nimbus host?
-Harsha
-Harsha


On Mon, Sep 8, 2014, at 03:24 PM, Stephen Hartzell wrote:

Harsha,

  Thanks so much for getting back with me. I will check the
logs, but I don't seem to get any error messages. I have a
nimbus AWS machine with zookeeper on it and a worker AWS
machine.

On the nimbus machine I start the zookeeper and then I run:

bin/storm nimbus &
bin/storm supervisor &
bin/storm ui

On the worker machine I run:
bin/storm supervisor

When I go to the UI page, I only see 1 supervisor (the one on
the nimbus machine). So apparently, the worker machine isn't
"registering" with the nimbus machine.


On Mon, Sep 8, 2014 at 6:16 PM, Harsha <[2]st...@harsha.io>
wrote:

Hi Stephen,
What are the issues you are seeing?
"How do worker machines "know" how to connect to nimbus? Is it
in the storm configuration file"
Yes. Make sure the supervisor (worker) and nimbus nodes are
able to connect to your zookeeper cluster.
Check your logs under storm_inst/logs/ for any errors when you
try to start nimbus or supervisors.
If you are installing it manually, try following these steps if
you haven't already done so:
[3]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/
-Harsha



On Mon, Sep 8, 2014, at 03:01 PM, Stephen Hartzell wrote:

All,

I would greatly appreciate any help that anyone would afford.
I've been trying to setup a storm cluster on AWS for a few
weeks now on centOS EC2 machines. So far, I haven't been able
to get a cluster built. I can get a supervisor and nimbus to
run on a single machine, but I can't figure out how to get
another worker to connect to nimbus. How do worker machines
"know" how to connect to nimbus? Is it in the storm
configuration file? I've gone through many tutorials and the
official documentation, but this point doesn't seem to be
covered anywhere in sufficient detail for a new guy like me.

  Some of you may be tempted to point me toward storm-deploy,
but I spent four days trying to get that to work until I gave
up; I'm having Issue #58 on github. Following the instructions
exactly, and other tutorials, on a brand-new AWS machine fails.
So I gave up on storm-deploy and decided to try to set up a
cluster manually. Thanks in advance to anyone willing to offer
any input!

References

1. mailto:st...@harsha.io
2. mailto:st...@harsha.io
3. http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/


Re: How Do Workers Connect To Nimbus

2014-09-08 Thread Harsha
Are there any errors in the worker machine's supervisor logs?
Are you using the same storm.yaml for both machines, and are
you able to ping your nimbus host or connect to zookeeper on
the nimbus host?

-Harsha





On Mon, Sep 8, 2014, at 03:24 PM, Stephen Hartzell wrote:

Harsha,

  Thanks so much for getting back with me. I will check the
logs, but I don't seem to get any error messages. I have a
nimbus AWS machine with zookeeper on it and a worker AWS
machine.

On the nimbus machine I start the zookeeper and then I run:

bin/storm nimbus &
bin/storm supervisor &
bin/storm ui

On the worker machine I run:
bin/storm supervisor

When I go to the UI page, I only see 1 supervisor (the one on
the nimbus machine). So apparently, the worker machine isn't
"registering" with the nimbus machine.


On Mon, Sep 8, 2014 at 6:16 PM, Harsha <[1]st...@harsha.io>
wrote:

Hi Stephen,
What are the issues you are seeing?
"How do worker machines "know" how to connect to nimbus? Is it
in the storm configuration file"
Yes. Make sure the supervisor (worker) and nimbus nodes are
able to connect to your zookeeper cluster.
Check your logs under storm_inst/logs/ for any errors when you
try to start nimbus or supervisors.
If you are installing it manually, try following these steps if
you haven't already done so:
[2]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/
-Harsha



On Mon, Sep 8, 2014, at 03:01 PM, Stephen Hartzell wrote:

All,

I would greatly appreciate any help that anyone would afford.
I've been trying to setup a storm cluster on AWS for a few
weeks now on centOS EC2 machines. So far, I haven't been able
to get a cluster built. I can get a supervisor and nimbus to
run on a single machine, but I can't figure out how to get
another worker to connect to nimbus. How do worker machines
"know" how to connect to nimbus? Is it in the storm
configuration file? I've gone through many tutorials and the
official documentation, but this point doesn't seem to be
covered anywhere in sufficient detail for a new guy like me.

  Some of you may be tempted to point me toward storm-deploy,
but I spent four days trying to get that to work until I gave
up; I'm having Issue #58 on github. Following the instructions
exactly, and other tutorials, on a brand-new AWS machine fails.
So I gave up on storm-deploy and decided to try to set up a
cluster manually. Thanks in advance to anyone willing to offer
any input!

References

1. mailto:st...@harsha.io
2. http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/


Re: How Do Workers Connect To Nimbus

2014-09-08 Thread Harsha
Hi Stephen,

What are the issues you are seeing?

"How do worker machines "know" how to connect to nimbus? Is it
in the storm configuration file"

Yes. Make sure the supervisor (worker) and nimbus nodes are
able to connect to your zookeeper cluster.

Check your logs under storm_inst/logs/ for any errors when you
try to start nimbus or supervisors.

If you are installing it manually, try following these steps if
you haven't already done so:

[1]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/

-Harsha







On Mon, Sep 8, 2014, at 03:01 PM, Stephen Hartzell wrote:

All,

I would greatly appreciate any help that anyone would afford.
I've been trying to setup a storm cluster on AWS for a few
weeks now on centOS EC2 machines. So far, I haven't been able
to get a cluster built. I can get a supervisor and nimbus to
run on a single machine, but I can't figure out how to get
another worker to connect to nimbus. How do worker machines
"know" how to connect to nimbus? Is it in the storm
configuration file? I've gone through many tutorials and the
official documentation, but this point doesn't seem to be
covered anywhere in sufficient detail for a new guy like me.

  Some of you may be tempted to point me toward storm-deploy,
but I spent four days trying to get that to work until I gave
up; I'm having Issue #58 on github. Following the instructions
exactly, and other tutorials, on a brand-new AWS machine fails.
So I gave up on storm-deploy and decided to try to set up a
cluster manually. Thanks in advance to anyone willing to offer
any input!

References

1. http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/


Re: Is there a Tweeter Streaming Spout?

2014-09-08 Thread Harsha
Just to note, it's an example spout. But I am not sure why it
wouldn't allow a parallelism of more than 1. The Twitter API is
HTTP calls, so by increasing the parallelism of the spout you
are making more calls to the Twitter API. I think Twitter
rate-limits based on app id; that's the limitation I can see,
but it's not related to spout parallelism. Even with a single
spout instance you can go over your API call rate limit.
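Since the limit is per app id rather than per spout instance, any number of spout tasks share one budget, and some client-side throttling is needed to stay under it. A hypothetical sliding-window sketch in plain Java (the class, window size, and call counts are illustrative, not from any Twitter client library):

```java
import java.util.ArrayDeque;

public class RateLimiter {
    // Allows at most maxCalls per windowMillis via a sliding window of
    // timestamps; callers pass in the current time for testability.
    private final int maxCalls;
    private final long windowMillis;
    private final ArrayDeque<Long> stamps = new ArrayDeque<>();

    RateLimiter(int maxCalls, long windowMillis) {
        this.maxCalls = maxCalls;
        this.windowMillis = windowMillis;
    }

    synchronized boolean tryAcquire(long now) {
        // Drop timestamps that have aged out of the window.
        while (!stamps.isEmpty() && now - stamps.peekFirst() >= windowMillis) {
            stamps.pollFirst();
        }
        if (stamps.size() >= maxCalls) return false; // budget spent
        stamps.addLast(now);
        return true;
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter(2, 1000);
        System.out.println(limiter.tryAcquire(0));    // true
        System.out.println(limiter.tryAcquire(10));   // true
        System.out.println(limiter.tryAcquire(20));   // false: window full
        System.out.println(limiter.tryAcquire(1500)); // true: window slid
    }
}
```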



-Harsha





On Mon, Sep 8, 2014, at 06:35 AM, Vikas Agarwal wrote:

That is interesting. However, it won't allow spout parallelism
more than 1, right?



On Mon, Sep 8, 2014 at 6:56 PM, Harsha <[1]st...@harsha.io>
wrote:

Hi Connie,
  You can take a look at the TwitterSampleSpout in
examples: [2]https://github.com/apache/incubator-storm/blob/master/examples/storm-starter/src/jvm/storm/starter/spout/TwitterSampleSpout.java
It uses twitter4j to read the API; you can make changes to fit
your needs.
-Harsha


On Mon, Sep 8, 2014, at 12:15 AM, Vikas Agarwal wrote:

I guess not, and it wouldn't make sense to have one, because it
would limit the parallelism of the spout. The Twitter stream
allows only a single connection. You can use threading for
parallelism in stream consumption, but it would be difficult to
manage with spouts. A better solution would be to write a
standalone twitter stream listener with multithreading that
pushes messages to Kafka (or some JMS queue), and then consume
them using, for instance, the KafkaSpout. That would allow you
to increase the parallelism of the spout up to the number of
partitions of the topic.
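The listener-plus-queue handoff described above can be sketched in plain Java. This is a hypothetical stand-in with no Twitter or Storm APIs involved: `listen` plays the role of the stream listener thread, `nextTuple` the spout's non-blocking drain:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ListenerQueueDemo {
    // Stand-in for the stream listener: pushes incoming tweets into a
    // bounded queue, so a slow consumer applies back-pressure.
    static void listen(BlockingQueue<String> queue, List<String> tweets)
            throws InterruptedException {
        for (String t : tweets) {
            queue.put(t); // blocks when the queue is full
        }
    }

    // Stand-in for the spout's nextTuple(): drain without blocking.
    static String nextTuple(BlockingQueue<String> queue) {
        return queue.poll(); // null when nothing is buffered
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
        List<String> tweets = Arrays.asList("#storm tweet 1", "#storm tweet 2");
        Thread listener = new Thread(() -> {
            try {
                listen(queue, tweets);
            } catch (InterruptedException ignored) { }
        });
        listener.start();
        listener.join();
        System.out.println(nextTuple(queue)); // prints #storm tweet 1
    }
}
```

A real KafkaSpout setup replaces the in-process queue with a Kafka topic, which is what lets the spout parallelism scale with partitions.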



On Mon, Sep 8, 2014 at 12:21 PM, Connie Yang
<[3]cybercon...@gmail.com> wrote:

Hi,

Is there a spout that streams the Twitter feed based on a list
of hashtags?

Thanks,
Connie




--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[4]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
[5]+1 (408) 988-2000 Work
[6]+1 (408) 716-2726 Fax





--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[7]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. mailto:st...@harsha.io
2. 
https://github.com/apache/incubator-storm/blob/master/examples/storm-starter/src/jvm/storm/starter/spout/TwitterSampleSpout.java
3. mailto:cybercon...@gmail.com
4. http://www.infoobjects.com/
5. tel:%2B1%20%28408%29%20988-2000
6. tel:%2B1%20%28408%29%20716-2726
7. http://www.infoobjects.com/


Re: Is there a Tweeter Streaming Spout?

2014-09-08 Thread Harsha
Hi Connie,

  You can take a look at TwitterSampleSpout in the
examples: [1]https://github.com/apache/incubator-storm/blob/master/examples/storm-starter/src/jvm/storm/starter/spout/TwitterSampleSpout.java

It uses twitter4j to read the API; you can make changes to fit
your needs.

-Harsha





On Mon, Sep 8, 2014, at 12:15 AM, Vikas Agarwal wrote:

I guess not, and it wouldn't make sense to have one because it would
limit the parallelism of the spout. The Twitter stream allows only a
single connection to the stream. You can use threading to parallelize
stream consumption, but it would be difficult to manage with spouts. A
better solution would be to write a standalone Twitter stream listener
with multithreading, push the messages to Kafka (or some JMS queue),
and then consume them using KafkaSpout, for instance. That would allow
you to increase the spout parallelism up to the number of partitions
of the topic.



On Mon, Sep 8, 2014 at 12:21 PM, Connie Yang
<[2]cybercon...@gmail.com> wrote:

Hi,

Is there a spout that streams the Twitter feed based on a list of
hashtags?

Thanks,
Connie




--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[3]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. 
https://github.com/apache/incubator-storm/blob/master/examples/storm-starter/src/jvm/storm/starter/spout/TwitterSampleSpout.java
2. mailto:cybercon...@gmail.com
3. http://www.infoobjects.com/


Re: cannot run ready project

2014-09-07 Thread Harsha
Can you give a bit more detail on which project you are using?
If it's available on GitHub I can try it out.

-Harsha





On Sun, Sep 7, 2014, at 05:18 AM, researcher cs wrote:

Any help with this?


On Thu, Sep 4, 2014 at 2:59 PM, researcher cs
<[1]prog.researc...@gmail.com> wrote:

Thanks for replying. I'm using Eclipse Java EE IDE for Web
Developers.

Version: Kepler Release



On Wed, Sep 3, 2014 at 11:32 PM, P. Taylor Goetz
<[2]ptgo...@gmail.com> wrote:

What IDE are you using?


> On Sep 3, 2014, at 5:26 PM, researcher cs
<[3]prog.researc...@gmail.com> wrote:
>
> any help .. ?
>
>> On 9/2/14, researcher cs <[4]prog.researc...@gmail.com>
wrote:
>> i imported ready project and when run it i got this
>>
>> Resource Path Location Type The project was not built since
its build
>> path is incomplete. Cannot find the class file for
>> storm.trident.state.State. Fix the build path then try
building this
>> project first-stories-twitter-master Unknown Java Problem
The type
>>
>> storm.trident.state.State cannot be resolved. It is
indirectly
>> referenced from required .class files RecentTweetsDB.java
>> /first-stories-twitter-master/src/main/java/trident/state
>>
>> can i find help on this ?
>>
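For what it's worth, "storm.trident.state.State cannot be resolved" usually means the storm-core jar is missing from the project's build path. With Maven, a hedged sketch of the dependency (version and scope are assumptions; match them to your cluster):

```xml
<!-- Sketch: storm-core provides the Trident classes (storm.trident.*).
     "provided" scope keeps it out of the submitted topology jar. -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>0.9.2-incubating</version>
  <scope>provided</scope>
</dependency>
```

In Eclipse without Maven, the equivalent is adding the storm-core jar to the project's Java Build Path.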

References

1. mailto:prog.researc...@gmail.com
2. mailto:ptgo...@gmail.com
3. mailto:prog.researc...@gmail.com
4. mailto:prog.researc...@gmail.com


Re: Using Kafka 0.7 with Storm 0.9.2

2014-09-05 Thread Harsha
Saurabh,

Storm 0.9.0 didn't ship a Kafka connector, but 0.9.2
comes with one. It used to be an external
project, [1]https://github.com/wurstmeister/storm-kafka-0.8-plus,
and it works with Kafka 0.8. You can modify the connector to
work with Kafka 0.7. Storm core doesn't have any dependency on
Kafka, so you can maintain your own version of the Kafka spout.

-Harsha







On Thu, Sep 4, 2014, at 11:42 PM, Saurabh Minni wrote:

Hi,
I can see that Storm 0.9.2 has a Kafka spout which uses
Kafka 0.8.x.

I have a setup with Kafka 0.7 and for some reason moving to
0.8.x is not possible.

So my question is: if I have to use Kafka 0.7, should I
stick to Storm 0.9.0 and not look at Storm 0.9.2 at all?

Or is there some way to use Storm 0.9.2 with Kafka 0.7?

Thanks,
Saurabh

References

1. https://github.com/wurstmeister/storm-kafka-0.8-plus


Re: Kafka Spout Warnings

2014-09-03 Thread Harsha
Hi Nick,

 What's your log.retention setting on Kafka? It might be that
Kafka is deleting your data before the KafkaSpout is able to
consume it.

-Harsha
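For reference, the broker-side retention Harsha is asking about lives in Kafka's server.properties; a hedged sketch (property names per Kafka 0.8, values illustrative):

```properties
# Segments older than this are deleted; if the spout falls further behind
# than the retention window, its requested offsets go out of range.
log.retention.hours=168
# Optional per-partition size cap; -1 disables size-based deletion.
log.retention.bytes=-1
```

If retention is much shorter than the spout's worst-case lag, the "offset out of range" warnings below are expected.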





On Wed, Sep 3, 2014, at 10:01 AM, Nick Beenham wrote:

We have started to see a lot of these errors within the logs,
and the tuples being emitted but not transferred from the spout
to the bolt.

Any ideas?

2014-09-03 16:18:28 s.k.KafkaUtils [WARN] Got fetch request
with offset out of range: [9551]; retrying with default start
offset time from configuration. configured start offset time:
[-2] offset: [0]

2014-09-03 16:18:28 s.k.KafkaUtils [WARN] Got fetch request
with offset out of range: [616248]; retrying with default start
offset time from configuration. configured start offset time:
[-2] offset: [0]

Thanks,

Nick


Re: Issues with Topology with Kafka Spout

2014-09-03 Thread Harsha


Vikas,

   "Kafka server is started with default properties except the
log retention period being 15 minutes"

This seems like very aggressive log retention on the Kafka side;
hence you might be running into "Got fetch request with offset out of
range".



" Too many failed messages at the spout. I assumed that initially,
when the topology starts, there might be a few thousand messages that
fail because of initialization latency; however, it seems this
behavior is not limited to initialization: messages fail quite often,
and very rarely do I see no failed messages in the last 10
minutes. :)"



Have you seen any errors in the worker logs? "Failed messages at
the spout" is a bit confusing; it might be that your bolts are failing
and the spout is receiving a "fail" acknowledgement from the bolts.



Every time I submit my topology, it takes more than 10 minutes
for messages to reach the first bolt. First the spout tries to
accumulate messages (with too many failed messages) for the first
few minutes (10 mins or so).

  This seems strange. How many partitions does your topic have, and
what's the parallelism on the spout?

-Harsha



On Tue, Sep 2, 2014, at 10:22 PM, Vikas Agarwal wrote:

Hi,

I am not sure if this mailing list is the correct place
for this; however, I decided to ask here assuming that many Storm
cluster installations involve Kafka as their spout.

I have set following properties for Kafka Spout:

kafkaConfig.bufferSizeBytes = 1024 * 1024 * 4;
kafkaConfig.fetchSizeBytes = 1024 * 1024 * 4;
kafkaConfig.forceFromStart = true|false; (tried both, true and
false)

Kafka server is started with default properties except the log
retention period being 15 minutes.

And Storm configuration is as mentioned the Michael Noll's
[1]blog

conf.put(Config.TOPOLOGY_RECEIVER_BUFFER_SIZE, 8);
conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE, 32);
conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);

topology.max.spout.pending = 1
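For completeness, the same tuning can be expressed in storm.yaml; a sketch mirroring the Config.put() values above (key names as in Storm 0.9.x defaults.yaml):

```yaml
topology.receiver.buffer.size: 8
topology.transfer.buffer.size: 32
topology.executor.receive.buffer.size: 16384
topology.executor.send.buffer.size: 16384
# Cap on pending (un-acked) tuples per spout task
topology.max.spout.pending: 1
```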

I am using Hortonworks distribution for installing Hadoop
ecosystem.

We are consuming the Twitter stream and pushing the tweets to a
Kafka topic, and then a Storm topology tries to consume those
tweets using KafkaSpout with the configuration described above. We
are using the Twitter filter stream, and we have many filter
keywords, so the input flux is quite high (not as high as with the
firehose, but still very high) and varies quite a lot depending
on the time of day and on any of the keywords, used as a track
filter, going viral on a particular day.

Now I am facing 3 major issues with my topology (which contains
3 bolts after the kafka spout)

1) Too many failed messages at the spout. I assumed that initially,
when the topology starts, there might be a few thousand messages that
fail because of initialization latency; however, it seems this
behavior is not limited to initialization: messages fail quite often,
and very rarely do I see no failed messages in the last 10
minutes. :)

2) After a while the Kafka spout begins to throw the "Got fetch
request with offset out of range" error message continuously and
never picks up any message from the Kafka topic, while the stream
collector is still able to push messages to the topic.

3) Every time I submit my topology, it takes more than 10
minutes for messages to reach the first bolt. First the spout tries
to accumulate messages (with too many failed messages) for the
first few minutes (10 mins or so), and then each bolt starts
accumulating messages sequentially; after 15-20 min, every
bolt in the topology has some messages to process. I am not
able to understand why a message that has been processed by the
spout is not delivered to the next bolt immediately. I guess the
message buffers described in Michael Noll's blog are
responsible for this, but changing the buffer sizes didn't make
any change in behavior.

--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[2]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. 
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
2. http://www.infoobjects.com/


Re: REMOTE MODE STORM DEV

2014-09-03 Thread Harsha


Pavan,

 Which user is starting the Storm daemons? From your previous
emails it looks like you are starting them as user "storm". The
storm dir is owned by root, and the Storm daemons try to write to
storm-local and also to log files, which might be what is causing the
issues. I recommend you go through this
tutorial: [1]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/

Although it is a multi-node cluster setup, you can adapt it to a
single host. Running Storm or similar services as root is a bad
idea.

The tutorial above talks about creating a "storm" user and running the
services as that user. Try to set up your installation that
way.

-Harsha



On Wed, Sep 3, 2014, at 12:09 AM, Pavan Jakati G wrote:

Hi Harsha,

I am running it on single host . Attached is the storm.yaml
file . Permissions of the directory is as below ,

ls -ld /root/apache-storm-0.9.2-incubating

drwxrwxrwx 11 root root 4096 Sep  2 05:33
/root/apache-storm-0.9.2-incubating

ll /root/apache-storm-0.9.2-incubating

total 120

drwxrwxrwx 3 root root  4096 Sep  1 09:49 bin

-rw-r--r-- 1 root root 34239 Jun 12 20:46 CHANGELOG.md

drwxrwxrwx 2 root root  4096 Sep  1 12:31 conf

-rw-r--r-- 1 root root   538 Mar 12 23:17 DISCLAIMER

drwxrwxrwx 3 root root  4096 Jun 16 12:22 examples

drwxrwxrwx 3 root root  4096 Jun 16 12:22 external

drwxrwxrwx 2 root root  4096 Jun 16 12:22 lib

-rw-r--r-- 1 root root 22822 Jun 11 16:07 LICENSE

drwxrwxrwx 2 root root  4096 Jun 16 12:22 logback

drwxr-xr-x 3 root root  4096 Sep  2 13:08 logs

-rw-r--r-- 1 root root   981 Jun 10 13:10 NOTICE

drwxrwxrwx 5 root root  4096 Jun 16 12:22 public

-rw-r--r-- 1 root root  7445 Jun  9 14:24 README.markdown

-rw-r--r-- 1 root root    17 Jun 16 12:22 RELEASE

-rw-r--r-- 1 root root  3581 May 29 12:20 SECURITY.md

drwxr-xr-x 4 root root  4096 Sep  2 05:34 storm-local

Regards,

PaVan…

From: Harsha [mailto:st...@harsha.io]
Sent: 03 September 2014 00:02
To: user@storm.incubator.apache.org
Subject: Re: REMOTE MODE STORM DEV

Pavan,

   It would be helpful if you could post your storm.yaml.
Make sure user "storm" has permissions on your Storm
installation dir and that you used the same Storm config on all
your machines.

-Harsha

On Tue, Sep 2, 2014, at 06:53 AM, Supun Kamburugamuva wrote:

Hi Pavan,

It seems you have a permission issue. Please check whether the
storm user has appropriate permissions on the directories that
contain the Storm jars.

Thanks,

Supun..

On Tue, Sep 2, 2014 at 9:13 AM, Pavan Jakati G
<[2]pava...@microland.com> wrote:

Can anybody help us get rid of below error :

sudo -u storm /usr/java/jdk1.7.0_65/bin/java  -server -Xmx768m
-Djava.library.path=storm-local/supervisor/stormdist/PaVan-14-1
409661199/resources/Linux-amd64:storm-local/supervisor/stormdis
t/PaVan-14-1409661199/resources:/usr/local/lib:/opt/local/lib:/
usr/lib -Dlogfile.name=worker-6703.log
-Dstorm.home=/root/apache-storm-0.9.2-incubating
-Dlogback.configurationFile=/root/apache-storm-0.9.2-incubating
/logback/cluster.xml -Dstorm.id=PaVan-14-1409661199
-Dworker.id=156a8af9-fa3b-4772-b91c-787490fe0b34
-Dworker.port=6703 -cp
/root/apache-storm-0.9.2-incubating/lib/json-simple-1.1.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/core.incubator-0.1.0.jar:/
root/apache-storm-0.9.2-incubating/lib/asm-4.0.jar:/root/apache
-storm-0.9.2-incubating/lib/commons-io-2.4.jar:/root/apache-sto
rm-0.9.2-incubating/lib/httpclient-4.3.3.jar:/root/apache-storm
-0.9.2-incubating/lib/jline-2.11.jar:/root/apache-storm-0.9.2-i
ncubating/lib/tools.logging-0.2.3.jar:/root/apache-storm-0.9.2-
incubating/lib/logback-classic-1.0.6.jar:/root/apache-storm-0.9
.2-incubating/lib/commons-logging-1.1.3.jar:/root/apache-storm-
0.9.2-incubating/lib/ring-core-1.1.5.jar:/root/apache-storm-0.9
.2-incubating/lib/ring-devel-0.3.11.jar:/root/apache-storm-0.9.
2-incubating/lib/curator-client-2.4.0.jar:/root/apache-storm-0.
9.2-incubating/lib/clj-stacktrace-0.2.4.jar:/root/apache-storm-
0.9.2-incubating/lib/clj-time-0.4.1.jar:/root/apache-storm-0.9.
2-incubating/lib/commons-lang-2.5.jar:/root/apache-storm-0.9.2-
incubating/lib/zookeeper-3.4.5.jar:/root/apache-storm-0.9.2-inc
ubating/lib/compojure-1.1.3.jar:/root/apache-storm-0.9.2-incuba
ting/lib/joda-time-2.0.jar:/root/apache-storm-0.9.2-incubating/
lib/chill-java-0.3.5.jar:/root/apache-storm-0.9.2-incubating/li
b/clout-1.0.1.jar:/root/apache-storm-0.9.2-incubating/lib/kryo-
2.21.jar:/root/apache-storm-0.9.2-incubating/lib/snakeyaml-1.11
.jar:/root/apache-storm-0.9.2-incubating/lib/minlog-1.2.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/storm-core-0.9.2-incubatin
g.jar:/root/apache-storm-0.9.2-incubating/lib/jgrapht-core-0.9.
0.jar:/root/apache-storm-0.9.2-incubating/lib/slf4j-api-1.6.5.j
ar:/root/apache-storm-0.9.2-incubating/lib/hiccup-0.3.6.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/netty-3.6.3.Final.jar:/roo
t/apache-storm-0.9.2-incubating/lib/curator-framework-2.4.0.jar
:/root/apache-storm-0.9.2-incubating/lib/guav

Re: REMOTE MODE STORM DEV

2014-09-02 Thread Harsha
Pavan,

   It would be helpful if you could post your storm.yaml.
Make sure user "storm" has permissions on your Storm
installation dir and that you used the same Storm config on all
your machines.



-Harsha



On Tue, Sep 2, 2014, at 06:53 AM, Supun Kamburugamuva wrote:

Hi Pavan,

It seems you have a permission issue. Please check whether the
storm user has appropriate permissions on the directories that
contain the Storm jars.

Thanks,
Supun..



On Tue, Sep 2, 2014 at 9:13 AM, Pavan Jakati G
<[1]pava...@microland.com> wrote:

Can anybody help us get rid of below error :

sudo -u storm /usr/java/jdk1.7.0_65/bin/java  -server -Xmx768m
-Djava.library.path=storm-local/supervisor/stormdist/PaVan-14-1
409661199/resources/Linux-amd64:storm-local/supervisor/stormdis
t/PaVan-14-1409661199/resources:/usr/local/lib:/opt/local/lib:/
usr/lib -Dlogfile.name=worker-6703.log
-Dstorm.home=/root/apache-storm-0.9.2-incubating
-Dlogback.configurationFile=/root/apache-storm-0.9.2-incubating
/logback/cluster.xml -Dstorm.id=PaVan-14-1409661199
-Dworker.id=156a8af9-fa3b-4772-b91c-787490fe0b34
-Dworker.port=6703 -cp
/root/apache-storm-0.9.2-incubating/lib/json-simple-1.1.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/core.incubator-0.1.0.jar:/
root/apache-storm-0.9.2-incubating/lib/asm-4.0.jar:/root/apache
-storm-0.9.2-incubating/lib/commons-io-2.4.jar:/root/apache-sto
rm-0.9.2-incubating/lib/httpclient-4.3.3.jar:/root/apache-storm
-0.9.2-incubating/lib/jline-2.11.jar:/root/apache-storm-0.9.2-i
ncubating/lib/tools.logging-0.2.3.jar:/root/apache-storm-0.9.2-
incubating/lib/logback-classic-1.0.6.jar:/root/apache-storm-0.9
.2-incubating/lib/commons-logging-1.1.3.jar:/root/apache-storm-
0.9.2-incubating/lib/ring-core-1.1.5.jar:/root/apache-storm-0.9
.2-incubating/lib/ring-devel-0.3.11.jar:/root/apache-storm-0.9.
2-incubating/lib/curator-client-2.4.0.jar:/root/apache-storm-0.
9.2-incubating/lib/clj-stacktrace-0.2.4.jar:/root/apache-storm-
0.9.2-incubating/lib/clj-time-0.4.1.jar:/root/apache-storm-0.9.
2-incubating/lib/commons-lang-2.5.jar:/root/apache-storm-0.9.2-
incubating/lib/zookeeper-3.4.5.jar:/root/apache-storm-0.9.2-inc
ubating/lib/compojure-1.1.3.jar:/root/apache-storm-0.9.2-incuba
ting/lib/joda-time-2.0.jar:/root/apache-storm-0.9.2-incubating/
lib/chill-java-0.3.5.jar:/root/apache-storm-0.9.2-incubating/li
b/clout-1.0.1.jar:/root/apache-storm-0.9.2-incubating/lib/kryo-
2.21.jar:/root/apache-storm-0.9.2-incubating/lib/snakeyaml-1.11
.jar:/root/apache-storm-0.9.2-incubating/lib/minlog-1.2.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/storm-core-0.9.2-incubatin
g.jar:/root/apache-storm-0.9.2-incubating/lib/jgrapht-core-0.9.
0.jar:/root/apache-storm-0.9.2-incubating/lib/slf4j-api-1.6.5.j
ar:/root/apache-storm-0.9.2-incubating/lib/hiccup-0.3.6.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/netty-3.6.3.Final.jar:/roo
t/apache-storm-0.9.2-incubating/lib/curator-framework-2.4.0.jar
:/root/apache-storm-0.9.2-incubating/lib/guava-13.0.jar:/root/a
pache-storm-0.9.2-incubating/lib/log4j-over-slf4j-1.6.6.jar:/ro
ot/apache-storm-0.9.2-incubating/lib/commons-fileupload-1.2.1.j
ar:/root/apache-storm-0.9.2-incubating/lib/servlet-api-2.5.jar:
/root/apache-storm-0.9.2-incubating/lib/reflectasm-1.07-shaded.
jar:/root/apache-storm-0.9.2-incubating/lib/jetty-util-6.1.26.j
ar:/root/apache-storm-0.9.2-incubating/lib/objenesis-1.2.jar:/r
oot/apache-storm-0.9.2-incubating/lib/tools.cli-0.2.4.jar:/root
/apache-storm-0.9.2-incubating/lib/ring-jetty-adapter-0.3.11.ja
r:/root/apache-storm-0.9.2-incubating/lib/commons-codec-1.6.jar
:/root/apache-storm-0.9.2-incubating/lib/clojure-1.5.1.jar:/roo
t/apache-storm-0.9.2-incubating/lib/netty-3.2.2.Final.jar:/root
/apache-storm-0.9.2-incubating/lib/math.numeric-tower-0.0.1.jar
:/root/apache-storm-0.9.2-incubating/lib/carbonite-1.4.0.jar:/r
oot/apache-storm-0.9.2-incubating/lib/disruptor-2.10.1.jar:/roo
t/apache-storm-0.9.2-incubating/lib/commons-exec-1.1.jar:/root/
apache-storm-0.9.2-incubating/lib/tools.macro-0.1.0.jar:/root/a
pache-storm-0.9.2-incubating/lib/jetty-6.1.26.jar:/root/apache-
storm-0.9.2-incubating/lib/httpcore-4.3.2.jar:/root/apache-stor
m-0.9.2-incubating/lib/servlet-api-2.5-20081211.jar:/root/apach
e-storm-0.9.2-incubating/lib/ring-servlet-0.3.11.jar:/root/apac
he-storm-0.9.2-incubating/lib/logback-core-1.0.6.jar:/root/apac
he-storm-0.9.2-incubating/conf:storm-local/supervisor/stormdist
/PaVan-14-1409661199/stormjar.jar backtype.storm.daemon.worker
PaVan-14-1409661199 0574446a-f73d-42b0-bcc7-e6dd449cb75a 6703
156a8af9-fa3b-4772-b91c-787490fe0b34

Error: Could not find or load main class
backtype.storm.daemon.worker

Regards,

PaVan…

From: Pavan Jakati G
Sent: 02 September 2014 15:23

To: [2]user@storm.incubator.apache.org
Subject: RE: REMOTE MODE STORM DEV

sudo -u storm '/usr/java/jdk1.7.0_65/bin/java' '-server'
'-Xmx768m' '-Djava.library.path=storm-local/sup

ervisor/stormdist/PaVan-10-1409648607/resources/Linux-amd64:st

Re: Supervisor always down 3s after execution

2014-09-02 Thread Harsha
Hi Benjamin,

 Correct me if I missed it, but in your config I don't
see storm.local.dir defined. If it's not defined in the config, Storm
will create one in the Storm installation dir, which seems to
be

/home/bsoulas/incubator-storm-master/storm-dist/binary/target/apache-storm-0.9.3-ben/apache-storm-0.9.3-ben/

Also, are you running the supervisor and nimbus as user
"bsoulas"? When you run the "storm nimbus" or "storm
supervisor" command, which storm binary is it pointing to? Did you
export
STORM_HOME=/home/bsoulas/incubator-storm-master/storm-dist/binary/target/apache-storm-0.9.3-ben
and also add it to PATH? I am checking to see if you had a
previous installation of Storm and are invoking the storm command
from that installation.

Can you also check the ZooKeeper logs.

-Harsha
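Putting the storm.local.dir point into the config, a minimal storm.yaml might look like this (a sketch; hostnames are taken from the thread, and the local-dir path is an assumption):

```yaml
storm.zookeeper.servers:
  - "paradent-4"
storm.zookeeper.port: 2181
nimbus.host: "paradent-4"
# Pin the local state dir explicitly so the daemons don't fall back to
# creating storm-local under the installation directory.
storm.local.dir: "/home/bsoulas/storm-local"
```

The directory named by storm.local.dir must exist and be writable by the user running the daemons.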



On Tue, Sep 2, 2014, at 03:39 AM, Benjamin SOULAS wrote:

Hi everyone,

I followed your instructions for installing a ZooKeeper server:
I downloaded it from the website, extracted the tar file on
a machine in my cluster, and made these modifications in my
zoo.cfg:


# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/home/bsoulas/zookeeper/zookeeper-3.4.6/data/

# the port at which the clients will connect

clientPort=2181

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

#
[1]http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#
sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1


In the log4j.properties, I uncommented the line for the log
file:

# Example with rolling log file

log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE


Then I went to my storm.yaml (located here in my case, because
I took the source version):

/home/bsoulas/incubator-storm-master/storm-dist/binary/target/a
pache-storm-0.9.3-ben/apache-storm-0.9.3-ben/conf


This file contains this configuration:

### These MUST be filled in for a storm configuration

 storm.zookeeper.servers:

 - "paradent-4"

# - "paradent-47"

# - "paradent-48"

#

 nimbus.host: "paradent-4"

#

#

# # These may optionally be filled in:

#

## List of custom serializations

# topology.kryo.register:

# - org.mycompany.MyType

# - org.mycompany.MyType2: org.mycompany.MyType2Serializer

#

## List of custom kryo decorators

# topology.kryo.decorators:

# - org.mycompany.MyDecorator

#

## Locations of the drpc servers

# drpc.servers:

# - "server1"

# - "server2"

## Metrics Consumers

# topology.metrics.consumer.register:

#   - class: "backtype.storm.metric.LoggingMetricsConsumer"

# parallelism.hint: 1

#   - class: "org.mycompany.MyMetricsConsumer"

# parallelism.hint: 1

# argument:

#   - endpoint: "[2]metrics-collector.mycompany.org"

 dev.zookeeper.path:
"paradent-4.rennes.grid5000.fr:~/home/bsoulas/zookeeper/zookeep
er-3.4.6/"

 storm.zookeeper.port: 2181

To launch Storm on the cluster, I start it with "storm nimbus"
(on a machine named paradent-4), then my ZooKeeper server with
"sh zkServer.sh start" (on paradent-4 again), which creates a
zookeeper_server.pid where the PID of ZooKeeper is written (I
know it's obvious ... >_<).

Then I launch the Storm UI to get a visual of my Storm app (on
paradent-4). Up to this point, everything works fine. Next, I
launch my supervisor on a different machine (here paradent-39)
with "storm supervisor"; it starts, but once again it is down 3
or 4 seconds later.

So I looked at the supervisor.log, located at:

/home/bsoulas/incubator-storm-master/storm-dist/binary/target/a
pache-storm-0.9.3-ben/apache-storm-0.9.3-ben/logs


And here a tricky error appears:

2014-09-02 09:31:37 o.a.c.f.i.CuratorFrameworkImpl [INFO]
Starting

2014-09-02 09:31:37 o.a.z.ZooKeeper [INFO] Initiating client
connection, connectString=paradent-4:2181 sessionTimeout=2
watcher=org.apache.curator.ConnectionState@220df4c8

2014-09-02 09:31:37 o.a.z.ClientCnxn [INFO] Opening socket
connection to server
[3]paradent-4.rennes.grid5000.fr/172.16.97.4:2181. Will not
attempt to authenticate using SASL (unknown error)

2014-09-02 09:31:37 o.a.z.ClientCnxn [INFO] Socket connection
established to
[4]paradent-4.rennes.grid5000.fr/172.16.97.4:2181, initiating
session

2014-09-02 09:31:37 o

Re: Error on Supervisor start

2014-09-02 Thread Harsha
If possible, can you share your storm.yaml? In case you are
upgrading Storm from a previous installation, I recommend you
delete storm-local and the ZooKeeper dataDir and start the
Storm daemons again.

-Harsha





On Tue, Sep 2, 2014, at 08:09 AM, Telles Nobrega wrote:

No, it still doesn't start, but there is no exception thrown.



On Tue, Sep 2, 2014 at 12:00 PM, Harsha <[1]st...@harsha.io>
wrote:

Hi Telles,
I haven't used zeromq or jzmq before, sorry, I can't help you
there.
"so I ran the command by hand and no exceptions were thrown
this time"
So everything looks good now?
-Harsha

On Tue, Sep 2, 2014, at 07:32 AM, Telles Nobrega wrote:

Hi Harsha, so I ran the command by hand and no exceptions were
thrown this time. There was an "unable to delete file" exception
before, but I don't think that is preventing the worker from
starting.



On Mon, Sep 1, 2014 at 1:41 PM, Telles Nobrega
<[2]tellesnobr...@gmail.com> wrote:

One possible problem, just a thought: when I installed zeromq
and jzmq I deleted the folders afterwards. Is that a problem? Do
they need to stay there, or are they only needed to compile and
install?



On Mon, Sep 1, 2014 at 1:22 PM, Telles Nobrega
<[3]tellesnobr...@gmail.com> wrote:

Hi Harsha,

/usr/local/storm belongs to the storm user. I ran into this
problem before when installing 0.8.2, but I can't remember how I
solved it. I will try to start the supervisor manually and see
what happens.



On Mon, Sep 1, 2014 at 1:06 PM, Harsha <[4]st...@harsha.io>
wrote:


Hi Telles,
 Can you check whether the storm user has permissions for
/usr/local/storm? I'm assuming that you installed Storm under
/usr/local/storm and are trying to run the supervisor daemon as
user storm. Storm creates the dirs "storm-local" and "logs" under
STORM_HOME for storing metadata and logs. Before using
supervisord to start the Storm daemons, it would be helpful for
you to test running them manually.
-Harsha

On Mon, Sep 1, 2014, at 08:01 AM, Telles Nobrega wrote:

Hi, I installed a Storm cluster on local VMs running Ubuntu,
following the tutorial
[5]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/#configure-storm,
but I installed storm-0.9.1.

The supervisors were not starting and I ran the command
manually and got this error.

2014-09-01 14:56:16 b.s.d.worker [ERROR] Error on
initialization of server mk-worker
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native
Method) ~[na:1.7.0_51]
at java.io.File.createNewFile(File.java:1006)
~[na:1.7.0_51]
at backtype.storm.util$touch.invoke(util.clj:493)
~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at
backtype.storm.daemon.worker$eval4413$exec_fn__1102__auto44
14.invoke(worker.clj:352) ~[na:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:185)
[clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.4.0.jar:na]
at clojure.core$apply.invoke(core.clj:601)
~[clojure-1.4.0.jar:na]
at
backtype.storm.daemon.worker$eval4413$mk_worker__4469.doInvoke(
worker.clj:344) [na:0.9.1-incubating]
at clojure.lang.RestFn.invoke(RestFn.java:512)
[clojure-1.4.0.jar:na]
at
backtype.storm.daemon.worker$_main.invoke(worker.clj:454)
[na:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:172)
[clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.4.0.jar:na]
at backtype.storm.daemon.worker.main(Unknown Source)
[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
2014-09-01 14:56:16 b.s.util [INFO] Halting process: ("Error on
initialization")


Have anyone seen this?

Thanks

--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG





--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG




--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG




--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG





--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG

References

1. mailto:st...@harsha.io
2. mailto:tellesnobr...@gmail.com
3. mailto:tellesnobr...@gmail.com
4. mailto:st...@harsha.io
5. 
http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/#configure-storm


Re: Error on Supervisor start

2014-09-02 Thread Harsha
Hi Telles,

I haven't used zeromq or jzmq before, sorry, I can't help you
there.

"so I ran the command by hand and no exceptions were thrown
this time"

So everything looks good now?

-Harsha



On Tue, Sep 2, 2014, at 07:32 AM, Telles Nobrega wrote:

Hi Harsha, so I ran the command by hand and no exceptions were
thrown this time. There was an "unable to delete file" exception
before, but I don't think that is preventing the worker from
starting.



On Mon, Sep 1, 2014 at 1:41 PM, Telles Nobrega
<[1]tellesnobr...@gmail.com> wrote:

One possible problem, just a thought: when I installed zeromq
and jzmq I deleted the folders afterwards. Is that a problem? Do
they need to stay there, or are they only needed to compile and
install?



On Mon, Sep 1, 2014 at 1:22 PM, Telles Nobrega
<[2]tellesnobr...@gmail.com> wrote:

Hi Harsha,

/usr/local/storm belongs to the storm user. I ran into this
problem before when installing 0.8.2, but I can't remember how I
solved it. I will try to start the supervisor manually and see
what happens.



On Mon, Sep 1, 2014 at 1:06 PM, Harsha <[3]st...@harsha.io>
wrote:


Hi Telles,
 Can you check whether the storm user has permissions for
/usr/local/storm? I'm assuming that you installed Storm under
/usr/local/storm and are trying to run the supervisor daemon as
user storm. Storm creates the dirs "storm-local" and "logs" under
STORM_HOME for storing metadata and logs. Before using
supervisord to start the Storm daemons, it would be helpful for
you to test running them manually.
-Harsha

On Mon, Sep 1, 2014, at 08:01 AM, Telles Nobrega wrote:

Hi, I installed a Storm cluster on local VMs running Ubuntu,
following the tutorial
[4]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/#configure-storm,
but I installed storm-0.9.1.

The supervisors were not starting and I ran the command
manually and got this error.

2014-09-01 14:56:16 b.s.d.worker [ERROR] Error on
initialization of server mk-worker
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native
Method) ~[na:1.7.0_51]
at java.io.File.createNewFile(File.java:1006)
~[na:1.7.0_51]
at backtype.storm.util$touch.invoke(util.clj:493)
~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at
backtype.storm.daemon.worker$eval4413$exec_fn__1102__auto44
14.invoke(worker.clj:352) ~[na:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:185)
[clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.4.0.jar:na]
at clojure.core$apply.invoke(core.clj:601)
~[clojure-1.4.0.jar:na]
at
backtype.storm.daemon.worker$eval4413$mk_worker__4469.doInvoke(
worker.clj:344) [na:0.9.1-incubating]
at clojure.lang.RestFn.invoke(RestFn.java:512)
[clojure-1.4.0.jar:na]
at
backtype.storm.daemon.worker$_main.invoke(worker.clj:454)
[na:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:172)
[clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.4.0.jar:na]
at backtype.storm.daemon.worker.main(Unknown Source)
[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
2014-09-01 14:56:16 b.s.util [INFO] Halting process: ("Error on
initialization")


Have anyone seen this?

Thanks

--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG






References

1. mailto:tellesnobr...@gmail.com
2. mailto:tellesnobr...@gmail.com
3. mailto:st...@harsha.io
4. 
http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/#configure-storm


Re: Error on Supervisor start

2014-09-01 Thread Harsha


Hi Telles,

 Can you check whether the storm user has permissions on
/usr/local/storm? I'm assuming you installed Storm under
/usr/local/storm and are trying to run the supervisor daemon as
user storm. Storm creates the dirs "storm-local" and "logs"
under STORM_HOME for storing metadata and logs. Before using
supervisord to start the Storm daemons, it would be helpful to
test running them manually.
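To make that check concrete, here is a small preflight sketch
(the /usr/local/storm default below is an assumption; substitute
your install location) that verifies the daemon user can create
files where Storm will write, which is exactly the operation the
worker's "touch" failed on:

```python
# Preflight check: can the current user create files under the
# directories Storm writes to (storm-local/ and logs/)?
import os
import tempfile

def writable(path):
    """Return True if we can create and remove a file in `path`."""
    if not os.path.isdir(path):
        return False
    try:
        # Same operation Storm's util "touch" performs on startup.
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
        return True
    except OSError:
        return False

# STORM_HOME location is an assumption; override via environment.
storm_home = os.environ.get("STORM_HOME", "/usr/local/storm")
for sub in ("storm-local", "logs"):
    d = os.path.join(storm_home, sub)
    status = "writable" if writable(d) else "NOT writable (or missing)"
    print(f"{d}: {status}")
```

If either line reports "NOT writable", `chown -R storm` on
STORM_HOME (or pre-creating the dirs with the right owner)
should clear the IOException above.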

-Harsha



On Mon, Sep 1, 2014, at 08:01 AM, Telles Nobrega wrote:

Hi, I installed a Storm cluster on local VMs running Ubuntu,
following the tutorial [1]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/#configure-storm,
but I installed storm-0.9.1.

The supervisors were not starting, so I ran the command
manually and got this error:

2014-09-01 14:56:16 b.s.d.worker [ERROR] Error on
initialization of server mk-worker
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native
Method) ~[na:1.7.0_51]
at java.io.File.createNewFile(File.java:1006)
~[na:1.7.0_51]
at backtype.storm.util$touch.invoke(util.clj:493)
~[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
at
backtype.storm.daemon.worker$eval4413$exec_fn__1102__auto44
14.invoke(worker.clj:352) ~[na:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:185)
[clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.4.0.jar:na]
at clojure.core$apply.invoke(core.clj:601)
~[clojure-1.4.0.jar:na]
at
backtype.storm.daemon.worker$eval4413$mk_worker__4469.doInvoke(
worker.clj:344) [na:0.9.1-incubating]
at clojure.lang.RestFn.invoke(RestFn.java:512)
[clojure-1.4.0.jar:na]
at
backtype.storm.daemon.worker$_main.invoke(worker.clj:454)
[na:0.9.1-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:172)
[clojure-1.4.0.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.4.0.jar:na]
at backtype.storm.daemon.worker.main(Unknown Source)
[storm-core-0.9.1-incubating.jar:0.9.1-incubating]
2014-09-01 14:56:16 b.s.util [INFO] Halting process: ("Error on
initialization")


Has anyone seen this?

Thanks

--
--
Telles Mota Vidal Nobrega
M.sc. Candidate at UFCG
B.sc. in Computer Science at UFCG
Software Engineer at OpenStack Project - HP/LSD-UFCG

References

1. 
http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/#configure-storm


Re: Data validation

2014-08-29 Thread Harsha
Kushan,

   Why not use a Cassandra counter to implement
this: [1]http://www.datastax.com/documentation/cql/3.0/cql/cql_using/use_counter_t.html.

You can create a counter column in a Cassandra table and let
the storm bolts update it.

I don't have much knowledge of the internal representation of
Cassandra counters or how accurate they will be.
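To reduce write pressure on such a counter, a common pattern is
for each bolt instance to tally locally and flush increments in
batches. A driver-free sketch of the idea (the SharedCounter
class below is a stand-in for the Cassandra counter column,
which likewise supports only atomic increments; names are
illustrative, not any real API):

```python
# Sketch: per-bolt-instance local tallies, flushed in batches as
# atomic increments to one shared counter. With Cassandra the
# flush would be "UPDATE t SET n = n + %s WHERE id = %s".
import threading

class SharedCounter:
    """Stand-in for a counter column: atomic increments only."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self, delta):
        with self._lock:
            self.value += delta

class CountingBolt:
    """Keeps a local count, flushing every `flush_every` tuples."""
    def __init__(self, shared, flush_every=100):
        self.shared = shared
        self.flush_every = flush_every
        self.local = 0

    def execute(self, tup):
        self.local += 1
        if self.local >= self.flush_every:
            self.flush()

    def flush(self):
        # Also called from the bolt's cleanup() on shutdown.
        self.shared.increment(self.local)
        self.local = 0

shared = SharedCounter()
bolts = [CountingBolt(shared, flush_every=10) for _ in range(5)]
for i in range(1000):
    bolts[i % 5].execute(i)   # round-robin, like a shuffle grouping
for b in bolts:
    b.flush()                 # final flush so nothing is lost
print(shared.value)           # 1000: every record accounted for
```

Batching this way keeps the counter update rate proportional to
the flush interval rather than the tuple rate.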

-Harsha





On Fri, Aug 29, 2014, at 12:15 PM, Kushan Maskey wrote:

I have a batch process that runs more than 100K records of data
and loads them into Cassandra. I am having a hard time
validating the exact number of records that get stored into C*.
C* now has more than 20 million records, and when I do SELECT
COUNT(1) FROM TABLE, I get "Request did not complete within
rpc_timeout". I tried to increase the rpc_timeout, but it
didn't help. The load process completes successfully without
any errors in the log, so I assumed that Storm and Kafka are
set up correctly.

I have 5 bolts, and I am now at the point of adding a counter
to the bolts to count how many messages were successfully
inserted. I tried to add a static counter field, but that will
not work in a clustered environment. Can anyone suggest a
better way to validate the number of records that get inserted
into C*? This is one of the initial requirements: to make sure
that the x records we processed through the batch all got
inserted into C*.

I also tried to set StormConfig with a new property like

stormConfig.put("Records_add_counter", 0);

Then I wanted to increment the counter by one every time a
message arrives at a particular bolt, but I get an
UnsupportedOperationException. I am guessing you cannot update
the value of a config property at this point.

Any help will be appreciated. Thanks.

--
Kushan Maskey
817.403.7500

References

1. 
http://www.datastax.com/documentation/cql/3.0/cql/cql_using/use_counter_t.html


Re: Supervisor always down 3s after execution

2014-08-29 Thread Harsha


Hi Benjamin,

A Storm cluster needs a ZooKeeper quorum to function.
ExclamationTopology accepts command-line params to deploy on a
storm cluster; if you don't pass any arguments, it will use
LocalCluster (a simulated local cluster) to deploy.

I recommend going through
[1]http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html
for setting up ZooKeeper. Here is an excellent write-up on
storm cluster setup along with
zookeeper: [2]http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/.
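For reference, a minimal three-node zoo.cfg along the lines of
that admin guide (hostnames and the data directory are
placeholders); each node also needs a matching myid file (1, 2,
or 3) under dataDir:

```
# zoo.cfg - minimal three-node quorum (placeholders, adjust to your hosts)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Then point storm.zookeeper.servers in storm.yaml at those hosts
instead of the bundled dev-zookeeper.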

Hope that helps.

-Harsha



On Fri, Aug 29, 2014, at 05:34 AM, Benjamin SOULAS wrote:

Hello everyone, I have a problem deploying Storm on a cluster
(Grid'5000, if anyone knows it). I took incubator-storm master
from the GitHub repo with the sources and succeeded in creating
my own release (no code modifications, just fixes for Maven
errors that were getting in the way...).

It works fine locally on my laptop. I modified the
ExclamationTopology by adding 40 more bolts, and I also
modified the topology to allow 50 workers in the configuration.

Now, on a cluster, when I try to do the same thing, the
supervisors go down just 3s after starting. Nimbus is OK,
dev-zookeeper too, Storm UI too.

I read somewhere on the Apache website that you need to run a
real ZooKeeper (not the one bundled with Storm).

Please, does someone know a good tutorial explaining how to run
a ZooKeeper server on a cluster for Storm?

I hope I am clear ...

Kind regards.

Benjamin SOULAS

References

Visible links
1. http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html
2. http://www.michael-noll.com/tutorials/running-multi-node-storm-cluster/



Re: Storm not processing topology without logs

2014-08-28 Thread Harsha
If possible, can you post some logs from supervisor.log? I'm
interested in the log output from when your supervisor starts.

-Harsha





On Thu, Aug 28, 2014, at 07:29 AM, Vikas Agarwal wrote:

Yes, I am past it. I killed the processes created by the main
supervisor process for ports 6700 and 6701 and then started the
process for one of those ports.

After that I faced issues due to multiple versions of the same
library in Storm's lib dir, e.g. netty and servlet-api.

After that I faced this stack overflow issue. I have now been
able to fix it as well: multiple slf4j-log4j bindings were the
cause of the stack overflow.
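When a second slf4j binding is being pulled in transitively, a
Maven exclusion along these lines keeps a single binding on the
classpath (the kafka coordinates shown are an assumption based
on the versions mentioned later in this thread; adjust to
whichever dependency your build tool reports as bringing in
slf4j-log4j12):

```xml
<!-- Hypothetical example: exclude the extra slf4j binding that a
     transitive dependency (here assumed to be kafka) drags in. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

`mvn dependency:tree` shows which artifact is contributing the
duplicate binding.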

Now I am back to the same state where the process just doesn't
start. Running the worker command manually is now not even
showing any log except this:

JMXetricAgent instrumented JVM, see
[1]https://github.com/ganglia/jmxetric
Aug 28, 2014 10:28:39 AM info.ganglia.gmetric4j.GMonitor start
INFO: Setting up 1 samplers

And then process get killed.



On Thu, Aug 28, 2014 at 7:22 PM, Harsha <[2]st...@harsha.io>
wrote:

Vikas,
Are you able to get past this error: "Running the
command manually on console causes 'Address already in use'
error for supervisor ports (6700, 6701)"? Did you check whether
any processes are running on those ports?
-Harsha
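A quick way to answer that question without hunting through
`netstat` output is to try binding the port yourself and look
for EADDRINUSE, which is the same failure the worker hits. A
small self-contained sketch (it demonstrates on an ephemeral
port; substitute 6700/6701 to probe the supervisor slots):

```python
# Detect the "Address already in use" condition by attempting a bind.
import errno
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if binding (host, port) fails with EADDRINUSE."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
    except OSError as e:
        return e.errno == errno.EADDRINUSE
    finally:
        s.close()
    return False

# Demonstration: hold a listening socket open, then probe its port.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))   # ephemeral port for the demo
holder.listen(1)
port = holder.getsockname()[1]
print(port_in_use(port))        # True: someone (us) already holds it
holder.close()
print(port_in_use(port))        # False once released
```

In practice `port_in_use(6700)` returning True while no worker
is supposed to be running means a stale worker process must be
killed before the supervisor can relaunch on that slot.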


On Thu, Aug 28, 2014, at 01:58 AM, Vikas Agarwal wrote:

I am getting the following error when trying to run the worker
command directly on the console:


Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"main-SendThread(hdp.ambari:2181)"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-2"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-12-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-10-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-8-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-14-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"Thread-14-feed-stream-SendThread(localhost:2181)"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"Thread-14-feed-stream-SendThread(localhost:2181)"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"Thread-14-feed-stream-SendThread(hdp.ambari:2181)"


As one of the possible causes, I looked for multiple netty jars
as suggested in another mail thread; it didn't help. Can anyone
tell me where I should look next to resolve the issue?




On Tue, Aug 26, 2014 at 2:20 PM, Vikas Agarwal
<[3]vi...@infoobjects.com> wrote:

However, now my topology is failing to start worker process
again. :(

This time it is not showing me any good clue for resolving it.
Running the command manually on the console causes an "Address
already in use" error for the supervisor ports (6700, 6701), so
it is not letting me move forward to see what the actual error
is when running the worker.



On Mon, Aug 25, 2014 at 9:00 PM, Vikas Agarwal
<[4]vi...@infoobjects.com> wrote:

Yes, I was able to see the topology in Storm UI and nothing was
logged into the worker logs. However, as I mentioned, I was
able to resolve it by finding a hint in the supervisor.log file
this time.



On Mon, Aug 25, 2014 at 8:58 PM, Georgy Abraham
<[5]itsmegeo...@gmail.com> wrote:

Are you able to see the topology in storm UI or with storm list
command ?? And worker mentioned in the UI doesn't have any log
??
  __

From: Vikas Agarwal
Sent: 25-08-2014 PM 05:25
To: [6]user@storm.incubator.apache.org
Subject: Storm not processing topology without logs


Hi,

I have started to explore the Storm for distributed processing
for our use case which we were earlier fulfilling by JMS based
MQ system. Topology worked after some efforts. It has one spout
(KafkaSpout from kafka-storm project) and 3 bolts. First bolt
sets context for other two bolts which in turn do some
processing on the tuples and persist the analyzed results in
some DB (Mongo, Solr, HBase etc).

Recently the topology stopped working. I am able to submit the
topology and it does not throw any error on submission;
however, nimbus.log and worker-6701.log show no progress, and
eventually the topology does not consume any messages. I don't
suspect KafkaSpout, because if it were the culprit, at least
some initialization logs from the spout and bolts should have
appeared in nimbus.log or worker-.log.
Isn't that right?

Here is the snippet of nimbus.log after uploading the jar to
cluster

Uploading file from client to
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe1
3c70

Re: Storm not processing topology without logs

2014-08-28 Thread Harsha
Vikas,

Are you able to get past this error: "Running the
command manually on console causes 'Address already in use'
error for supervisor ports (6700, 6701)"? Did you check whether
any processes are running on those ports?

-Harsha





On Thu, Aug 28, 2014, at 01:58 AM, Vikas Agarwal wrote:

I am getting the following error when trying to run the worker
command directly on the console:


Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"main-SendThread(hdp.ambari:2181)"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-2"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-12-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-10-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-8-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread "Thread-14-"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"Thread-14-feed-stream-SendThread(localhost:2181)"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"Thread-14-feed-stream-SendThread(localhost:2181)"

Exception: java.lang.StackOverflowError thrown from the
UncaughtExceptionHandler in thread
"Thread-14-feed-stream-SendThread(hdp.ambari:2181)"


As one of the possible causes, I looked for multiple netty jars
as suggested in another mail thread; it didn't help. Can anyone
tell me where I should look next to resolve the issue?




On Tue, Aug 26, 2014 at 2:20 PM, Vikas Agarwal
<[1]vi...@infoobjects.com> wrote:

However, now my topology is failing to start worker process
again. :(

This time it is not showing me any good clue for resolving it.
Running the command manually on the console causes an "Address
already in use" error for the supervisor ports (6700, 6701), so
it is not letting me move forward to see what the actual error
is when running the worker.



On Mon, Aug 25, 2014 at 9:00 PM, Vikas Agarwal
<[2]vi...@infoobjects.com> wrote:

Yes, I was able to see the topology in Storm UI and nothing was
logged into the worker logs. However, as I mentioned, I was
able to resolve it by finding a hint in the supervisor.log file
this time.



On Mon, Aug 25, 2014 at 8:58 PM, Georgy Abraham
<[3]itsmegeo...@gmail.com> wrote:

Are you able to see the topology in storm UI or with storm list
command ?? And worker mentioned in the UI doesn't have any log
??
  __

From: Vikas Agarwal
Sent: 25-08-2014 PM 05:25
To: [4]user@storm.incubator.apache.org
Subject: Storm not processing topology without logs


Hi,

I have started to explore the Storm for distributed processing
for our use case which we were earlier fulfilling by JMS based
MQ system. Topology worked after some efforts. It has one spout
(KafkaSpout from kafka-storm project) and 3 bolts. First bolt
sets context for other two bolts which in turn do some
processing on the tuples and persist the analyzed results in
some DB (Mongo, Solr, HBase etc).

Recently the topology stopped working. I am able to submit the
topology and it does not throw any error on submission;
however, nimbus.log and worker-6701.log show no progress, and
eventually the topology does not consume any messages. I don't
suspect KafkaSpout, because if it were the culprit, at least
some initialization logs from the spout and bolts should have
appeared in nimbus.log or worker-.log.
Isn't that right?

Here is the snippet of nimbus.log after uploading the jar to
cluster

Uploading file from client to
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe1
3c706b2ab.jar
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Finished uploading file
from client:
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe1
3c706b2ab.jar
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Received topology
submission for aleads with conf
{"topology.max.task.parallelism" nil,
"topology.acker.executors" nil, "topology.kryo.register" nil,
"topology.kryo.decorators" (), "[5]topology.name" "aleads",
"[6]storm.id" "aleads-3-1408964869", "modelId" "ut",
"topology.workers" 1, "topology.debug" true}
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Activating aleads:
aleads-3-1408964869
2014-08-25 07:07:50 b.s.s.EvenScheduler [INFO] Available slots:
(["e56c2cc7-d35a-4355-9906-506618ff70c5" 6701]
["e56c2cc7-d35a-4355-9906-506618ff70c5" 6700])
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Setting new assignment
for topology id aleads-3-1408964869:
#backtype.storm.daemon.common.Assignment

Re: supervisor not listening on port 6700?

2014-08-27 Thread Harsha
Taylor,

   I noticed it's not there in master, but what about the
released package? If users are installing the Apache released
package they might face this issue: it has two netty jars in
the lib dir.

Thanks,

Harsha







On Wed, Aug 27, 2014, at 10:36 AM, P. Taylor Goetz wrote:

This has been resolved in the master branch.



-Taylor



On Aug 27, 2014, at 12:10 PM, Harsha <[1]st...@harsha.io>
wrote:



Looks like a build/release issue with Storm 0.9.2. We might
need to update the package; there shouldn't be two versions of
netty in the lib dir.
Can you please file a JIRA for this.
Thanks,
Harsha


On Wed, Aug 27, 2014, at 08:54 AM, Naga Vij wrote:

Got it ; Thank you!

BTW, I have worked around this issue thus ...

> Noticed two netty jars in lib dir - netty-3.2.2.Final.jar and
netty-3.6.3.Final.jar
> Eliminated both of them, and placed netty-3.9.4.Final.jar

The core & worker processes are steady now.




On Wed, Aug 27, 2014 at 8:08 AM, Harsha <[2]st...@harsha.io>
wrote:

You need to do the following steps:

git clone [3]https://github.com/apache/incubator-storm.git

git checkout v0.9.2-incubating -b 0.9.2-incubating


On Wed, Aug 27, 2014, at 08:03 AM, Naga Vij wrote:

Is the Git Url right?  I just tried and got ...



> git clone
[4]https://github.com/apache/incubator-storm/tree/v0.9.2-incuba
ting

Cloning into 'v0.9.2-incubating'...

fatal: repository
'[5]https://github.com/apache/incubator-storm/tree/v0.9.2-incub
ating/' not found



On Wed, Aug 27, 2014 at 7:41 AM, Harsha <[6]st...@harsha.io>
wrote:


  Storm 0.9.2 is tagged in the GitHub
repo: [7]https://github.com/apache/incubator-storm/tree/v0.9.2-incubating.
-Harsha
On Tue, Aug 26, 2014, at 10:26 PM, Naga Vij wrote:

Does anyone know what the git branch name is for 0.9.2 ?



On Tue, Aug 26, 2014 at 10:24 PM, Naga Vij
<[8]nvbuc...@gmail.com> wrote:

When it gets into `still hasn't started` state, I have noticed
this in UI -

java.lang.RuntimeException: java.net.ConnectException:
Connection refused at
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(Disrup
torQueue.java:128) at backtype.storm.utils.DisruptorQueue.

and am wondering how to overcome this.



On Tue, Aug 26, 2014 at 10:04 PM, Naga Vij
<[9]nvbuc...@gmail.com> wrote:

I left supervisor running with the `still hasn't started` state
on one window, and tried starting the worker on another
window.  That triggered an attempt to start another worker
(with another distinct id) in the first window (the supervisor
window) which in turn went into the `still hasn't started`
state.



On Tue, Aug 26, 2014 at 7:50 PM, Vikas Agarwal
<[10]vi...@infoobjects.com> wrote:

I have almost the same versions of Storm (0.9.1) and Kafka, and
my topologies were facing the same issue. When I ran the worker
command directly, I found that somehow the hostname was wrong
in the configuration passed to the workers. So I fixed that in
the Storm config and my topology worked after that. However,
now it is stuck again with the same "still hasn't started"
error message, and in my case the error when running the worker
command is now "Address already in use" for the supervisor
port.

So, what is the error when you directly run the worker command?



On Tue, Aug 26, 2014 at 9:39 PM, Naga Vij
<[11]nvbuc...@gmail.com> wrote:

I fail to understand why that should happen, as testing with
LocalCluster goes through fine.

I did a clean fresh start to figure out what could be
happening, and here are my observations -

- fresh clean start: cleanup in zk (rmr /storm), and /bin/rm
-fr {storm's tmp dir}
- used local pseudo cluster on my mac
- nimbus process started fine
- supervisor process started fine
- ensured the topology works fine with (the embedded) LocalCluster
- the topology was then submitted to the local pseudo cluster
on my mac; that's when I see ``still hasn't started`` messages
in the supervisor terminal window

When submitting topology to local pseudo cluster, had to add
jars to overcome these ...

Caused by: java.lang.ClassNotFoundException:
storm.kafka.BrokerHosts
Caused by: java.lang.ClassNotFoundException:
kafka.api.OffsetRequest
Caused by: java.lang.ClassNotFoundException: scala.Product

Above were overcome by adding these to lib dir -

storm-kafka-0.9.2-incubating.jar
kafka_2.10-0.8.1.1.jar
scala-library-2.10.1.jar

I have tried the command in the log as well; it hasn't helped.

What am I missing?


On Mon, Aug 25, 2014 at 11:41 PM, Vikas Agarwal
<[12]vi...@infoobjects.com> wrote:

>> dd7c588e-5fa0-4c4b-96ed-de0d420001e9 still hasn't started<<

This is the clue: one of your topologies is failing to start.
You should see the worker launch command before these lines in
the same log file. Just run it directly on the console and it
will show the exact error.



On Tue, Aug 26, 2014 at 11:45 AM, Naga Vij
<[13]nvbuc...@gmail.com> wrote:

Hello,

I am tr

Re: supervisor not listening on port 6700?

2014-08-27 Thread Harsha
Looks like a build/release issue with Storm 0.9.2. We might
need to update the package; there shouldn't be two versions of
netty in the lib dir.

Can you please file a JIRA for this.

Thanks,

Harsha





On Wed, Aug 27, 2014, at 08:54 AM, Naga Vij wrote:

Got it ; Thank you!

BTW, I have worked around this issue thus ...

> Noticed two netty jars in lib dir - netty-3.2.2.Final.jar and
netty-3.6.3.Final.jar
> Eliminated both of them, and placed netty-3.9.4.Final.jar

The core & worker processes are steady now.




On Wed, Aug 27, 2014 at 8:08 AM, Harsha <[1]st...@harsha.io>
wrote:

You need to do the following steps:

git clone [2]https://github.com/apache/incubator-storm.git

git checkout v0.9.2-incubating -b 0.9.2-incubating


On Wed, Aug 27, 2014, at 08:03 AM, Naga Vij wrote:

Is the Git Url right?  I just tried and got ...



> git clone
[3]https://github.com/apache/incubator-storm/tree/v0.9.2-incuba
ting

Cloning into 'v0.9.2-incubating'...

fatal: repository
'[4]https://github.com/apache/incubator-storm/tree/v0.9.2-incub
ating/' not found



On Wed, Aug 27, 2014 at 7:41 AM, Harsha <[5]st...@harsha.io>
wrote:


  Storm 0.9.2 is tagged in the GitHub
repo: [6]https://github.com/apache/incubator-storm/tree/v0.9.2-incubating.
-Harsha
On Tue, Aug 26, 2014, at 10:26 PM, Naga Vij wrote:

Does anyone know what the git branch name is for 0.9.2 ?



On Tue, Aug 26, 2014 at 10:24 PM, Naga Vij
<[7]nvbuc...@gmail.com> wrote:

When it gets into `still hasn't started` state, I have noticed
this in UI -

java.lang.RuntimeException: java.net.ConnectException:
Connection refused at
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(Disrup
torQueue.java:128) at backtype.storm.utils.DisruptorQueue.

and am wondering how to overcome this.



On Tue, Aug 26, 2014 at 10:04 PM, Naga Vij
<[8]nvbuc...@gmail.com> wrote:

I left supervisor running with the `still hasn't started` state
on one window, and tried starting the worker on another
window.  That triggered an attempt to start another worker
(with another distinct id) in the first window (the supervisor
window) which in turn went into the `still hasn't started`
state.



On Tue, Aug 26, 2014 at 7:50 PM, Vikas Agarwal
<[9]vi...@infoobjects.com> wrote:

I have almost the same versions of Storm (0.9.1) and Kafka, and
my topologies were facing the same issue. When I ran the worker
command directly, I found that somehow the hostname was wrong
in the configuration passed to the workers. So I fixed that in
the Storm config and my topology worked after that. However,
now it is stuck again with the same "still hasn't started"
error message, and in my case the error when running the worker
command is now "Address already in use" for the supervisor
port.

So, what is the error when you directly run the worker command?



On Tue, Aug 26, 2014 at 9:39 PM, Naga Vij
<[10]nvbuc...@gmail.com> wrote:

I fail to understand why that should happen, as testing with
LocalCluster goes through fine.

I did a clean fresh start to figure out what could be
happening, and here are my observations -

- fresh clean start: cleanup in zk (rmr /storm), and /bin/rm
-fr {storm's tmp dir}
- used local pseudo cluster on my mac
- nimbus process started fine
- supervisor process started fine
- ensured the topology works fine with (the embedded) LocalCluster
- the topology was then submitted to the local pseudo cluster
on my mac; that's when I see ``still hasn't started`` messages
in the supervisor terminal window

When submitting topology to local pseudo cluster, had to add
jars to overcome these ...

Caused by: java.lang.ClassNotFoundException:
storm.kafka.BrokerHosts
Caused by: java.lang.ClassNotFoundException:
kafka.api.OffsetRequest
Caused by: java.lang.ClassNotFoundException: scala.Product

Above were overcome by adding these to lib dir -

storm-kafka-0.9.2-incubating.jar
kafka_2.10-0.8.1.1.jar
scala-library-2.10.1.jar

I have tried the command in the log as well; it hasn't helped.

What am I missing?


On Mon, Aug 25, 2014 at 11:41 PM, Vikas Agarwal
<[11]vi...@infoobjects.com> wrote:

>> dd7c588e-5fa0-4c4b-96ed-de0d420001e9 still hasn't started<<

This is the clue: one of your topologies is failing to start.
You should see the worker launch command before these lines in
the same log file. Just run it directly on the console and it
will show the exact error.



On Tue, Aug 26, 2014 at 11:45 AM, Naga Vij
<[12]nvbuc...@gmail.com> wrote:

Hello,

I am trying out Storm 0.9.2-incubating pseudo cluster (on just
one box) on these two systems -

> cat /etc/redhat-release
CentOS release 6.3 (Final)

and

> sw_vers
ProductName:    Mac OS X
ProductVersion: 10.9.2
BuildVersion:   13C64

After starting supervisor, I notice it is not listening on the
configured port (6700) -

> nc -zv localhost 6700
nc: connectx to localhost port 6700 (tcp) failed: Connection
refu

Re: supervisor not listening on port 6700?

2014-08-27 Thread Harsha
You need to do the following steps:

git clone [1]https://github.com/apache/incubator-storm.git

git checkout v0.9.2-incubating -b 0.9.2-incubating





On Wed, Aug 27, 2014, at 08:03 AM, Naga Vij wrote:

Is the Git Url right?  I just tried and got ...



> git clone
[2]https://github.com/apache/incubator-storm/tree/v0.9.2-incuba
ting

Cloning into 'v0.9.2-incubating'...

fatal: repository
'[3]https://github.com/apache/incubator-storm/tree/v0.9.2-incub
ating/' not found



On Wed, Aug 27, 2014 at 7:41 AM, Harsha <[4]st...@harsha.io>
wrote:


  Storm 0.9.2 is tagged in the GitHub
repo: [5]https://github.com/apache/incubator-storm/tree/v0.9.2-incubating.
-Harsha
On Tue, Aug 26, 2014, at 10:26 PM, Naga Vij wrote:

Does anyone know what the git branch name is for 0.9.2 ?



On Tue, Aug 26, 2014 at 10:24 PM, Naga Vij
<[6]nvbuc...@gmail.com> wrote:

When it gets into `still hasn't started` state, I have noticed
this in UI -

java.lang.RuntimeException: java.net.ConnectException:
Connection refused at
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(Disrup
torQueue.java:128) at backtype.storm.utils.DisruptorQueue.

and am wondering how to overcome this.



On Tue, Aug 26, 2014 at 10:04 PM, Naga Vij
<[7]nvbuc...@gmail.com> wrote:

I left supervisor running with the `still hasn't started` state
on one window, and tried starting the worker on another
window.  That triggered an attempt to start another worker
(with another distinct id) in the first window (the supervisor
window) which in turn went into the `still hasn't started`
state.



On Tue, Aug 26, 2014 at 7:50 PM, Vikas Agarwal
<[8]vi...@infoobjects.com> wrote:

I have almost the same versions of Storm (0.9.1) and Kafka, and
my topologies were facing the same issue. When I ran the worker
command directly, I found that somehow the hostname was wrong
in the configuration passed to the workers. So I fixed that in
the Storm config and my topology worked after that. However,
now it is stuck again with the same "still hasn't started"
error message, and in my case the error when running the worker
command is now "Address already in use" for the supervisor
port.

So, what is the error when you directly run the worker command?



On Tue, Aug 26, 2014 at 9:39 PM, Naga Vij
<[9]nvbuc...@gmail.com> wrote:

I fail to understand why that should happen, as testing with
LocalCluster goes through fine.

I did a clean fresh start to figure out what could be
happening, and here are my observations -

- fresh clean start: cleanup in zk (rmr /storm), and /bin/rm
-fr {storm's tmp dir}
- used local pseudo cluster on my mac
- nimbus process started fine
- supervisor process started fine
- ensured the topology works fine with (the embedded) LocalCluster
- the topology was then submitted to the local pseudo cluster
on my mac; that's when I see ``still hasn't started`` messages
in the supervisor terminal window

When submitting topology to local pseudo cluster, had to add
jars to overcome these ...

Caused by: java.lang.ClassNotFoundException:
storm.kafka.BrokerHosts
Caused by: java.lang.ClassNotFoundException:
kafka.api.OffsetRequest
Caused by: java.lang.ClassNotFoundException: scala.Product

Above were overcome by adding these to lib dir -

storm-kafka-0.9.2-incubating.jar
kafka_2.10-0.8.1.1.jar
scala-library-2.10.1.jar

I have tried the command in the log as well; it hasn't helped.

What am I missing?


On Mon, Aug 25, 2014 at 11:41 PM, Vikas Agarwal
<[10]vi...@infoobjects.com> wrote:

>> dd7c588e-5fa0-4c4b-96ed-de0d420001e9 still hasn't started<<

This is the clue: one of your topologies is failing to start.
You should see the worker launch command before these lines in
the same log file. Just run it directly on the console and it
will show the exact error.



On Tue, Aug 26, 2014 at 11:45 AM, Naga Vij
<[11]nvbuc...@gmail.com> wrote:

Hello,

I am trying out Storm 0.9.2-incubating pseudo cluster (on just
one box) on these two systems -

> cat /etc/redhat-release
CentOS release 6.3 (Final)

and

> sw_vers
ProductName:    Mac OS X
ProductVersion: 10.9.2
BuildVersion:   13C64

After starting supervisor, I notice it is not listening on the
configured port (6700) -

> nc -zv localhost 6700
nc: connectx to localhost port 6700 (tcp) failed: Connection
refused

When I submit topology, I see this scrolling message in the
terminal window for supervisor -

23:11:44.532 [Thread-2] INFO  backtype.storm.daemon.supervisor
- dd7c588e-5fa0-4c4b-96ed-de0d420001e9 still hasn't started

I don't see any worker id in UI.  No error in logs.

Any idea what could be happening?

Thanks in advance.

Naga




--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[12]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
[13]+1 (408) 988-2000 Work
[14]+1 (408) 716-2726 Fax





--
Regards,
Vikas Agarwal
91 

Re: supervisor not listening on port 6700?

2014-08-27 Thread Harsha


  Storm 0.9.2 is tagged in the GitHub
repo: [1]https://github.com/apache/incubator-storm/tree/v0.9.2-incubating.

-Harsha

On Tue, Aug 26, 2014, at 10:26 PM, Naga Vij wrote:

Does anyone know what the git branch name is for 0.9.2 ?



On Tue, Aug 26, 2014 at 10:24 PM, Naga Vij
<[2]nvbuc...@gmail.com> wrote:

When it gets into `still hasn't started` state, I have noticed
this in UI -

java.lang.RuntimeException: java.net.ConnectException:
Connection refused at
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(Disrup
torQueue.java:128) at backtype.storm.utils.DisruptorQueue.

and am wondering how to overcome this.



On Tue, Aug 26, 2014 at 10:04 PM, Naga Vij
<[3]nvbuc...@gmail.com> wrote:

I left supervisor running with the `still hasn't started` state
on one window, and tried starting the worker on another
window.  That triggered an attempt to start another worker
(with another distinct id) in the first window (the supervisor
window) which in turn went into the `still hasn't started`
state.



On Tue, Aug 26, 2014 at 7:50 PM, Vikas Agarwal
<[4]vi...@infoobjects.com> wrote:

I have almost the same versions of Storm (0.9.1) and Kafka, and
my topologies were facing the same issue. When I ran the worker
command directly, I found that somehow the hostname was wrong
in the configuration passed to the workers. So I fixed that in
the Storm config and my topology worked after that. However,
now it is stuck again with the same "still hasn't started"
error message, and in my case the error when running the worker
command is now "Address already in use" for the supervisor
port.

So, what is the error when you directly run the worker command?



On Tue, Aug 26, 2014 at 9:39 PM, Naga Vij
<[5]nvbuc...@gmail.com> wrote:

I fail to understand why that should happen, as testing with
LocalCluster goes through fine.

I did a clean fresh start to figure out what could be
happening, and here are my observations -

- fresh clean start: cleanup in zk (rmr /storm), and /bin/rm
-fr {storm's tmp dir}
- used local pseudo cluster on my mac
- nimbus process started fine
- supervisor process started fine
- ensured topology works fine with (the embedded) LocalCluster
- topology was then submitted to the local pseudo cluster on my
mac; that's when I see `still hasn't started` messages in the
supervisor terminal window

When submitting the topology to the local pseudo cluster, I had
to add jars to overcome these ...

Caused by: java.lang.ClassNotFoundException:
storm.kafka.BrokerHosts
Caused by: java.lang.ClassNotFoundException:
kafka.api.OffsetRequest
Caused by: java.lang.ClassNotFoundException: scala.Product

Above were overcome by adding these to lib dir -

storm-kafka-0.9.2-incubating.jar
kafka_2.10-0.8.1.1.jar
scala-library-2.10.1.jar

I have tried the command in the log as well; it hasn't helped.

What am I missing?


On Mon, Aug 25, 2014 at 11:41 PM, Vikas Agarwal
<[6]vi...@infoobjects.com> wrote:

>> dd7c588e-5fa0-4c4b-96ed-de0d420001e9 still hasn't started <<

This is the clue. One of your topology's workers is failing to
start. You should see the worker command before these lines in
the same log file. Just try running it directly on the console
and it will show the exact error.



On Tue, Aug 26, 2014 at 11:45 AM, Naga Vij
<[7]nvbuc...@gmail.com> wrote:

Hello,

I am trying out Storm 0.9.2-incubating pseudo cluster (on just
one box) on these two systems -

> cat /etc/redhat-release
CentOS release 6.3 (Final)

and

> sw_vers
ProductName:Mac OS X
ProductVersion:10.9.2
BuildVersion:13C64

After starting supervisor, I notice it is not listening on the
configured port (6700) -

> nc -zv localhost 6700
nc: connectx to localhost port 6700 (tcp) failed: Connection
refused

When I submit topology, I see this scrolling message in the
terminal window for supervisor -

23:11:44.532 [Thread-2] INFO  backtype.storm.daemon.supervisor
- dd7c588e-5fa0-4c4b-96ed-de0d420001e9 still hasn't started

I don't see any worker id in UI.  No error in logs.

Any idea what could be happening?

Thanks in advance.

Naga




--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[8]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
[9]+1 (408) 988-2000 Work
[10]+1 (408) 716-2726 Fax





--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[11]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
[12]+1 (408) 988-2000 Work
[13]+1 (408) 716-2726 Fax

References

1. https://github.com/apache/incubator-storm/tree/v0.9.2-incubating
2. mailto:nvbuc...@gmail.com
3. mailto:nvbuc...@gmail.com
4. mailto:vi...@infoobjects.com
5. mailto:nvbuc...@gmail.com
6. mailto:vi...@infoobjects.com
7. mailto:nvbuc...@gmail.com
8. http://www.infoobjects.com/
9. tel:%2B1%20%28408%29%20988-2000
  10. tel:%2B1%20%28408%29%20716-2726
  11. http://www.infoobjects.com/
  12. tel:%2B1%20%28408%29%20988-2000
  13. tel:%2B1%20%28408%29%20716-2726
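
The `nc -zv localhost 6700` check in the thread above can also be
scripted. One caveat worth noting: ports listed under
supervisor.slots.ports are bound by the worker JVMs, not by the
supervisor daemon itself, so the check only succeeds once a
topology's worker is actually running on that slot. This is an
illustrative sketch, not part of Storm:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Worker slot ports are bound by worker JVMs once a topology is
# assigned, so "Connection refused" here can just mean that no
# worker is running yet.
for port in (6700, 6701):
    print(port, "open" if port_open("localhost", port) else "closed")
```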


Re: Location of last error details seen in storm UI

2014-08-25 Thread Harsha
The current version of Storm doesn't have a way to define the
Storm log dir. One workaround is to edit logback/cluster.xml
under the Storm installation. The upcoming release will have a
config option, storm.log.dir, to redirect the logs from the
default dir.

-Harsha





On Mon, Aug 25, 2014, at 08:16 AM, Jason Kania wrote:

Thanks for that. I looked to find which property or
configuration parameter sets it but could not find it. Is there
such a parameter?

Thanks,

Jason
  __

From: Harsha 
To: user@storm.incubator.apache.org
Sent: Monday, August 25, 2014 11:10:17 AM
Subject: Re: Location of last error details seen in storm UI

Jason,
   The default is under your Storm installation; check for the
logs dir.
-Harsha




On Mon, Aug 25, 2014, at 07:54 AM, Jason Kania wrote:

Thanks for the response.

Unfortunately, I have no /var/log/storm on my system. Where is
the path to these logs specified. I am guessing it is pointing
somewhere else by default.

Thanks,

Jason
  __

From: Vikas Agarwal 
To: user@storm.incubator.apache.org
Cc: Jason Kania 
Sent: Monday, August 25, 2014 10:34:00 AM
Subject: Re: Location of last error details seen in storm UI

Better would be to view log files under /var/log/storm. Any
issue with worker would be logged into
/var/log/storm/worker-6700.log
and /var/log/storm/worker-6701.log.




On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell
<[1]vincent.russ...@gmail.com> wrote:

Click on the link of the bolt/spout that is all the way on the
left side.



On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania
<[2]jason.ka...@ymail.com> wrote:

Hello,

I am trying to get more detail on an error that is being
displayed in the Storm UI under the Last Error column but
unfortunately, I am not seeing it captured anywhere else. Does
anyone know where this text could be seen? The problem is that
the error text is insufficient to diagnose the problem.

Thanks,

Jason





--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[3]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. mailto:vincent.russ...@gmail.com
2. mailto:jason.ka...@ymail.com
3. http://www.infoobjects.com/


Re: Location of last error details seen in storm UI

2014-08-25 Thread Harsha
Jason,

   The default is under your Storm installation; check for the
logs dir.

-Harsha





On Mon, Aug 25, 2014, at 07:54 AM, Jason Kania wrote:

Thanks for the response.

Unfortunately, I have no /var/log/storm on my system. Where is
the path to these logs specified. I am guessing it is pointing
somewhere else by default.

Thanks,

Jason
  __

From: Vikas Agarwal 
To: user@storm.incubator.apache.org
Cc: Jason Kania 
Sent: Monday, August 25, 2014 10:34:00 AM
Subject: Re: Location of last error details seen in storm UI

Better would be to view log files under /var/log/storm. Any
issue with worker would be logged into
/var/log/storm/worker-6700.log
and /var/log/storm/worker-6701.log.




On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell
<[1]vincent.russ...@gmail.com> wrote:

Click on the link of the bolt/spout that is all the way on the
left side.



On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania
<[2]jason.ka...@ymail.com> wrote:

Hello,

I am trying to get more detail on an error that is being
displayed in the Storm UI under the Last Error column but
unfortunately, I am not seeing it captured anywhere else. Does
anyone know where this text could be seen? The problem is that
the error text is insufficient to diagnose the problem.

Thanks,

Jason





--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[3]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. mailto:vincent.russ...@gmail.com
2. mailto:jason.ka...@ymail.com
3. http://www.infoobjects.com/


Re: Create multiple supervisors on same node

2014-08-22 Thread Harsha
Tao,
   I tried the above steps and am able to run two supervisors on
   the same node. Did you check the supervisor logs under storm2?
   If it didn't create a local_dir/storm dir, then your supervisor
   daemon might not be running. Check the logs for any
   errors.
-Harsha

On Fri, Aug 22, 2014, at 09:20 AM, Yu, Tao wrote:
> Thanks Harsha!
> 
> I tried your way, and here is what I have (major parts) in my storm.yaml:
> 
>  storm.local.dir: "/opt/grid/tao/storm/storm-0.8.2/local_data/storm"
>  supervisor.slots.ports:
> - 6700
> - 6701
> 
> 1) I created the 1st supervisor, and I can see specified  sub-folder
> "local_data/storm/supervisor" was created under "
> opt/grid/tao/storm/storm-0.8.2".  That's OK!
> 
> 2) then I copied the entire "storm-0.8.2" folder to a new "storm2"
> ("/opt/grid/tao/storm/storm2")
> 
> 3) delete the sub-folder "local_data" under "storm2"
> 
> 4) updated the storm.yaml under "storm2" with below change:
> 
>  storm.local.dir: "/opt/grid/tao/storm/storm2/local_data/storm"
>  supervisor.slots.ports:
> - 8700
> - 8701
> 
> 5) under "storm2", create a new supervisor.  
> 
> Then the new supervisor still has the 1st supervisor's ID.  And under
> "storm2", the sub-folder "local_data/storm" was not created.
> 
> Does storm still use the 1st storm home directory ("storm/storm-0.8.2")
> "local_data" folder?
> 
> Thanks,
> -Tao
> 
> -Original Message-
> From: Harsha [mailto:st...@harsha.io] 
> Sent: Friday, August 22, 2014 11:28 AM
> To: user@storm.incubator.apache.org
> Subject: Re: Create multiple supervisors on same node
> 
> Tao,
>  you need to delete the storm-local dir under your copied over storm
>  dir ( "storm2"). Otherwise it will still pick up the same
>  supervisor-id.
> -Harsha
> 
> On Fri, Aug 22, 2014, at 08:16 AM, Yu, Tao wrote:
> > Thanks Derek!
> > 
> > I tried your suggestion, copied the entire storm home directory 
> > (which, in my case, is "storm-0.8.2") to a new directory "storm2", 
> > then in "storm2" directory, I changed the conf/storm.yaml with 
> > different ports, and tried to create a new supervisor. Still, got the 
> > same supervisor ID as the 1st one (which I created from "storm-0.8.2" 
> > directory).
> > 
> > Did I do anything incorrectly?
> > 
> > -Tao
> > 
> > -Original Message-
> > From: Derek Dagit [mailto:der...@yahoo-inc.com]
> > Sent: Friday, August 22, 2014 11:01 AM
> > To: user@storm.incubator.apache.org
> > Subject: Re: Create multiple supervisors on same node
> > 
> > The two supervisors are sharing the same state, and that is how they 
> > get the same randomly-generated ID.
> > 
> > If I recall correctly, the default state directory is created in the 
> > current working directory of the process, so that is whatever 
> > directory you happen to be in when you start the supervisor.
> > 
> > I think probably a good thing to do is copy the entire storm home 
> > directory, change the storm.yaml in the copy to be configured with 
> > different ports as you tried, and make sure to cd into the appropriate 
> > directory when you launch the supervisor.
> > 
> > --
> > Derek
> > 
> > On 8/22/14, 9:49, Yu, Tao wrote:
> > > Hi all,
> > >
> > > Anyone knows what's the requirement to generate multiple supervisors on 
> > > the same node (for same topology)?  I can create the 1st supervisor, then 
> > > I update the "supervisor.slots.ports" to different ports, and tried to 
> > > create the 2nd supervisor on same node, it ends up creating a new 
> > > supervisor but with same supervisor ID as the 1st one, so it still only 
> > > has one supervisor on that node and storm UI shows 1 supervisor as well.  
> > > Any suggestion on how to create the 2nd, 3rd supervisor on the same node?
> > >
> > > Any help is appreciated!
> > >
> > > thanks,
> > > -Tao
> > >
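
The copy / clean-state / re-port steps Derek and Harsha describe
above can be sketched as a small script. This is a hedged
illustration, not an official tool: the paths, port numbers, and
the assumption that local state lives in a `storm-local` or
`local_data` folder inside the install dir all follow the
examples quoted in this thread.

```python
import os
import re
import shutil

def clone_supervisor(src, dst, new_ports, new_local_dir):
    """Copy a Storm install, drop its local state, and rewrite ports/local dir."""
    shutil.copytree(src, dst)
    # Delete the copied local state so the new supervisor gets a
    # fresh randomly-generated ID instead of reusing the old one.
    for state in ("storm-local", "local_data"):
        shutil.rmtree(os.path.join(dst, state), ignore_errors=True)
    yaml_path = os.path.join(dst, "conf", "storm.yaml")
    with open(yaml_path) as f:
        text = f.read()
    # Naive line-based rewrite of storm.local.dir and the slot ports.
    text = re.sub(r'storm\.local\.dir:.*',
                  'storm.local.dir: "%s"' % new_local_dir, text)
    text = re.sub(r'supervisor\.slots\.ports:(\n\s*-\s*\d+)+',
                  'supervisor.slots.ports:'
                  + ''.join('\n    - %d' % p for p in new_ports),
                  text)
    with open(yaml_path, "w") as f:
        f.write(text)
```

After running this, start the second supervisor from inside the
new directory so it picks up its own state dir, e.g.
`cd /opt/grid/tao/storm/storm2 && bin/storm supervisor`.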


Re: Create multiple supervisors on same node

2014-08-22 Thread Harsha
Tao,
 you need to delete the storm-local dir under your copied-over
 storm dir ("storm2"); otherwise it will still pick up the same
 supervisor-id.
-Harsha

On Fri, Aug 22, 2014, at 08:16 AM, Yu, Tao wrote:
> Thanks Derek!
> 
> I tried your suggestion, copied the entire storm home directory (which,
> in my case, is "storm-0.8.2") to a new directory "storm2", then in
> "storm2" directory, I changed the conf/storm.yaml with different ports,
> and tried to create a new supervisor. Still, got the same supervisor ID
> as the 1st one (which I created from "storm-0.8.2" directory).
> 
> Did I do anything incorrectly?
> 
> -Tao
> 
> -Original Message-
> From: Derek Dagit [mailto:der...@yahoo-inc.com] 
> Sent: Friday, August 22, 2014 11:01 AM
> To: user@storm.incubator.apache.org
> Subject: Re: Create multiple supervisors on same node
> 
> The two supervisors are sharing the same state, and that is how they get
> the same randomly-generated ID.
> 
> If I recall correctly, the default state directory is created in the
> current working directory of the process, so that is whatever directory
> you happen to be in when you start the supervisor.
> 
> I think probably a good thing to do is copy the entire storm home
> directory, change the storm.yaml in the copy to be configured with
> different ports as you tried, and make sure to cd into the appropriate
> directory when you launch the supervisor.
> 
> -- 
> Derek
> 
> On 8/22/14, 9:49, Yu, Tao wrote:
> > Hi all,
> >
> > Anyone knows what's the requirement to generate multiple supervisors on the 
> > same node (for same topology)?  I can create the 1st supervisor, then I 
> > update the "supervisor.slots.ports" to different ports, and tried to create 
> > the 2nd supervisor on same node, it ends up creating a new supervisor but 
> > with same supervisor ID as the 1st one, so it still only has one supervisor 
> > on that node and storm UI shows 1 supervisor as well.  Any suggestion on 
> > how to create the 2nd, 3rd supervisor on the same node?
> >
> > Any help is appreciated!
> >
> > thanks,
> > -Tao
> >


Re: Storm Training/VM

2014-08-20 Thread Harsha
Hi,

For vms we have storm vagrant setup. More info on vagrant
here [1]http://www.vagrantup.com/.

You can try the vagrant setup
here [2]https://github.com/ptgoetz/storm-vagrant.

I noticed some issues getting the above running with
VirtualBox. In case you run into any of those, try this
one: [3]https://github.com/harshach/storm-vagrant.

-Harsha





On Wed, Aug 20, 2014, at 02:49 PM, Kreutzer, Edward wrote:

Two things:

1.Outside of the site tutorials and the few books out there,
can anyone point to some good/sanctioned training for Storm?

2.Also, often platforms have working VMs that can be downloaded
and tried out for new users.  Is there one out there, or are
there plans for the aforementioned?

Thanks for any feedback/insight.

Ted Kreutzer

Senior Database Developer/Engineer | IMT – Hadoop|
charlesSCHWAB

WARNING: All email sent to or from the Charles Schwab corporate
email system is subject to archiving, monitoring and/or review
by Schwab personnel.

References

1. http://www.vagrantup.com/
2. https://github.com/ptgoetz/storm-vagrant
3. https://github.com/harshach/storm-vagrant


Re: Reading config.properties file

2014-08-20 Thread Harsha
Kushan,

   I guess it's not able to find the config.properties file
when you deploy the topology. How are you packaging it? One
option is to include it in your jar's resources; also check
where you are calling properties.load() in your topology.

-Harsha





On Wed, Aug 20, 2014, at 02:20 PM, Kushan Maskey wrote:

I am quite new to the cluster environment, as is the entire
concept of Storm/Kafka. I have everything running as it should,
but I am struggling to read a config.properties file that holds
connection information for the Kafka/Cassandra/Solr/Couch
databases. I created a config.properties file with all this
information, load it in the topology, and set the properties as
static variables in a class. Bolts call a CassandraClient class
that I wrote to load any data that comes to the KafkaSpout;
CassandraClient gets the Cassandra host and other information
from the properties file.

This works perfectly fine locally. But when I deployed it on
the server, all these config variables are null, meaning that
cassandra host and other information are all null. If anyone
has any idea how to tackle this would be really great. Thanks.

--
Kushan Maskey
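
A quick way to check Harsha's first guess, i.e. whether
config.properties actually made it into the deployed topology jar
(a jar is just a zip archive), is the sketch below. The jar path
in the comment is a hypothetical example:

```python
import zipfile

def jar_contains(jar_path, name="config.properties"):
    """Return the jar entries ending in `name`.

    Returns a list because the file may sit under a prefix such
    as resources/ depending on how the jar was packaged.
    """
    with zipfile.ZipFile(jar_path) as jar:
        return [n for n in jar.namelist() if n.endswith(name)]

# e.g. jar_contains("target/mytopology-jar-with-dependencies.jar")
```

If this returns an empty list, the properties file was never
packaged, which would explain the null config values on the
cluster even though the local run (reading from the working
directory) succeeds.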


Re: Storm PROD Server log folder configuration issue

2014-08-18 Thread Harsha
Hi Yiming,

  Looks like we only have a tag for the last release.
You can check out a tag: "git checkout
tags/[1]v0.9.2-incubating".

Thanks,

-Harsha





On Sun, Aug 17, 2014, at 08:15 PM, Fang, Yiming  wrote:

Hi Harsha,

Thanks a lot for the help.

As long as we are working on 0.9.2. I will try building one
storm-core-0.9.2.jar to replace existing one on server.

BTW just to confirm, we do not have a 0.9.2 branch that I could
check out from GIT? Seems only thing we get is a 0.9.2 tag.

Regards,

Yiming

From: Harsha [mailto:st...@harsha.io]
Sent: Friday, August 15, 2014 11:44 PM
To: user@storm.incubator.apache.org
Subject: Re: Storm PROD Server log folder configuration issue

Hi Yiming,

   This is a known bug in 0.9.2:
[2]https://issues.apache.org/jira/browse/STORM-279.

The bug was that the supervisor was not forwarding the
storm.server.log.path opt to the worker.

[3]https://github.com/apache/incubator-storm/commit/598acf97109
20028ed0c240dc6add02a895f2f48#diff-8a8d97993ededcb27c19504b9e88
9e6f . From 0.9.3 users can define STORM_LOG_DIR and all the
logs will be in that location. By default this would be under
STORM_HOME/logs.

-Harsha

On Fri, Aug 15, 2014, at 08:32 AM, Fang, Yiming  wrote:

Hi All,

I am currently working on a PROD server setup task on storm
0.9.2 .

Trying to configure storm server log folder inside storm python
script:

def exec_storm_class(klass, jvmtype="-server", jvmopts=[],
extrajars=[], args=[], fork=False):

global CONFFILE

all_args = [

"java", jvmtype, get_config_opts(),

"-Dstorm.home=" + STORM_DIR,

"-Dstorm.server.log.path=" + STORM_SERVER_LOG_PATH,

"-Djava.library.path=" + confvalue("java.library.path",
extrajars),

"-Dstorm.conf.file=" + CONFFILE,

"-cp", get_classpath(extrajars),

] + jvmopts + [klass] + list(args)

print "Running: " + " ".join(all_args)

if fork:

os.spawnvp(os.P_WAIT, "java", all_args)

else:

os.execvp("java", all_args) # replaces the current
process and never returns

I pass STORM_SERVER_LOG_PATH as a system env parameter, and
then inside cluster.xml I replace the file element with the new
config:





<file>${storm.server.log.path}/logs/${logfile.name}</file>

Result:

I could have all following logs in new place:

access.log, metrics.log, ui.log, nimbus.log, supervisor.log

but the worker-6702 6703 log just in the original server
location at the following place

when I setup my topology:

bash-3.2$ cd /opt/gpf/realtime/storm/0.9.2/bin

bash-3.2$ ls

storm  storm.cmd  storm-config.cmd
storm.server.log.path_IS_UNDEFINED

bash-3.2$ cd storm.server.log.path_IS_UNDEFINED/

bash-3.2$ ls

logs

bash-3.2$ cd logs

bash-3.2$ ls

access.log  metrics.log  worker-6702.log  worker-6703.log

Can anyone help?

Thanks and regards,

Yiming

References

1. 
https://github.com/apache/incubator-storm/commit/24d4a14de310cbbfebdc4a50d8cc9d86f9943087
2. 
https://urldefense.proofpoint.com/v1/url?u=https://issues.apache.org/jira/browse/STORM-279&k=wdHsQuqY0Mqq1fNjZGIYnA%3D%3D%0A&r=CRkaly%2Bvupx2pvTJzswpCvi%2F4%2BxH3geu9hee3ZD15Go%3D%0A&m=4QJR%2Fbp%2B9R22nGkeLzSyLrtQRw7ypah7qAvbo%2F%2F6o0c%3D%0A&s=e36e79c44edc3fd64d244944e74fb86355fbfb0c3db94b53eb09f08a437370e5
3. 
https://urldefense.proofpoint.com/v1/url?u=https://github.com/apache/incubator-storm/commit/598acf9710920028ed0c240dc6add02a895f2f48%23diff-8a8d97993ededcb27c19504b9e889e6f&k=wdHsQuqY0Mqq1fNjZGIYnA%3D%3D%0A&r=CRkaly%2Bvupx2pvTJzswpCvi%2F4%2BxH3geu9hee3ZD15Go%3D%0A&m=4QJR%2Fbp%2B9R22nGkeLzSyLrtQRw7ypah7qAvbo%2F%2F6o0c%3D%0A&s=1dfdba91e6603d009cbf355850a7787a882e0f351baf79c4795080d964005637
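
Until the STORM-279 fix described above lands, one workaround in
the Python launcher itself is to give storm.server.log.path a
default when the env variable is not exported, so worker
processes don't end up under a
`storm.server.log.path_IS_UNDEFINED` directory. A sketch, with
variable names following the script quoted in this thread and an
assumed install dir:

```python
import os

STORM_DIR = "/opt/gpf/realtime/storm/0.9.2"  # assumed install dir

def server_log_path(environ=os.environ):
    """Fall back to STORM_DIR/logs when STORM_SERVER_LOG_PATH is unset."""
    return environ.get("STORM_SERVER_LOG_PATH",
                       os.path.join(STORM_DIR, "logs"))

# The JVM opt then always carries a concrete value:
log_opt = "-Dstorm.server.log.path=" + server_log_path()
```

This only papers over the symptom for the launching process; the
real fix (the supervisor forwarding the opt to worker JVMs) is
the commit linked above.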


Re: How do i unregister from the group? Too many emails...:)

2014-08-17 Thread Harsha
You can send an email to user-unsubscr...@storm.incubator.apache.org
to unsubscribe.
More info:
https://storm.incubator.apache.org/community.html

On Sun, Aug 17, 2014, at 10:58 AM, Joe Roberts wrote:
> 
> 
> 
> Sent from my iPhone


Re: Storm PROD Server log folder configuration issue

2014-08-15 Thread Harsha
Hi Yiming,

   This is a known bug in 0.9.2:
[1]https://issues.apache.org/jira/browse/STORM-279.

The bug was that the supervisor was not forwarding the
storm.server.log.path opt to the worker.

[2]https://github.com/apache/incubator-storm/commit/598acf97109
20028ed0c240dc6add02a895f2f48#diff-8a8d97993ededcb27c19504b9e88
9e6f . From 0.9.3 users can define STORM_LOG_DIR and all the
logs will be in that location. By default this would be under
STORM_HOME/logs.

-Harsha





On Fri, Aug 15, 2014, at 08:32 AM, Fang, Yiming  wrote:

Hi All,



I am currently working on a PROD server setup task on storm
0.9.2 .

Trying to configure storm server log folder inside storm python
script:



def exec_storm_class(klass, jvmtype="-server", jvmopts=[],
extrajars=[], args=[], fork=False):

global CONFFILE

all_args = [

"java", jvmtype, get_config_opts(),

"-Dstorm.home=" + STORM_DIR,

"-Dstorm.server.log.path=" + STORM_SERVER_LOG_PATH,

"-Djava.library.path=" + confvalue("java.library.path",
extrajars),

"-Dstorm.conf.file=" + CONFFILE,

"-cp", get_classpath(extrajars),

] + jvmopts + [klass] + list(args)

print "Running: " + " ".join(all_args)

if fork:

os.spawnvp(os.P_WAIT, "java", all_args)

else:

os.execvp("java", all_args) # replaces the current
process and never returns



I pass STORM_SERVER_LOG_PATH as a system env parameter, and
then inside cluster.xml I replace the file element with the new
config:





<file>${storm.server.log.path}/logs/${logfile.name}</file>



Result:

I could have all following logs in new place:

access.log, metrics.log, ui.log, nimbus.log, supervisor.log



but the worker-6702 6703 log just in the original server
location at the following place

when I setup my topology:



bash-3.2$ cd /opt/gpf/realtime/storm/0.9.2/bin

bash-3.2$ ls

storm  storm.cmd  storm-config.cmd
storm.server.log.path_IS_UNDEFINED

bash-3.2$ cd storm.server.log.path_IS_UNDEFINED/

bash-3.2$ ls

logs

bash-3.2$ cd logs

bash-3.2$ ls

access.log  metrics.log  worker-6702.log  worker-6703.log



Can anyone help?



Thanks and regards,

Yiming

References

1. https://issues.apache.org/jira/browse/STORM-279
2. 
https://github.com/apache/incubator-storm/commit/598acf9710920028ed0c240dc6add02a895f2f48#diff-8a8d97993ededcb27c19504b9e889e6f


Re: java.io.InvalidClassException: backtype.storm.daemon.common.WorkerHeartbeat

2014-08-07 Thread Harsha
Make sure you bring down your topologies and stop all the Storm
daemons and ZooKeeper.
From your config it looks like /opt/storm-local is your dir.
Delete this dir's contents, then check your ZooKeeper config
zoo.cfg for the dataDir location and delete its contents.
Restart your ZooKeeper and Storm.
-Harsha

On Thu, Aug 7, 2014, at 09:42 PM, Shun KAWAHARA wrote:
> Thank you for answering my questions.
> 
> However, I don't know how to clear storm-local and zookeeper data.
> Sorry, please tell me how to clear theirs.
> 
> Shun.
> 
> 2014-08-08 13:17 GMT+09:00 Harsha :
> > "local class
> > incompatible: stream classdesc serialVersionUID =
> > -6996865048894131652, local class serialVersionUID =
> > 2074174925015471843"
> >
> > The above error usually happens when the storm versions (usually
> > dependent jars ) differ.
> > If its new installation make sure you've same version of storm on every
> > node.
> > If you are upgrading clear storm-local and zookeeper data and restart
> > the daemons.
> > If it persists I'll try clearing storm-local and zookeeper data.
> > -Harsha
> >
> > On Thu, Aug 7, 2014, at 08:58 PM, Shun KAWAHARA wrote:
> >> Hello.
> >>
> >> I started Storm by the following constitution.
> >>
> >> server1: nimbus, supervisor
> >> server2: supervisor
> >> server3: supervisor
> >> server4: supervisor
> >>
> >> However, An error has occurred in only server4.
> >> Supervisor's log of server4 is following.
> >> Please tell me the solution.
> >>
> >> Shun
> >>
> >>
> >> -
> >>
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52
> >> GMT
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> environment:host.name=server4
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> environment:java.version=1.6.0_24
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> environment:java.vendor=Sun Microsystems Inc.
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> environment:java.class.path=/opt/storm/storm-core-*.jar:/opt/storm/storm-netty-*.jar:/opt/storm/storm-console-logging-*.jar:/opt/storm/lib/math.numeric-tower-0.0.1.jar:/opt/storm/lib/commons-io-2.4.jar:/opt/storm/lib/tools.logging-0.2.3.jar:/opt/storm/lib/objenesis-1.2.jar:/opt/storm/lib/reflectasm-1.07-shaded.jar:/opt/storm/lib/storm-core-0.9.2-incubating.jar:/opt/storm/lib/netty-3.6.3.Final.jar:/opt/storm/lib/meat-locker-0.3.1.jar:/opt/storm/lib/compojure-1.1.3.jar:/opt/storm/lib/ring-core-1.1.5.jar:/opt/storm/lib/javax.mail.jar:/opt/storm/lib/ring-servlet-0.3.11.jar:/opt/storm/lib/joda-time-2.0.jar:/opt/storm/lib/httpclient-4.3.3.jar:/opt/storm/lib/curator-client-2.4.0.jar:/opt/storm/lib/commons-logging-1.1.3.jar:/opt/storm/lib/junit-3.8.1.jar:/opt/storm/lib/minlog-1.2.jar:/opt/storm/lib/commons-lang-2.5.jar:/opt/storm/lib/disruptor-2.10.1.jar:/opt/storm/lib/zookeeper-3.4.5.jar:/opt/storm/lib/clj-stacktrace-0.2.4.jar:/opt/storm/lib/ring-jetty-adapter-0.3.11.jar:/opt/storm/lib/clout-1.0.1.jar:/opt/storm/lib/commons-beanutils-1.8.3.jar:/opt/storm/lib/tools.macro-0.1.0.jar:/opt/storm/lib/commons-exec-1.1.jar:/opt/storm/lib/kryo-2.21.jar:/opt/storm/lib/logback-core-1.0.6.jar:/opt/storm/lib/httpcore-4.3.2.jar:/opt/storm/lib/curator-framework-2.4.0.jar:/opt/storm/lib/servlet-api-2.5.jar:/opt/storm/lib/jgrapht-core-0.9.0.jar:/opt/storm/lib/clojure-1.5.1.jar:/opt/storm/lib/jetty-util-6.1.26.jar:/opt/storm/lib/servlet-api-2.5-20081211.jar:/opt/storm/lib/carbonite-1.4.0.jar:/opt/storm/lib/hiccup-0.3.6.jar:/opt/storm/lib/javamail.jar:/opt/storm/lib/netty-3.2.2.Final.jar:/opt/storm/lib/json-lib-2.4-jdk15.jar:/opt/storm/lib/jline-2.11.jar:/opt/storm/lib/chill-java-0.3.5.jar:/opt/storm/lib/clj-time-0.4.1.jar:/opt/storm/lib/commons-codec-1.6.jar:/opt/storm/lib/tools.cli-0.2.4.jar:/opt/storm/lib/slf4j-api-1.6.5.jar:/opt/storm/lib/core.incubator-0.1.0.jar:/opt/storm/lib/snakeyaml-1.11.jar:/opt/storm/lib/logback-classic-1.0.6.jar:/opt/storm/lib/log4j-over-slf4j-1.6.6.jar:/opt
/storm/lib/guava-13.0.jar:/opt/storm/lib/jetty-6.1.26.jar:/opt/storm/lib/activation.jar:/opt/storm/lib/json-simple-1.1.jar:/opt/storm/lib/commons-fileupload-1.2.1.jar:/opt/storm/lib/asm-4.0.jar:/opt/storm/lib/ezmorph-1.0.6.jar:/opt/storm/lib/commons-collections-3.2.1.jar:/opt/storm/lib/stanford-ner.jar:/opt/storm/lib/ring-devel-0.3.11.jar::/opt/storm/conf
> >> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> >> envir
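
The cleanup Harsha describes above (with the daemons stopped
first) can be sketched as follows. The storm-local path and the
zoo.cfg location are assumptions taken from this thread; adjust
them to your installation:

```python
import os
import shutil

def zk_data_dir(zoo_cfg):
    """Read dataDir from a zoo.cfg-style key=value file."""
    with open(zoo_cfg) as f:
        for line in f:
            line = line.strip()
            if line.startswith("dataDir="):
                return line.split("=", 1)[1]
    return None

def clear_state(storm_local, zoo_cfg):
    """Wipe Storm's local state and ZooKeeper's data dir.

    All Storm daemons and ZooKeeper must be stopped before
    running this, or they will recreate/hold state underneath us.
    """
    for d in (storm_local, zk_data_dir(zoo_cfg)):
        if d and os.path.isdir(d):
            shutil.rmtree(d)
            os.makedirs(d)  # recreate the directory empty

# e.g. clear_state("/opt/storm-local", "/etc/zookeeper/zoo.cfg")
```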

Re: Storm Connection Refused

2014-08-07 Thread Harsha
Make sure iptables is not the issue. From which host are you
trying to deploy the jar?

It shouldn't be the issue, but make sure the nimbus host is
reachable by the other servers.

"I am also unable to see the storm ui on nimbus.

could anybody please help me with this issue."

Have you started all the required daemons? You can access the UI
at hostname:8080.

Check the nimbus logs to make sure there are no errors and it is
running.

Lastly, check
storm-deploy: [1]https://github.com/nathanmarz/storm-deploy .
It looks like you need to pass a private key to all the hosts to
start the services; I don't have experience deploying services
on AWS.

-Harsha





On Thu, Aug 7, 2014, at 09:23 PM, Chandrahas Gurram wrote:

I have four instances running.
1 nimbus, 2 supervisor and 1 zookeeper
i have used the command storm jar pathtojar mainclass arguments
I have checked the storm.yaml file under .storm and it has
correct details of hosts.



ThankYou,
G V Chandrahas Raj


On Fri, Aug 8, 2014 at 9:30 AM, Harsha <[2]st...@harsha.io>
wrote:

Hi Chandrahas,
Can you provide a bit more detail on what your cluster looks
like, or is it a single host?
"I am deploying the jar on 6627 port and i have kept it open."
Are you using the storm jar command or doing it through the
Thrift API?
-Harsha


On Thu, Aug 7, 2014, at 08:24 PM, Chandrahas Gurram wrote:


Hi,

I have deployed storm on an aws cluster.
When I try to deploy my jar on the cluster it throws the
following error
Exception in thread "main" java.lang.RuntimeException:
org.apache.thrift7.transport.TTransportException:
java.net.ConnectException: Connection refused
at
backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusCli
ent.java:38)
at
backtype.storm.StormSubmitter.submitTopology(StormSubmitter.jav
a:87)
at
backtype.storm.StormSubmitter.submitTopology(StormSubmitter.jav
a:58)
at
com.peel.kinesisStorm.SampleTopology.main(SampleTopology.java:8
8)
Caused by: org.apache.thrift7.transport.TTransportException:
java.net.ConnectException: Connection refused
at
org.apache.thrift7.transport.TSocket.open(TSocket.java:183)
at
org.apache.thrift7.transport.TFramedTransport.open(TFramedTrans
port.java:81)
at
backtype.storm.security.auth.SimpleTransportPlugin.connect(Simp
leTransportPlugin.java:83)
at
backtype.storm.security.auth.ThriftClient.(ThriftClient.j
ava:63)
at
backtype.storm.utils.NimbusClient.(NimbusClient.java:47)
at
backtype.storm.utils.NimbusClient.(NimbusClient.java:43)
at
backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusCli
ent.java:36)
... 3 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketI
mpl.java:339)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlain
SocketImpl.java:200)
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImp
l.java:182)
at
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at
org.apache.thrift7.transport.TSocket.open(TSocket.java:178)
... 9 more

I have checked the ports of nimbus. I am deploying the jar on
6627 port and i have kept it open.
I am also unable to see the storm ui on nimbus.
could anybody please help me with this issue.
storm-version:0.92


ThankYou,
Chandra

References

1. https://github.com/nathanmarz/storm-deploy
2. mailto:st...@harsha.io


Re: java.io.InvalidClassException: backtype.storm.daemon.common.WorkerHeartbeat

2014-08-07 Thread Harsha
"local class
incompatible: stream classdesc serialVersionUID =
-6996865048894131652, local class serialVersionUID =
2074174925015471843"

The above error usually happens when the Storm versions (usually
the dependent jars) differ.
If it's a new installation, make sure you have the same version
of Storm on every node.
If you are upgrading, clear the storm-local and ZooKeeper data
and restart the daemons.
If it persists, I'd try clearing the storm-local and ZooKeeper
data.
-Harsha

On Thu, Aug 7, 2014, at 08:58 PM, Shun KAWAHARA wrote:
> Hello.
> 
> I started Storm by the following constitution.
> 
> server1: nimbus, supervisor
> server2: supervisor
> server3: supervisor
> server4: supervisor
> 
> However, An error has occurred in only server4.
> Supervisor's log of server4 is following.
> Please tell me the solution.
> 
> Shun
> 
> 
> -
> 
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52
> GMT
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:host.name=server4
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.version=1.6.0_24
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.vendor=Sun Microsystems Inc.
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.class.path=/opt/storm/storm-core-*.jar:/opt/storm/storm-netty-*.jar:/opt/storm/storm-console-logging-*.jar:/opt/storm/lib/math.numeric-tower-0.0.1.jar:/opt/storm/lib/commons-io-2.4.jar:/opt/storm/lib/tools.logging-0.2.3.jar:/opt/storm/lib/objenesis-1.2.jar:/opt/storm/lib/reflectasm-1.07-shaded.jar:/opt/storm/lib/storm-core-0.9.2-incubating.jar:/opt/storm/lib/netty-3.6.3.Final.jar:/opt/storm/lib/meat-locker-0.3.1.jar:/opt/storm/lib/compojure-1.1.3.jar:/opt/storm/lib/ring-core-1.1.5.jar:/opt/storm/lib/javax.mail.jar:/opt/storm/lib/ring-servlet-0.3.11.jar:/opt/storm/lib/joda-time-2.0.jar:/opt/storm/lib/httpclient-4.3.3.jar:/opt/storm/lib/curator-client-2.4.0.jar:/opt/storm/lib/commons-logging-1.1.3.jar:/opt/storm/lib/junit-3.8.1.jar:/opt/storm/lib/minlog-1.2.jar:/opt/storm/lib/commons-lang-2.5.jar:/opt/storm/lib/disruptor-2.10.1.jar:/opt/storm/lib/zookeeper-3.4.5.jar:/opt/storm/lib/clj-stacktrace-0.2.4.jar:/opt/storm/lib/ring-jetty-adapter-0.3.11.jar:/opt/storm/lib/clout-1.0.1.jar:/opt/storm/lib/commons-beanutils-1.8.3.jar:/opt/storm/lib/tools.macro-0.1.0.jar:/opt/storm/lib/commons-exec-1.1.jar:/opt/storm/lib/kryo-2.21.jar:/opt/storm/lib/logback-core-1.0.6.jar:/opt/storm/lib/httpcore-4.3.2.jar:/opt/storm/lib/curator-framework-2.4.0.jar:/opt/storm/lib/servlet-api-2.5.jar:/opt/storm/lib/jgrapht-core-0.9.0.jar:/opt/storm/lib/clojure-1.5.1.jar:/opt/storm/lib/jetty-util-6.1.26.jar:/opt/storm/lib/servlet-api-2.5-20081211.jar:/opt/storm/lib/carbonite-1.4.0.jar:/opt/storm/lib/hiccup-0.3.6.jar:/opt/storm/lib/javamail.jar:/opt/storm/lib/netty-3.2.2.Final.jar:/opt/storm/lib/json-lib-2.4-jdk15.jar:/opt/storm/lib/jline-2.11.jar:/opt/storm/lib/chill-java-0.3.5.jar:/opt/storm/lib/clj-time-0.4.1.jar:/opt/storm/lib/commons-codec-1.6.jar:/opt/storm/lib/tools.cli-0.2.4.jar:/opt/storm/lib/slf4j-api-1.6.5.jar:/opt/storm/lib/core.incubator-0.1.0.jar:/opt/storm/lib/snakeyaml-1.11.jar:/opt/storm/lib/logback-classic-1.0.6.jar:/opt/storm/lib/log4j-over-slf4j-1.6.6.jar:/opt/st
orm/lib/guava-13.0.jar:/opt/storm/lib/jetty-6.1.26.jar:/opt/storm/lib/activation.jar:/opt/storm/lib/json-simple-1.1.jar:/opt/storm/lib/commons-fileupload-1.2.1.jar:/opt/storm/lib/asm-4.0.jar:/opt/storm/lib/ezmorph-1.0.6.jar:/opt/storm/lib/commons-collections-3.2.1.jar:/opt/storm/lib/stanford-ner.jar:/opt/storm/lib/ring-devel-0.3.11.jar::/opt/storm/conf
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/lib64
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.io.tmpdir=/tmp
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:java.compiler=
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:os.name=Linux
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:os.arch=amd64
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:os.version=2.6.32-358.6.2.el6.x86_64
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:user.name=storm
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:user.home=/opt/storm
> 2014-08-07 22:47:22 o.a.z.ZooKeeper [INFO] Client
> environment:user.dir=/opt/storm
> 2014-08-07 22:47:22 o.a.z.s.ZooKeeperServer [INFO] Server
> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52
> GMT
> 2014-08-07 22:47:22 o.a.z.s.ZooKeeperServer [INFO] Server
>

Re: Storm Connection Refused

2014-08-07 Thread Harsha
Hi Chandrahas,

Can you provide a bit more detail on what your cluster
looks like, or is it a single host?

"I am deploying the jar on 6627 port and i have kept it open."
Are you using the storm jar command, or doing it through the thrift API?

-Harsha





On Thu, Aug 7, 2014, at 08:24 PM, Chandrahas Gurram wrote:


Hi,

I have deployed storm on an aws cluster.
When I try to deploy my jar on the cluster it throws the
following error
Exception in thread "main" java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused
	at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:38)
	at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:87)
	at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:58)
	at com.peel.kinesisStorm.SampleTopology.main(SampleTopology.java:88)
Caused by: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused
	at org.apache.thrift7.transport.TSocket.open(TSocket.java:183)
	at org.apache.thrift7.transport.TFramedTransport.open(TFramedTransport.java:81)
	at backtype.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:83)
	at backtype.storm.security.auth.ThriftClient.<init>(ThriftClient.java:63)
	at backtype.storm.utils.NimbusClient.<init>(NimbusClient.java:47)
	at backtype.storm.utils.NimbusClient.<init>(NimbusClient.java:43)
	at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:36)
	... 3 more
Caused by: java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at org.apache.thrift7.transport.TSocket.open(TSocket.java:178)
	... 9 more

I have checked the ports on nimbus. I am deploying the jar to
port 6627 and I have kept it open.
I am also unable to see the Storm UI on nimbus.
Could anybody please help me with this issue?
storm version: 0.92


ThankYou,
Chandra


Re: file not found exception in storm -jms

2014-08-05 Thread Harsha
Hi Siva,

 Are you packaging all the required classes into a jar
and submitting it using storm jar your_jar_file.jar
HdfsFileTopology?

From that error it looks like your jar file doesn't contain the
class.

-Harsha





On Tue, Aug 5, 2014, at 01:47 AM, siva kumar wrote:

hi,

 I'm trying a scenario where I read data from ActiveMQ,
process the data with Storm, and store the result in HDFS. I
have the JMS spout, the HDFS bolt, and HdfsFileTopology. When I
try to submit my topology, it does not find the
HdfsFileTopology class and throws a ClassNotFoundException.
But I have cross-checked the location of the file and
everything is fine. Can anyone suggest a solution for this?



Also, any suggestions regarding the correct requirements and
procedure to achieve the above would be appreciated.







Thanks and regards,

shiva


Re: Storm UI 0.9.2 bug (num workers displaying the num tasks and vice-versa)

2014-08-04 Thread Harsha
Hi Spico,

There is a JIRA to track
this: [1]https://issues.apache.org/jira/browse/STORM-382.

Thanks,

Harsha







On Mon, Aug 4, 2014, at 04:07 AM, Spico Florin wrote:

Hello!
  In the Storm UI 0.9.2  I have observed that the num executors
field is displaying the number of workers and the num workers
is displaying the num executors. Is this a reported bug?

Best regards,
 Florin

References

1. https://issues.apache.org/jira/browse/STORM-382


Re: topology.builtin.metrics.bucket.size.secs

2014-07-31 Thread Harsha
Hi Ahmed, 
It uses "topology.builtin.metrics.bucket.size.secs"
as the time bucket and calls registerMetric on all the built-in metrics
with that bucket:
https://storm.incubator.apache.org/apidocs/backtype/storm/task/TopologyContext.html#registerMetric%28java.lang.String,%20backtype.storm.metric.api.ICombiner,%20int%29.
Storm will then call getValueAndReset on each metric every
timeBucketSizeInSecs, and the returned value is sent to all metrics
consumers.
 I am not sure about using the thrift API to fetch metrics (probably
 OK), but it will give you only the latest values, and as you noticed
 they may change within durations shorter than the one you set in
 "topology.builtin.metrics.bucket.size.secs".
The recommended way is to implement a MetricsConsumer. Storm ships with
LoggingMetricsConsumer:
https://github.com/apache/incubator-storm/blob/master/storm-core/src/jvm/backtype/storm/metric/LoggingMetricsConsumer.java.
Check this link on how to use that class:
http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to.
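The time-bucket contract described above can be sketched in plain Java with no Storm dependency; CountMetric here merely mirrors the shape of backtype.storm.metric.api.CountMetric, and the "flush" calls stand in for the framework firing every timeBucketSizeInSecs:

```java
// Stdlib-only sketch of Storm's combined-metric contract: the framework
// calls getValueAndReset() once per time bucket and forwards the
// returned value to every registered metrics consumer.
import java.util.ArrayList;
import java.util.List;

public class MetricBucketDemo {
    static class CountMetric {
        private long value = 0;
        void incr() { value++; }
        // Called by the framework every timeBucketSizeInSecs seconds.
        Object getValueAndReset() {
            long ret = value;
            value = 0;
            return ret;
        }
    }

    public static void main(String[] args) {
        CountMetric emitted = new CountMetric();
        List<Object> buckets = new ArrayList<>();
        // Simulate two buckets: 3 events in the first, 1 in the second.
        emitted.incr(); emitted.incr(); emitted.incr();
        buckets.add(emitted.getValueAndReset()); // framework flush #1
        emitted.incr();
        buckets.add(emitted.getValueAndReset()); // framework flush #2
        System.out.println(buckets); // [3, 1]
    }
}
```

This is why fetching values over Thrift between flushes can show intermediate numbers: the counter keeps changing until the bucket boundary resets it.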

-Harsha

On Thu, Jul 31, 2014, at 06:38 AM, Ahmed El Rheddane wrote:
> Hello,
> 
> I have been using Storm for a while now. I retrieve the builtin metrics 
> via a Thrift connection (I don't know if there is a better way to do 
> so). I regularly fetch the metrics and I can still see changes in the 
> values within durations shorter than the default 60 seconds for the 
> metrics bucket size. Can anybody help me understand how Storm uses 
> the value of topology.builtin.metrics.bucket.size.secs and how 
> frequently it reports the stats to Nimbus?
> 
> Thanks in advance.
> 
> Ahmed


Re: Bolt vs Spout

2014-07-29 Thread Harsha
Hi Adrian,

 KafkaSpout is a consumer; in this case you would be
connecting to ZooKeeper. KafkaBolt, which is a Kafka producer,
needs to connect to a list of brokers (e.g. localhost:9092).
KafkaSpout uses SpoutConfig, in which you can populate
List<String> zkServers, and for KafkaBolt you can create a
Properties object with "metadata.broker.list" set to a
comma-separated string.
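The distinction above can be shown with a stdlib-only sketch (host names are made up; the Storm classes these values would feed, SpoutConfig and KafkaBolt, are only referenced in comments, not invoked):

```java
// Two different address lists: the spout (consumer) is given ZooKeeper
// hosts, while the bolt (producer) is given the Kafka brokers directly.
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class KafkaAddressDemo {
    public static void main(String[] args) {
        // KafkaSpout (consumer): talks to ZooKeeper, so use the ZK port.
        // These would populate SpoutConfig's zkServers / zkPort.
        List<String> zkServers = Arrays.asList("zk1.local", "zk2.local");
        int zkPort = 2181;

        // KafkaBolt (producer): talks to the brokers, so use the broker
        // port in a comma-separated "metadata.broker.list".
        Properties producerProps = new Properties();
        producerProps.put("metadata.broker.list",
                          "kafka1.local:9092,kafka2.local:9092");

        System.out.println(zkServers + ":" + zkPort);
        System.out.println(producerProps.getProperty("metadata.broker.list"));
    }
}
```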

-Harsha



On Tue, Jul 29, 2014, at 05:11 AM, Adrian Landman wrote:

I feel like you missed the issue in my question.  For the
connection string for ZkHosts, if I pass in localhost:9092 with
a default kafka configuration, it won't connect.  Instead I
have to pass in localhost:2181.  Is this expected behavior?
Also, if I wanted to pass in more than one host, what should
separate the entries?  Commas?



On Mon, Jul 28, 2014 at 4:17 PM, Parth Brahmbhatt
<[1]pbrahmbh...@hortonworks.com> wrote:

For setting a list of brokers in kafkaSpout, I believe there
are 2 options:

If you use StaticHosts then you need to add
GlobalPartitionInformation in which you have to specify each
partition and its corresponding broker host

GlobalPartitionInformation partitions = new
GlobalPartitionInformation();
partitions.addPartition(0,new Broker("10.22.2.79", 9092));
//add more partitions here.
BrokerHosts hosts = new StaticHosts(partitions);

Alternatively, if you use ZkHosts then you need to pass in the
complete zookeeper connection string, e.g. localhost:9092;
optionally there is a constructor that allows you to specify a
second argument, zkRoot, which by default is set to /brokers
and should work with a default kafka installation. The code in
ZkHosts looks under zkRoot/topics/<topic>/partitions to figure
out the number of partitions and the leader for each partition.

Thanks
Parth




On Mon, Jul 28, 2014 at 9:54 AM, Adrian Landman
<[2]adrian.land...@gmail.com> wrote:

I am writing a topology that pulls messages from a topic, does
some work, and then writes them back on a different topic.  I
have been having some issues so I created my own small topology
that just pulls a message, prints the contents, and then stores
them back on a new topic.  I finally got this to work, but it
raised a question.

To create the spout I need to either pass in the kafka location
sans port (e.g. localhost) or use 2181 as the port.

To create the producer bolt I need to pass in the broker port
(e.g. 9092) or I get an array out of bounds exception when
creating the Producer.

When we were using kafka7 and
[3]https://github.com/nathanmarz/storm-contrib/tree/master/stor
m-kafka/src/jvm/storm/kafka
for our storm/kafka integration I believe that we used the same
broker list for both our spout and our producer.  Is there
anyway to do the same with kafka8 and the new storm-kafka
project?  Also, if we want to pass in a list, I know that in the
KafkaBolt we can set metadata.broker.list to a comma separated
list of brokers ([4]1.1.1.1:9092, [5]1.1.1.2:9092) but can we
do the same for the spout?  Or is there any reason to?  ZkHost
takes in a String, but I didn't see anything that specified the
format.




--
Thanks
Parth



CONFIDENTIALITY NOTICE

NOTICE: This message is intended for the use of the individual
or entity to which it is addressed and may contain information
that is confidential, privileged and exempt from disclosure
under applicable law. If the reader of this message is not the
intended recipient, you are hereby notified that any printing,
copying, dissemination, distribution, disclosure or forwarding
of this communication is strictly prohibited. If you have
received this communication in error, please contact the sender
immediately and delete it from your system. Thank You.

References

1. mailto:pbrahmbh...@hortonworks.com
2. mailto:adrian.land...@gmail.com
3. 
https://github.com/nathanmarz/storm-contrib/tree/master/storm-kafka/src/jvm/storm/kafka
4. http://1.1.1.1:9092/
5. http://1.1.1.2:9092/


Re: KafkaSpout showing lots of errors

2014-07-26 Thread Harsha


Ok, I assume you are using the KafkaSpout from storm/external
without any changes to the code. From the UI screenshot it
looks like your bolt is acknowledging messages. Enable system
stats on that topology page (it's at the bottom of the page)
and check whether the ackers are running without any errors. My
guess is that your spout is not receiving acks for all the
messages processed by your bolt, hence failing them, and your
kafka offset won't move forward because of these failures. I
would also look for any kafka errors for your consumer id.
Since you are running 4 topologies that read from kafka, make
sure these topologies read from different topics and/or have
their own unique consumer id + topic ids.
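The failure mode described above can be modeled in a few lines of plain Java. This is a simplification of the bookkeeping storm-kafka's PartitionManager performs, not its actual code: the committed offset can only advance past offsets that have been acked, so one un-acked tuple pins it in place.

```java
// Stdlib-only model: emitted offsets stay "pending" until acked; the
// committable offset is the lowest still-pending offset (or the next
// offset to emit when nothing is pending).
import java.util.TreeSet;

public class PendingOffsetDemo {
    private final TreeSet<Long> pending = new TreeSet<>();
    private long nextEmit = 0;

    void emit() { pending.add(nextEmit++); }

    void ack(long offset) { pending.remove(offset); }

    // Everything below the lowest un-acked offset is safe to commit.
    long committableOffset() {
        return pending.isEmpty() ? nextEmit : pending.first();
    }

    public static void main(String[] args) {
        PendingOffsetDemo spout = new PendingOffsetDemo();
        spout.emit(); spout.emit(); spout.emit();      // offsets 0, 1, 2
        spout.ack(1); spout.ack(2);                    // 0 never acked
        System.out.println(spout.committableOffset()); // 0 -- stuck
        spout.ack(0);
        System.out.println(spout.committableOffset()); // 3 -- moves on
    }
}
```

This matches the log symptom in the thread: the committed offset stays at the same value bucket after bucket while failed tuples are replayed.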



On Sat, Jul 26, 2014, at 11:16 AM, Anuj Agrawal wrote:

Hi Harsha,

I don't see any errors in UI or logs. I just see failure counts
increasing. See screenshot attached.

Logs are filled up with the lines of kind that I showed earlier
- committing offsets and fetching messages. The offsets
sometimes decrease by 1 and sometimes move forward by a very
small number. However, approx 1500 messages are fetched every
time.

I do see new messages being inserted into kafka. Have verified
that.

Thanks,
Anuj



On Sat, Jul 26, 2014 at 10:37 PM, Harsha <[1]st...@harsha.io>
wrote:

Hi Anuj,
Can you also send the errors you are seeing in the UI
and in the logs? Are you seeing new messages inserted into
your kafka topics, just to make sure there aren't issues with
your kafka?
-Harsha


On Sat, Jul 26, 2014, at 05:47 AM, Anuj Agrawal wrote:

I am running 4 topologies on a storm cluster each with one bolt
and one kafka spout. Of these, 3 are showing a high number of
failures (in UI) in the spout itself. I looked at the logs and
found that the offset isn't just moving (in fact, at times it
is reduced by one). Sample log for one of the partitions below:


anuj.agrawal@server-ingestion1:/var/log/storm$ grep "2014-07-26
17:5" worker-6704.log | grep "partition=24"
2014-07-26 17:50:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:50:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:50:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:50:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:51:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:51:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:51:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:51:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:52:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:52:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:52:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:52:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:53:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:53:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:53:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:53:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:54:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:54:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:54:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:54:57 s.k.PartitionManager [INFO] Committed
offse

Re: KafkaSpout showing lots of errors

2014-07-26 Thread Harsha
Hi Anuj,

Can you also send the errors you are seeing in the UI
and in the logs? Are you seeing new messages inserted into
your kafka topics, just to make sure there aren't issues with
your kafka?

-Harsha





On Sat, Jul 26, 2014, at 05:47 AM, Anuj Agrawal wrote:

I am running 4 topologies on a storm cluster each with one bolt
and one kafka spout. Of these, 3 are showing a high number of
failures (in UI) in the spout itself. I looked at the logs and
found that the offset isn't just moving (in fact, at times it
is reduced by one). Sample log for one of the partitions below:


anuj.agrawal@server-ingestion1:/var/log/storm$ grep "2014-07-26
17:5" worker-6704.log | grep "partition=24"
2014-07-26 17:50:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:50:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:50:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:50:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:51:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:51:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:51:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:51:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:52:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:52:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:52:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:52:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:53:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:53:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:53:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:53:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:54:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:54:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:54:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:54:57 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:55:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:55:27 s.k.PartitionManager [INFO] Committed
offset 96435 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:55:57 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:55:57 s.k.ZkState [INFO] Writing
/server/cp-kafka/AndroidAppEventIngestion/partition_24 the data
{topology={id=8fba6b24-e1cd-4476-91a6-bb493a0e7c87,
name=AndriodAppEventIngestion}, offset=96434, partition=24,
broker={host=server-kafka5.local, port=9092}, topic=AndroidApp}
2014-07-26 17:55:57 s.k.PartitionManager [INFO] Committed
offset 96434 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb493a0e7c87
2014-07-26 17:56:27 s.k.PartitionManager [INFO] Committing
offset for Partition{host=server-kafka5.local:9092,
partition=24}
2014-07-26 17:56:27 s.k.PartitionManager [INFO] Committed
offset 96434 for Partition{host=server-kafka5.local:9092,
partition=24} for topology:
8fba6b24-e1cd-4476-91a6-bb

Re: Can i get the metrics(data) from storm cluster regarding traffic load

2014-07-25 Thread Harsha
You can look at it in the Storm UI: when you click a topology
from the main page, it will be on the topology page under the
heading "Topology Visualization", in the Bolts section.





On Fri, Jul 25, 2014, at 12:26 AM, Spico Florin wrote:

Hello!
I'm interested in this subject too. Can you please point out
where in the StormUI you'll find this feature?
Thanks.
 Best regards,
  Florin



On Fri, Jul 25, 2014 at 4:36 AM, Srinath C
<[1]srinat...@gmail.com> wrote:

I think the latest storm 0.9.2-incubating has a graphical
representation of your topology, with the links highlighting
the load between the components. Maybe you should try that.



On Thu, Jul 24, 2014 at 7:00 PM, M.Tarkeshwar Rao
<[2]tarkeshwa...@gmail.com> wrote:

Hi All

I want to get some metrics from the storm cluster regarding
what is traffic load on each link.
like in following topology:
A is the spout and rest all are bolts. I want to know the
current traffic on each link like (A to B or C to D)
A->B->C--->D--->E
B->F--->D


How can I find this? Is it possible? I want to schedule my
topology based on these results.

Regards
Tarkeshwar

References

1. mailto:srinat...@gmail.com
2. mailto:tarkeshwa...@gmail.com


Re: KafkaSpout offsets

2014-07-24 Thread Harsha
"Start at the first (oldest) message on the topic: set
forceFromStart = true" Yes

 "Start at the last (newest) message on the topic : ?"

 The current version of KafkaSpout doesn't offer this config.
The Kafka OffsetRequest API does provide this option:

[1]https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To
+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetRequest

 Can you please file a JIRA for this?

"Start at the last saved offset : Don't change the config
defaults" Yes

"Start at an explicit offset: ? (I don't envision needing to
use this, but just in case)"

   As far as I know there is no API in Kafka itself to do
this. Here is an approach that talks about changing offsets in
ZooKeeper:

[2]https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowcanIrewindtheoffsetintheconsumer

IMO not recommended unless it's done very rarely, to reprocess
data.



 "public boolean useStartOffsetTimeIfOffsetOutOfRange = true if
an offset is found "

This options exist incase if the user has not read from
KafkaQueue and log.retention.hours elapsed in that case kafka
deleted older data and the zookeeper has older offset which
points to deleted data. if we starts from this offset it will
throw OffsetOutOfRangeException so to work around this scenario
if its throws such exception we starts from the beginning of
the queue.



On Thu, Jul 24, 2014, at 01:08 PM, Adrian Landman wrote:

Thanks!  That helps clear things up some.  So if forceFromStart
is true it will force it to start at the beginning.  If nothing
is changed it will try and start from the last committed
offset, but if there is no committed offset where will it
start?  What if there is a saved offset, but we want to force
it to start at the end?  Or if we want to force a particular
offset, not the last saved one?  I'm guessing that based on
public boolean useStartOffsetTimeIfOffsetOutOfRange = true if
an offset is found that is out of the range, it will start at
the start/beginning offset?

Essentially what I want to be able to specify the following
conditions:
Start at the first (oldest) message on the topic: set
forceFromStart = true
Start at the last (newest) message on the topic : ?
Start at the last saved offset : Don't change the config
defaults
Start at an explicit offset: ? (I don't envision needing to use
this, but just in case)



On Thu, Jul 24, 2014 at 1:40 PM, Harsha <[3]st...@harsha.io>
wrote:

Hi Adrian,
   If you set forceFromStart to true, the spout calls
KafkaApi.Offset to get the earliest time, which finds the
beginning of the kafka logs, and starts streaming from
there. By default this is set to false, and the spout makes a
request to Kafka to find the last committed offset and streams
from there. You can control how often the kafka offset is
committed using SpoutConfig.stateUpdateIntervalMs; by default
it is 2000 ms.
-Harsha



On Thu, Jul 24, 2014, at 12:27 PM, Adrian Landman wrote:

In nathanmarz/storm-contrib project there was a KafkaConfig
that had a forceOffsetTime.  In our code someone had documented
that calling this with different values would affect the
offsets in the following way:

-2 Will start at the beginning (earliest message) of the topic
-1 Will start at the end (latest message) of the topic
-3 Will start where the spout left off
And anything >0 will start at the specified offset.

In the new project external/storm-kafka there is also a
KafkaConfig and I see that it exposes
public boolean forceFromStart = false;
public long startOffsetTime =
kafka.api.OffsetRequest.EarliestTime();
public long maxOffsetBehind = 10;
public boolean useStartOffsetTimeIfOffsetOutOfRange = true;

By default, does this mean the spout will start at the beginning
of the topic?  What does forceFromStart do?  If we want to
start from whatever offset the spout was last processing, is
there any way to do this?

References

1. 
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetRequest
2. 
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowcanIrewindtheoffsetintheconsumer
3. mailto:st...@harsha.io


Re: KafkaSpout offsets

2014-07-24 Thread Harsha
Hi Adrian,

   If you set forceFromStart to true, the spout calls
KafkaApi.Offset to get the earliest time, which finds the
beginning of the kafka logs, and starts streaming from
there. By default this is set to false, and the spout makes a
request to Kafka to find the last committed offset and streams
from there. You can control how often the kafka offset is
committed using SpoutConfig.stateUpdateIntervalMs; by default
it is 2000 ms.
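The start-offset decision described above can be summarized with a stdlib-only sketch. This is a simplification: the real spout also handles out-of-range offsets via useStartOffsetTimeIfOffsetOutOfRange and consults startOffsetTime rather than a raw "earliest" value.

```java
// Simplified model of KafkaSpout's starting-offset choice:
// forceFromStart (or no saved offset) => start at the earliest offset;
// otherwise resume from the last committed offset.
public class StartOffsetDemo {
    static long chooseStart(boolean forceFromStart, Long committed, long earliest) {
        if (forceFromStart || committed == null) {
            return earliest;                 // replay from the beginning
        }
        return committed;                    // resume where we left off
    }

    public static void main(String[] args) {
        System.out.println(chooseStart(true, 96435L, 0L));  // 0
        System.out.println(chooseStart(false, 96435L, 0L)); // 96435
        System.out.println(chooseStart(false, null, 0L));   // 0
    }
}
```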

-Harsha







On Thu, Jul 24, 2014, at 12:27 PM, Adrian Landman wrote:

In nathanmarz/storm-contrib project there was a KafkaConfig
that had a forceOffsetTime.  In our code someone had documented
that calling this with different values would affect the
offsets in the following way:

-2 Will start at the beginning (earliest message) of the topic
-1 Will start at the end (latest message) of the topic
-3 Will start where the spout left off
And anything >0 will start at the specified offset.

In the new project external/storm-kafka there is also a
KafkaConfig and I see that it exposes
public boolean forceFromStart = false;
public long startOffsetTime =
kafka.api.OffsetRequest.EarliestTime();
public long maxOffsetBehind = 10;
public boolean useStartOffsetTimeIfOffsetOutOfRange = true;

By default, does this mean the spout will start at the beginning
of the topic?  What does forceFromStart do?  If we want to
start from whatever offset the spout was last processing, is
there any way to do this?


Re: Storm UI : handle custom stream as a system one

2014-07-21 Thread Harsha
The code is in core.clj, in mk-include-sys-fn, which calls
system-id? (common.clj). I think the UI code is fine, but
nimbus won't accept a topology that has a stream id beginning
with "__". We could probably add an exception in nimbus for
streams that start with "__" and are listed in
storm.user.system.streams. Can you please file a jira to track
this? Thanks.
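The check itself is trivial; here is a stdlib-only sketch of what the Utils.isSystemId test amounts to (the method name is real, but this body is an illustration, not Storm's source):

```java
// Storm treats any stream id beginning with "__" as a system stream;
// nimbus rejects user topologies that declare such stream ids.
public class SystemIdDemo {
    static boolean isSystemId(String streamId) {
        return streamId.startsWith("__");
    }

    public static void main(String[] args) {
        System.out.println(isSystemId("__metrics")); // true  (system)
        System.out.println(isSystemId("_log"));      // false (user)
    }
}
```

This is why renaming the stream from '_log' to '__log' triggers InvalidTopologyException: the double underscore moves it into the reserved namespace.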

-Harsha





On Mon, Jul 21, 2014, at 08:59 AM, Julien Nioche wrote:

Yes, that's also the conclusion I came to.
I could not find where in the UI code is the call
to Utils.isSystemId(String). One option would be to be able to
define in the configuration a list of streams to treat as
system. Does the UI code access the configuration files at all?

Thanks Harsha



On 21 July 2014 16:43, Harsha <[1]st...@harsha.io> wrote:

That's caused by the validate-ids! function, which checks
whether a user's stream id is a system id and throws that
exception. So it looks like "__" is reserved for system streams
and not allowed for users.


On Mon, Jul 21, 2014, at 08:30 AM, Julien Nioche wrote:

Hi Harsha

Am getting :

5935 [main] WARN  backtype.storm.daemon.nimbus - Topology submission exception. (topology name='QueuePopulator')
#
5941 [main] ERROR org.apache.zookeeper.server.NIOServerCnxnFactory - Thread Thread[main,5,main] died
backtype.storm.generated.InvalidTopologyException: null
	at backtype.storm.daemon.common$validate_ids_BANG_.invoke(common.clj:126) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
	at backtype.storm.daemon.common$validate_basic_BANG_.invoke(common.clj:142) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
	at backtype.storm.daemon.common$system_topology_BANG_.invoke(common.clj:297) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]

Thanks

Julien



On 21 July 2014 16:22, Harsha <[2]st...@harsha.io> wrote:

Hi Julien,
The UI code calls Utils.isSystemId(String), which checks
whether the stream id starts with "__". What error are you
seeing when you rename it to "__log"?
-Harsha


On Mon, Jul 21, 2014, at 03:45 AM, Julien Nioche wrote:

Hi,

I have a custom stream for handling logs (called '_log') and
send them to ElasticSearch for indexing. The log tuples are
generated by my spouts and bolts. My pipeline also uses the
default stream for the normal processing of tuples from
RabbitMQ.

Everything works fine but I would like to be able to treat this
_log stream as one of the system ones (e.g. __metrics) and be
able to hide them from the stats. The summary of Emitted /
Transferred currently takes these log events into account which
is not very useful.

I tried renaming the stream into '__log' but this resulted in
an error when trying to start the topology.

Any idea of how I could do that?

Thanks

Julien

--
[logo.gif]
Open Source Solutions for Text Engineering

[3]http://digitalpebble.blogspot.com/
[4]http://www.digitalpebble.com
[5]http://twitter.com/digitalpebble





--
[logo.gif]
Open Source Solutions for Text Engineering

[6]http://digitalpebble.blogspot.com/
[7]http://www.digitalpebble.com
[8]http://twitter.com/digitalpebble





--
[logo.gif]
Open Source Solutions for Text Engineering

[9]http://digitalpebble.blogspot.com/
[10]http://www.digitalpebble.com
[11]http://twitter.com/digitalpebble

References

1. mailto:st...@harsha.io
2. mailto:st...@harsha.io
3. http://digitalpebble.blogspot.com/
4. http://www.digitalpebble.com/
5. http://twitter.com/digitalpebble
6. http://digitalpebble.blogspot.com/
7. http://www.digitalpebble.com/
8. http://twitter.com/digitalpebble
9. http://digitalpebble.blogspot.com/
  10. http://www.digitalpebble.com/
  11. http://twitter.com/digitalpebble


Re: Storm UI : handle custom stream as a system one

2014-07-21 Thread Harsha
That's caused by the validate-ids! function, which checks
whether a user's stream id is a system id and throws that
exception. So it looks like "__" is reserved for system streams
and not allowed for users.





On Mon, Jul 21, 2014, at 08:30 AM, Julien Nioche wrote:

Hi Harsha

Am getting :

5935 [main] WARN  backtype.storm.daemon.nimbus - Topology submission exception. (topology name='QueuePopulator')
#
5941 [main] ERROR org.apache.zookeeper.server.NIOServerCnxnFactory - Thread Thread[main,5,main] died
backtype.storm.generated.InvalidTopologyException: null
	at backtype.storm.daemon.common$validate_ids_BANG_.invoke(common.clj:126) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
	at backtype.storm.daemon.common$validate_basic_BANG_.invoke(common.clj:142) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
	at backtype.storm.daemon.common$system_topology_BANG_.invoke(common.clj:297) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]

Thanks

Julien



On 21 July 2014 16:22, Harsha <[1]st...@harsha.io> wrote:

Hi Julien,
UI code calls Utils.isSystemId(String), which checks if
the stream id starts with "__". What error are you seeing when
you rename it to "__log"?
-Harsha


On Mon, Jul 21, 2014, at 03:45 AM, Julien Nioche wrote:

Hi,

I have a custom stream for handling logs (called '_log') and
send them to ElasticSearch for indexing. The log tuples are
generated by my spouts and bolts. My pipeline also uses the
default stream for the normal processing of tuples from
RabbitMQ.

Everything works fine but I would like to be able to treat this
_log stream as one of the system ones (e.g. __metrics) and be
able to hide them from the stats. The summary of Emitted /
Transferred currently takes these log events into account which
is not very useful.

I tried renaming the stream to '__log' but this resulted in
an error when trying to start the topology.

Any idea of how I could do that?

Thanks

Julien

--
[logo.gif]
Open Source Solutions for Text Engineering

[2]http://digitalpebble.blogspot.com/
[3]http://www.digitalpebble.com
[4]http://twitter.com/digitalpebble






References

1. mailto:st...@harsha.io
2. http://digitalpebble.blogspot.com/
3. http://www.digitalpebble.com/
4. http://twitter.com/digitalpebble
5. http://digitalpebble.blogspot.com/
6. http://www.digitalpebble.com/
7. http://twitter.com/digitalpebble


Re: Storm UI : handle custom stream as a system one

2014-07-21 Thread Harsha
Hi Julien,

UI code calls Utils.isSystemId(String), which checks if
the stream id starts with "__". What error are you seeing when
you rename it to "__log"?

-Harsha





On Mon, Jul 21, 2014, at 03:45 AM, Julien Nioche wrote:

Hi,

I have a custom stream for handling logs (called '_log') and
send them to ElasticSearch for indexing. The log tuples are
generated by my spouts and bolts. My pipeline also uses the
default stream for the normal processing of tuples from
RabbitMQ.

Everything works fine but I would like to be able to treat this
_log stream as one of the system ones (e.g. __metrics) and be
able to hide them from the stats. The summary of Emitted /
Transferred currently takes these log events into account which
is not very useful.

I tried renaming the stream to '__log' but this resulted in
an error when trying to start the topology.

Any idea of how I could do that?

Thanks

Julien

--
[logo.gif]
Open Source Solutions for Text Engineering

[1]http://digitalpebble.blogspot.com/
[2]http://www.digitalpebble.com
[3]http://twitter.com/digitalpebble

References

1. http://digitalpebble.blogspot.com/
2. http://www.digitalpebble.com/
3. http://twitter.com/digitalpebble


Re: storm upgrade issue

2014-07-17 Thread Harsha
Does your worker node also have the same Storm version
installed? Make sure your older STORM_HOME is not in PATH.





On Thu, Jul 17, 2014, at 06:39 PM, 唐思成 wrote:

the step i took listed below





1. kill -9 all storm process

2. remove storm directory on zookeeper

3. change storm local dir

4. start nimbus and ui (is fine)

5. start supervisor on a worker node (the nimbus goes down)





2014-07-18
__

唐思成
  __

From: Itai Frenkel

Sent: 2014-07-18  00:16:21

To: storm_user

Cc:

Subject: RE: storm upgrade issue



The message says that SupervisorInfo that your code was
compiled with is not compatible with the SupervisorInfo that
was received over the network.

That happens when you have a Serializable class that changes
and there is no explicit backwards compatibility in place.
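
One common guard against this (a minimal sketch, with the hypothetical class SupervisorInfoLike standing in for a class like SupervisorInfo) is to declare an explicit serialVersionUID so the JVM does not recompute it from the class shape:

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialUidDemo {
    // Hypothetical stand-in for a serialized daemon class. Declaring an
    // explicit serialVersionUID pins the stream identity, so adding a
    // compatible field later does not break deserialization of old data.
    static class SupervisorInfoLike implements Serializable {
        private static final long serialVersionUID = 1L;
        String hostname;
        int port;
    }

    public static void main(String[] args) {
        long uid = ObjectStreamClass.lookup(SupervisorInfoLike.class)
                                    .getSerialVersionUID();
        // Without the explicit declaration this would be a hash computed
        // from the class structure, changing whenever the class changes.
        System.out.println("serialVersionUID = " + uid); // prints 1
    }
}
```

The stack trace below shows exactly this mismatch: the stream's serialVersionUID differs from the local class's, so deserialization fails with InvalidClassException.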

I would first check that all of your Storm instances are
running the same version.

If that does not help, I would check that your code is compiled
against the correct code version.

Please report your findings, since it's interesting :)

full disclosure - I'm a Storm newbie,

Itai
  __

From: 唐思成 
Sent: Thursday, July 17, 2014 2:23 PM
To: storm_user
Subject: storm upgrade issue

Hi all:
I tried to upgrade Storm from 0.9.1 to 0.9.2-incubating, and when
the supervisor on a worker node starts up, the nimbus process goes
down. Here is what nimbus.log says:

Before the upgrade, I had already changed storm.local.dir to a new
location and removed the storm node in ZooKeeper using zkCli.sh;
however, that didn't help.

Any idea?

2014-07-17 19:15:29 b.s.d.nimbus [ERROR] Error when processing
event
java.lang.RuntimeException: java.io.InvalidClassException: back
type.storm.daemon.common.SupervisorInfo; local class incompatib
le: stream classdesc serialVersionUID = 7648414326720210054, lo
cal class serialVersionUID = 7463898661547835557
at backtype.storm.utils.Utils.deserialize(Utils.java:93
) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.cluster$maybe_deserialize.invoke(clus
ter.clj:200) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating
]
at backtype.storm.cluster$mk_storm_cluster_state$reify_
_2284.supervisor_info(cluster.clj:299) ~[storm-core-0.9.2-incub
ating.jar:0.9.2-incubating]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
Method) ~[na:na]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMe
thodAccessorImpl.java:39) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Dele
gatingMethodAccessorImpl.java:25) ~[na:na]
at java.lang.reflect.Method.invoke(Method.java:597) ~[n
a:na]
at clojure.lang.Reflector.invokeMatchingMethod(Reflecto
r.java:93) ~[clojure-1.5.1.jar:na]
at clojure.lang.Reflector.invokeInstanceMethod(Reflecto
r.java:28) ~[clojure-1.5.1.jar:na]
at backtype.storm.daemon.nimbus$all_supervisor_info$fn_
_4715.invoke(nimbus.clj:277) ~[storm-core-0.9.2-incubating.jar:
0.9.2-incubating]
at clojure.core$map$fn__4207.invoke(core.clj:2487) ~[cl
ojure-1.5.1.jar:na]
at clojure.lang.LazySeq.sval(LazySeq.java:42) ~[clojure
-1.5.1.jar:na]
at clojure.lang.LazySeq.seq(LazySeq.java:60) ~[clojure-
1.5.1.jar:na]
at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.5.1.jar
:na]
at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.5.
1.jar:na]
at clojure.core$apply.invoke(core.clj:617) ~[clojure-1.
5.1.jar:na]
at clojure.core$mapcat.doInvoke(core.clj:2514) ~[clojur
e-1.5.1.jar:na]
at clojure.lang.RestFn.invoke(RestFn.java:423) ~[clojur
e-1.5.1.jar:na]
at backtype.storm.daemon.nimbus$all_supervisor_info.inv
oke(nimbus.clj:275) ~[storm-core-0.9.2-incubating.jar:0.9.2-inc
ubating]
at backtype.storm.daemon.nimbus$all_scheduling_slots.in
voke(nimbus.clj:288) ~[storm-core-0.9.2-incubating.jar:0.9.2-in
cubating]
at backtype.storm.daemon.nimbus$compute_new_topology__G
T_executor__GT_node_PLUS_port.invoke(nimbus.clj:580) ~[storm-co
re-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.nimbus$mk_assignments.doInvoke
(nimbus.clj:660) ~[storm-core-0.9.2-incubating.jar:0.9.2-incuba
ting]
at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojur
e-1.5.1.jar:na]
at backtype.storm.daemon.nimbus$fn__5210$exec_fn__1396_
_auto5211$fn__5216$fn__5217.invoke(nimbus.clj:905) ~[storm-
core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.nimbus$fn__5210$exec_fn__1396_
_auto5211$fn__5216.invoke(nimbus.clj:904) ~[storm-core-0.9.
2-incubating.jar:0.9.2-incubating]
at backtype.storm.timer$schedule_recurring$this__1134.i
nvoke(timer.clj:99) ~[storm-core-0.9.2-incubating.jar:0.9.2-inc
ubating]
at backtype.storm.timer$mk_timer$fn__1117$fn__1118.invo
ke(timer.clj:50) ~[storm-core-0.9.2-incubating.jar:0.9.2-incuba
ting]
at backtype.storm.timer$mk_timer$

Re: Storm not working in local mode

2014-07-16 Thread Harsha


Rushabh,

   Looks to be an IPv6 issue. Can you try
passing -Djava.net.preferIPv4Stack=true?

-Harsha



On Wed, Jul 16, 2014, at 04:36 PM, Rushabh Shah wrote:

Hi ,

I upgraded Storm to the latest 0.9.2-incubating and I see that
my topology does not start in local mode.

It does however work perfectly fine when I deploy the topology
on a storm cluster.

I see the following exception when I run the topology in the
local mode :

[ERROR] ClientCnxnSocketNIO - Unable to open socket to
0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2000

[WARN] ClientCnxn - Session 0x0 for server null, unexpected
error, closing socket connection and attempting reconnect

java.net.SocketException: Address family not supported by
protocol family: connect

at sun.nio.ch.Net.connect (Native Method)

at sun.nio.ch.SocketChannelImpl.connect
(SocketChannelImpl.java:500)

at
org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect
(ClientCnxnSocketNIO.java:266)

at org.apache.zookeeper.ClientCnxnSocketNIO.connect
(ClientCnxnSocketNIO.java:276)

at org.apache.zookeeper.ClientCnxn$SendThread.startConnect
(ClientCnxn.java:958)

at org.apache.zookeeper.ClientCnxn$SendThread.run
(ClientCnxn.java:993)

[WARN] ConnectionStateManager - There are no
ConnectionStateListeners registered.

Any help will be appreciated.

Thanks,

Rushabh


Re: writing huge amount of data to HDFS

2014-07-12 Thread Harsha
Hi Chen,

  I thought your bolt was the one reading from ES
and there was no spout. I suppose it's ok since the ES queries
are flowing from Kafka. Did you measure the HBase bolt's execute
method? It looks like it's making a read call on HBase for each
tuple emitted from the ES bolt. From what I see, the ES bolt emits
a bunch of tuples which go to the HBase bolt; the HBase bolt makes
a call to the HBase db and might be hanging there waiting for the
query results, which makes it slower to consume from the ES bolt.

Ideally, if you can batch tuples into a single HBase query it will
speed things up instead of making a call for every tuple, or you
can reduce the batch size for the ES query and emit fewer tuples
instead of 15k at a time. Increasing the parallelism of the HBase
bolt might not be helpful, as it increases the number of
connections to HBase. I would start by measuring the HBaseBolt
execute method latency, reduce the ES batch size, and try to batch
up the HBase reads.
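
A rough sketch of that batching idea, with illustrative names (not Storm's or HBase's actual API): collect row keys from incoming tuples and issue one multi-get per batch instead of one read per tuple.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: buffer incoming row keys and issue one batched
// HBase read per batchSize tuples instead of one round trip per tuple.
public class BatchingReader {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private int batchedReads = 0; // stands in for table.get(List<Get>) calls

    public BatchingReader(int batchSize) { this.batchSize = batchSize; }

    public void onTuple(String rowKey) {
        buffer.add(rowKey);
        if (buffer.size() >= batchSize) flush();
    }

    public void flush() {
        if (buffer.isEmpty()) return;
        batchedReads++;   // one multi-get instead of buffer.size() single gets
        buffer.clear();
    }

    public int batchedReads() { return batchedReads; }

    public static void main(String[] args) {
        BatchingReader reader = new BatchingReader(100);
        for (int i = 0; i < 250; i++) reader.onTuple("row-" + i);
        reader.flush(); // drain the remainder, e.g. on a tick tuple
        System.out.println("batched reads: " + reader.batchedReads()); // prints 3
    }
}
```

In a real bolt the flush would also need a time-based trigger (e.g. a tick tuple) so a partly filled buffer doesn't sit forever.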

-Harsha





On Sat, Jul 12, 2014, at 12:33 AM, Chen Wang wrote:

Thanks Harsha.
My spout is listening to a kafka queue which contains the es
query from user's input. Is it safe to spawn a thread in the
spout and do the ES query directly in the spout? What is the
fundamental difference in doing the query in a thread of spout
VS a thread of bolt?

The reason for using Flume is that I have to split the data into
different partitions (HDFS folders) depending on a value in the
bolt, meaning I would need to modify the HDFS bolt anyway.
In the past, I shifted a large amount of data into a
partitioned Hive table using this approach (Avro to Flume to
HDFS), and it seemed to work well, so I stuck with this
approach rather than reinventing the wheel.

Thanks,
Chen


On Fri, Jul 11, 2014 at 4:51 PM, Harsha <[1]st...@harsha.io>
wrote:

Hi Chen,
  I looked at your code. The first part is inside a
Bolt's execute method? It looks like it is fetching all the
data (1 per call) from Elasticsearch and emitting each
value from inside the execute method, which ends when the ES
result set runs out.
It doesn't look like you followed Storm's conventions here; was
there any reason not to use a Spout? A bolt's execute method
gets called for every tuple that gets passed. Docs on spouts &
bolts: [2]https://storm.incubator.apache.org/documentation/Concepts.html

From your comment in the code, "1 hits per shard will be
returned for each scroll": if it is taking longer to read 1
records from ES, I would suggest you reduce this batch size.
The idea here is that you make quicker calls to ES, push the
data downstream, and make another call to ES for the next batch
instead of acquiring one big batch in a single call.

 "i am  getting around 15000 entries in a batch, the query
itself takes about 4second, however, he emit method in the
query bolt takes about 20 seconds." Can you try reducing the
batch size here too? It looks like the time is being taken
emitting 15k entries in one go.
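
The smaller-batches suggestion can be sketched as follows (illustrative code, not from the topology in question): split the result set into fixed-size chunks and emit each chunk before fetching the next.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split a large ES result set into smaller chunks
// so downstream bolts see a steady stream of small batches rather than
// 15k tuples emitted in one go.
public class ChunkedEmit {
    static <T> List<List<T>> chunk(List<T> hits, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < hits.size(); i += size) {
            // copy the sublist so each chunk is independent of the source
            chunks.add(new ArrayList<>(hits.subList(i, Math.min(i + size, hits.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < 15000; i++) hits.add(i);
        // Each chunk would be emitted downstream before fetching the next.
        System.out.println(chunk(hits, 1000).size()); // prints 15
    }
}
```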

  Was there any reason/utility in using Flume to write
to HDFS? If not, I would recommend
using the [3]https://github.com/ptgoetz/storm-hdfs bolt.



On Fri, Jul 11, 2014, at 03:37 PM, Chen Wang wrote:

Here is the output from the ES query bolt:
 "Total execution time for this batch: 179655(millisecond)" is
the call time around .emit. As you can see, to emit 14000
entries, it takes
anywhere from 145231 to 18



On Fri, Jul 11, 2014 at 2:14 PM, Chen Wang
<[4]chen.apache.s...@gmail.com> wrote:

here you go:
[5]https://gist.github.com/cynosureabu/b317646d5c475d0d2e42
It's actually pretty straightforward. The only thing worth
mentioning is that I use another thread in the ES bolt to do the
actual query and tuple emit.
Thanks for looking.
Chen



On Fri, Jul 11, 2014 at 1:18 PM, Sam Goodwin
<[6]sam.goodwi...@gmail.com> wrote:

Can you show some code? 200 seconds for 15K puts sounds like
you're not batching.



On Fri, Jul 11, 2014 at 12:47 PM, Chen Wang
<[7]chen.apache.s...@gmail.com> wrote:

typo in previous email
The emit method in the query bolt takes about 200 (instead of
20) seconds.



On Fri, Jul 11, 2014 at 11:58 AM, Chen Wang
<[8]chen.apache.s...@gmail.com> wrote:

Hi, Guys,
I have a storm topology with a single-threaded bolt querying a
large amount of data (from Elasticsearch), which emits to an HBase
bolt (10 threads) doing some filtering, then emits to an Avro
bolt (10 threads). The Avro bolt simply emits the tuple to an Avro
client, which is received by two Flume nodes and then sunk
into HDFS. I am testing in local mode.

In the query bolt, I am getting around 15000 entries in a
batch; the query itself takes about 4 seconds, but the emit
method in the query bolt takes about 20 seconds. Does it mean
that the downstream bolts (HBaseBolt and Avro bolt) cannot keep up
with the query bolt?

How can I tune my topology to make this process as fast as
possible? I tried to increase the HBase threads to 20 but it
does not see

Re: writing huge amount of data to HDFS

2014-07-11 Thread Harsha
Hi Chen,

  I looked at your code. The first part is inside a
Bolt's execute method? It looks like it is fetching all the
data (1 per call) from Elasticsearch and emitting each
value from inside the execute method, which ends when the ES
result set runs out.

It doesn't look like you followed Storm's conventions here; was
there any reason not to use a Spout? A bolt's execute method
gets called for every tuple that gets passed. Docs on spouts &
bolts: [1]https://storm.incubator.apache.org/documentation/Concepts.html



From your comment in the code, "1 hits per shard will be
returned for each scroll": if it is taking longer to read 1
records from ES, I would suggest you reduce this batch size.
The idea here is that you make quicker calls to ES, push the
data downstream, and make another call to ES for the next batch
instead of acquiring one big batch in a single call.



 "i am  getting around 15000 entries in a batch, the query
itself takes about 4second, however, he emit method in the
query bolt takes about 20 seconds." Can you try reducing the
batch size here too? It looks like the time is being taken
emitting 15k entries in one go.



  Was there any reason/utility in using Flume to write
to HDFS? If not, I would recommend
using the [2]https://github.com/ptgoetz/storm-hdfs bolt.







On Fri, Jul 11, 2014, at 03:37 PM, Chen Wang wrote:

Here is the output from the ES query bolt:
 "Total execution time for this batch: 179655(millisecond)" is
the call time around .emit. As you can see, to emit 14000
entries, it takes
anywhere from 145231 to 18


 INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- total=14000 hits=14000 took=26172
40813 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-13_00-00-00
40889 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- Total execution time for this batch: 782
40890 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the current batch has 4000 records
59335 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the total hits are 145861
59335 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- total=28000 hits=14000 took=18033
238920 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-14_00-00-00
238990 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- Total execution time for this batch: 179655
238990 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the current batch has 8000 records
257633 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the total hits are 145861
257633 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- total=42000 hits=14000 took=17926
260932 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-15_00-00-00
402852 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-16_00-00-00
402865 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- Total execution time for this batch: 145231
402865 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the current batch has 2000 records
417427 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the total hits are 145861
417427 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- total=56000 hits=14000 took=13962
417459 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-17_00-00-00
417493 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- Total execution time for this batch: 66
417493 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the current batch has 6000 records
429629 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the total hits are 145861
429629 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- total=7 hits=14000 took=12009
441208 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-18_00-00-00
744276 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the new key(hdfs folder) is 2014-07-19_00-00-00
744277 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- Total execution time for this batch: 314647
744277 [pool-1-thread-1] INFO
com.walmartlabs.targeting.storm.bolt.ElasticSearchQueryRunner
- the current ba

Re: Storm UI

2014-07-11 Thread Harsha




Storm UI provides metrics about the topologies on the cluster:
the number of tuples emitted and transferred, any last known
errors, etc.

You can start Storm UI by running STORM_HOME/bin/storm ui, which
runs a daemon at port 8080. If you hover over the table headers
in Storm UI it will show a tooltip describing that
particular value.

If you are trying to add custom metrics to your topology please
refer to this
page [1]http://www.bigdata-cookbook.com/post/72320512609/storm-
metrics-how-to



On Fri, Jul 11, 2014, at 02:38 AM, Benjamin SOULAS wrote:

Hi everyone,

Currently an intern for my master's degree, I have to implement
topologies and see what's happening. I am trying to see that
data via Storm UI; my problem is that I can't find enough
documentation on it... I installed the Splunk interface, but
I don't know how to use it with my topologies... Are the
Metrics interfaces used for this?

Please I really need help ...

Regards

References

1. http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to


Re: b.s.m.n.Client [INFO] Reconnect

2014-07-10 Thread Harsha
Storm 0mq package is
here [1]https://github.com/ptgoetz/storm-0mq . You need to add
that package in STORM_HOME/lib

and add this config to storm.yaml

storm.messaging.transport: "backtype.storm.messaging.zmq"



On Thu, Jul 10, 2014, at 10:02 AM, Suparno Datta wrote:

Does anyone here know how to switch to ZMQ from Netty? Just wanted
to check that once before going back down to 0.8.1.



On 10 July 2014 18:46, Suparno Datta
<[2]suparno.da...@gmail.com> wrote:

@Stephan: Worked like a charm. How stupid of me not to change
the local directory.

@Harsha: Didn't solve the original problem :(

Now getting this ones

b.s.m.n.Client [INFO] Reconnect started for
Netty-Client-cluster1-fos-ThinkPad-T520/10.42.0.21:6700... [11]

and after 30 retries finally the worker crashes

2014-07-10 18:44:43 b.s.m.n.Client [INFO] Closing Netty Client
Netty-Client-cluster1-fos-ThinkPad-T520/[3]10.42.0.21:6700
2014-07-10 18:44:43 b.s.m.n.Client [INFO] Waiting for pending
batchs to be sent with
Netty-Client-cluster1-fos-ThinkPad-T520/10.42.0.21:6700...,
timeout: 60ms, pendings: 0
2014-07-10 18:44:43 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.RuntimeException: Client
is being closed, and does not take requests any more
at
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(Disrup
torQueue.java:128)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(D
isruptorQueue.java:99)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.disruptor$consume_batch_when_available.invoke(di
sruptor.clj:80)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.disruptor$consume_loop_STAR_$fn__758.invoke(disr
uptor.clj:94)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.util$async_loop$fn__457.invoke(util.clj:431)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
Caused by: java.lang.RuntimeException: Client is being closed,
and does not take requests any more
at backtype.storm.messaging.netty.Client.send(Client.java:194)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.utils.TransferDrainer.send(TransferDrainer.java:
54) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.daemon.worker$mk_transfer_tuples_handler$fn__592
7$fn__5928.invoke(worker.clj:322)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.daemon.worker$mk_transfer_tuples_handler$fn__592
7.invoke(worker.clj:320)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.disruptor$clojure_handler$reify__745.onEvent(dis
ruptor.clj:58)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(Disrup
torQueue.java:125)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
... 6 common frames omitted
2014-07-10 18:44:43 b.s.util [INFO] Halting process: ("Async
loop died!")

Seems 0.8.1 it is.



On 10 July 2014 18:21, Kemper, Stephan
<[4]stephan.kem...@viasat.com> wrote:

We ran into this same problem this week.  The problem isn't
with ZooKeeper, but the local state files in your
${storm.local.dir}.  If you delete the ./localstate directory
there and restart the node, you should be OK again.

More info on the problem was in last month's "v0.9.2-incubating
and .ser files" thread from this mailing list.


Stephan Kemper
ViaSat

From: Harsha <[5]st...@harsha.io>
Reply-To: "[6]user@storm.incubator.apache.org"
<[7]user@storm.incubator.apache.org>
Date: Thursday, July 10, 2014 at 9:15 AM
To: "[8]user@storm.incubator.apache.org"
<[9]user@storm.incubator.apache.org>
Subject: Re: b.s.m.n.Client [INFO] Reconnect

Suparno,
  Old storm data in ZooKeeper might conflict with newer
versions of Storm. I would suggest you bring down the
topologies and clean the ZooKeeper /storm dir.
-Harsha



On Thu, Jul 10, 2014, at 09:06 AM, Suparno Datta wrote:

okay, that got worse. I just downloaded 0.9.2 and failed to
launch the supervisors (nimbus is running though). You don't
have to do any cleanup before you launch the new version,
right?

Anyways the stack trace of the error

2014-07-10 18:01:27 b.s.event [ERROR] Error when processing
event
java.lang.RuntimeException: java.io.InvalidClassException:
clojure.lang.APersistentMap; local class incompatible: stream
classdesc serialVersionUID = 270281984708184947, local class
serialVersionUID = 8648225932767613808
at backtype.storm.utils.Utils.deserialize(Utils.java:93)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.utils.LocalState.snapshot(LocalState.java:45)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.utils.LocalState.get(LocalState.java:56)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.daemon.supervisor$sy

Re: b.s.m.n.Client [INFO] Reconnect

2014-07-10 Thread Harsha
Suparno,

  Old storm data in ZooKeeper might conflict with newer
versions of Storm. I would suggest you bring down the
topologies and clean the ZooKeeper /storm dir.

-Harsha







On Thu, Jul 10, 2014, at 09:06 AM, Suparno Datta wrote:

okay, that got worse. I just downloaded 0.9.2 and failed to
launch the supervisors (nimbus is running though). You don't
have to do any cleanup before you launch the new version,
right?

Anyways the stack trace of the error

2014-07-10 18:01:27 b.s.event [ERROR] Error when processing
event
java.lang.RuntimeException: java.io.InvalidClassException:
clojure.lang.APersistentMap; local class incompatible: stream
classdesc serialVersionUID = 270281984708184947, local class
serialVersionUID = 8648225932767613808
at backtype.storm.utils.Utils.deserialize(Utils.java:93)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.utils.LocalState.snapshot(LocalState.java:45)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.utils.LocalState.get(LocalState.java:56)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at
backtype.storm.daemon.supervisor$sync_processes.invoke(supervis
or.clj:207) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:161)
[clojure-1.5.1.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151)
[clojure-1.5.1.jar:na]
at clojure.core$apply.invoke(core.clj:619)
~[clojure-1.5.1.jar:na]
at clojure.core$partial$fn__4190.doInvoke(core.clj:2396)
~[clojure-1.5.1.jar:na]
at clojure.lang.RestFn.invoke(RestFn.java:397)
~[clojure-1.5.1.jar:na]
at
backtype.storm.event$event_manager$fn__2378.invoke(event.clj:39
) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
Caused by: java.io.InvalidClassException:
clojure.lang.APersistentMap; local class incompatible: stream
classdesc serialVersionUID = 270281984708184947, local class
serialVersionUID = 8648225932767613808
at
java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:6
17) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.ja
va:1622) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:
1517) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.ja
va:1622) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:
1517) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.
java:1771) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:13
50) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readObject(ObjectInputStream.java:370
) ~[na:1.7.0_55]
at java.util.HashMap.readObject(HashMap.java:1184)
~[na:1.7.0_55]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_55]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccesso
rImpl.java:57) ~[na:1.7.0_55]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMetho
dAccessorImpl.java:43) ~[na:1.7.0_55]
at java.lang.reflect.Method.invoke(Method.java:606)
~[na:1.7.0_55]
at
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.ja
va:1017) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java
:1893) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.
java:1798) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:13
50) ~[na:1.7.0_55]
at
java.io.ObjectInputStream.readObject(ObjectInputStream.java:370
) ~[na:1.7.0_55]
at backtype.storm.utils.Utils.deserialize(Utils.java:89)
~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
... 11 common frames omitted



On 10 July 2014 17:20, Harsha <[1]st...@harsha.io> wrote:

Yes. As per the change
log [2]https://github.com/apache/incubator-storm/blob/v0.9.2-in
cubating/CHANGELOG.md STORM-187 did make the 0.9.2 release.


On Thu, Jul 10, 2014, at 08:11 AM, Suparno Datta wrote:

You think it's fixed in 0.9.2 ?



On 10 July 2014 17:08, Suparno Datta
<[3]suparno.da...@gmail.com> wrote:

I just found that too. It seems it's because 0.9.1 uses Netty by
default instead of ZeroMQ (guess that's why it was working with
0.8.1). Presently looking for the configuration parameter by
which I can tell it to use ZMQ instead of Netty. Let me know if
you have any clue. Otherwise I'll just have to chuck 0.9.1 and
get back to 0.8.1.



On 10 July 2014 17:02, Harsha <[4]st...@harsha.io> wrote:

Hi Suparno,
   It might be because
of [5]https://issues.apache.org/jira/browse/STORM-187. Can you
try using 0.9.2-incubating release.
-Harsha


On Thu, Jul 10, 2014, at 07:38 AM, Suparno Datta wrote:

Hi,

 I am using Storm 0.9.1-incubating on a single-machine cluster
to run a simple twitter hashtag extractor. I am using the
Storm-twitter-workshop, which I found to be extremely useful.

[6]https://github.com/kantega/storm-twitter-workshop

I have used thi

Re: b.s.m.n.Client [INFO] Reconnect

2014-07-10 Thread Harsha
Yes. As per the change
log [1]https://github.com/apache/incubator-storm/blob/v0.9.2-in
cubating/CHANGELOG.md STORM-187 did make the 0.9.2 release.





On Thu, Jul 10, 2014, at 08:11 AM, Suparno Datta wrote:

You think it's fixed in 0.9.2 ?



On 10 July 2014 17:08, Suparno Datta
<[2]suparno.da...@gmail.com> wrote:

I just found that too. It seems it's because 0.9.1 uses Netty by
default instead of ZeroMQ (guess that's why it was working with
0.8.1). Presently looking for the configuration parameter by
which I can tell it to use ZMQ instead of Netty. Let me know if
you have any clue. Otherwise I'll just have to chuck 0.9.1 and
get back to 0.8.1.



On 10 July 2014 17:02, Harsha <[3]st...@harsha.io> wrote:

Hi Suparno,
   It might be because
of [4]https://issues.apache.org/jira/browse/STORM-187. Can you
try using 0.9.2-incubating release.
-Harsha


On Thu, Jul 10, 2014, at 07:38 AM, Suparno Datta wrote:

Hi,

 I am using Storm 0.9.1-incubating on a single-machine cluster
to run a simple twitter hashtag extractor. I am using the
Storm-twitter-workshop, which I found to be extremely useful.

[5]https://github.com/kantega/storm-twitter-workshop

I have used this program before with Storm 0.8.1 and it ran
like a charm. I might mention that was on a server machine with
two quad-core Xeon processors.

This time I am trying it on my laptop (i5, 8 GB), but I am
constantly getting this error in the worker log files

2014-07-10 13:01:47 b.s.m.n.Client [INFO] Reconnect ... [24]
2014-07-10 13:01:58 b.s.m.n.Client [INFO] Reconnect ... [25]
2014-07-10 13:02:09 b.s.m.n.Client [INFO] Reconnect ... [26]
2014-07-10 13:02:19 STDIO [ERROR] Jul 10, 2014 1:02:19 PM
org.jboss.netty.channel.DefaultChannelPipeline
WARNING: An exception was thrown by a user handler while
handling an exception event ([id: 0x563f7062] EXCEPTION:
java.net.ConnectException: connection timed out)
java.lang.IllegalArgumentException: timeout value is negative
at java.lang.Thread.sleep(Native Method)
at
backtype.storm.messaging.netty.Client.reconnect(Client.java:94)
at
backtype.storm.messaging.netty.StormClientHandler.exceptionCaug
ht(StormClientHandler.java:118)
at
org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaugh
t(FrameDecoder.java:377)
at
org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.j
ava:525)
at
org.jboss.netty.channel.socket.nio.NioClientBoss.processConnect
Timeout(NioClientBoss.java:140)
at
org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioCli
entBoss.java:82)
at
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(Abst
ractNioSelector.java:312)
at
org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientB
oss.java:41)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExe
cutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolEx
ecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Now here comes the strangest part. If I declare just one
instance of the hashtag extractor bolt it fails to get anything,
but with more than one it does manage to get me a few hashtags,
though with quite high latency. Another strange part related to
this machine: if I declare more than 2 supervisor.slots.ports,
the program doesn't even launch any more, showing an
initialization error.

Sorry if I rambled a lot about the hardware, but somehow it
seemed quite related to the problem. Any sort of help would be
really useful.

Thanks,

Suparno





--
Suparno Datta




--
Suparno Datta

References

1. https://github.com/apache/incubator-storm/blob/v0.9.2-incubating/CHANGELOG.md
2. mailto:suparno.da...@gmail.com
3. mailto:st...@harsha.io
4. https://issues.apache.org/jira/browse/STORM-187
5. https://github.com/kantega/storm-twitter-workshop


Re: b.s.m.n.Client [INFO] Reconnect

2014-07-10 Thread Harsha
Hi Suparno,

   It might be because
of [1]https://issues.apache.org/jira/browse/STORM-187. Can you
try using 0.9.2-incubating release.

-Harsha





On Thu, Jul 10, 2014, at 07:38 AM, Suparno Datta wrote:

Hi,

 I am using storm 0.9.1-incubating on a single-machine cluster
to run a simple twitter hashtag extractor.  I am using the
Storm-twitter-workshop, which I found to be extremely useful.

[2]https://github.com/kantega/storm-twitter-workshop

I have used this program before with storm 0.8.1 and it ran
like a charm. I might mention that was on a server machine with
two quad-core Xeon processors.

This time I am trying it on my laptop (i5, 8 GB), but I am
constantly getting this error in the worker log files:

2014-07-10 13:01:47 b.s.m.n.Client [INFO] Reconnect ... [24]
2014-07-10 13:01:58 b.s.m.n.Client [INFO] Reconnect ... [25]
2014-07-10 13:02:09 b.s.m.n.Client [INFO] Reconnect ... [26]
2014-07-10 13:02:19 STDIO [ERROR] Jul 10, 2014 1:02:19 PM
org.jboss.netty.channel.DefaultChannelPipeline
WARNING: An exception was thrown by a user handler while
handling an exception event ([id: 0x563f7062] EXCEPTION:
java.net.ConnectException: connection timed out)
java.lang.IllegalArgumentException: timeout value is negative
at java.lang.Thread.sleep(Native Method)
at
backtype.storm.messaging.netty.Client.reconnect(Client.java:94)
at
backtype.storm.messaging.netty.StormClientHandler.exceptionCaug
ht(StormClientHandler.java:118)
at
org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaugh
t(FrameDecoder.java:377)
at
org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.j
ava:525)
at
org.jboss.netty.channel.socket.nio.NioClientBoss.processConnect
Timeout(NioClientBoss.java:140)
at
org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioCli
entBoss.java:82)
at
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(Abst
ractNioSelector.java:312)
at
org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientB
oss.java:41)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExe
cutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolEx
ecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Now here comes the strangest part. If I declare just one
instance of the hashtag extractor bolt it fails to get anything,
but with more than one it does manage to get me a few hashtags,
though with quite high latency. Another strange part related to
this machine: if I declare more than 2 supervisor.slots.ports,
the program doesn't even launch any more, showing an
initialization error.

Sorry if I rambled a lot about the hardware, but somehow it
seemed quite related to the problem. Any sort of help would be
really useful.

Thanks,

Suparno

References

1. https://issues.apache.org/jira/browse/STORM-187
2. https://github.com/kantega/storm-twitter-workshop


Re: Performance Issues with Kafka + Storm + Trident + OpaqueTridentKafkaSpout

2014-07-01 Thread Harsha


Siddharth,

 Kafka and Storm scale when you add more nodes,
although 150 msg/sec is not much traffic for either Kafka or
Storm. From your config you have 1 worker and bolt parallelism
of 50, which seems very high for 1 worker. I would start by
checking whether you can read those messages off Kafka at a
higher rate than 12 per sec; you can try
kafka-simple-consumer-perf-test.sh under the Kafka bin dir. Try
reducing the parallelism hint for the bolts, or just start a
spout that only reads off Kafka and emits, and see how many
messages per sec it can do. If that is up to the mark, the issue
might be in your bolt's execute method, or the parallelism of
the bolt being too high. Try the default config for
worker.childopts and add a few options at a time instead of the
config above.

-Harsha
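To make Harsha's point about parallelism concrete: with 1 worker, every executor runs as a thread inside that single JVM, so the thread count per worker is roughly the sum of all parallelism hints divided by the worker count. A small self-contained sketch using the numbers from this thread (the helper name is hypothetical, not a Storm API):

```java
public class ParallelismCheck {
    // Approximate executor threads each worker JVM must host:
    // Storm distributes executors evenly across workers (rounded up).
    static int executorsPerWorker(int workers, int... parallelismHints) {
        int total = 0;
        for (int hint : parallelismHints) {
            total += hint;
        }
        return (total + workers - 1) / workers;
    }

    public static void main(String[] args) {
        // 1 worker, spout parallelism 1, bolt parallelism 50:
        // all 51 executor threads contend inside one JVM.
        System.out.println(executorsPerWorker(1, 1, 50)); // prints 51
        // Spreading the same topology over 4 workers drops the load
        // to 13 executor threads per JVM.
        System.out.println(executorsPerWorker(4, 1, 50)); // prints 13
    }
}
```

This is why "parallelism 50 with 1 worker" is suspect: the hint multiplies threads, not machines, until you also raise the worker count.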



On Tue, Jul 1, 2014, at 08:38 PM, Siddharth Banerjee wrote:



We are seeing some performance issues with Kafka + Storm +
Trident + OpaqueTridentKafkaSpout

Mentioned below are our setup details :

Storm Topology :
 Broker broker = Broker.fromString("localhost:9092")
 GlobalPartitionInformation info = new GlobalPartitionInformation()
 if(args[4]){
 int partitionCount = args[4].toInteger()
 for(int i =0;i

Re: error building storm on mac

2014-06-18 Thread Harsha
Yes. you can grab the release packages and install.





On Wed, Jun 18, 2014, at 04:38 PM, Sa Li wrote:

  Thanks, Harsha. I assume I could download the release version on my
  mac, say storm-0.9.0.1, which contains the jars in its root
  directory, so I do not have to build. Is that correct?

  cheers

  Alec

On Jun 18, 2014 4:32 PM, "Harsha" <[1]st...@harsha.io> wrote:

Alec,
  That link talks about an older version of storm. You can get the
latest code from here: [2]github.com/apache/incubator-storm. Storm
switched to Maven for building; you can run "mvn clean package" under
the latest storm dir to build.
-Harsha.


On Wed, Jun 18, 2014, at 03:13 PM, Sa Li wrote:

Dear all



I tried to install storm on mac by following this link:

[3]http://ptgoetz.github.io/blog/2013/11/26/building-storm-on-osx-maver
icks/



but got this error:

lein sub install
Reading project from storm-console-logging
Created
/workspace/tools/storm/storm-console-logging/target/storm-console-loggi
ng-0.9.1-incubating-SNAPSHOT.jar
Wrote /workspace/tools/storm/storm-console-logging/pom.xml
Installed jar and pom into local repo.
Reading project from storm-core
java.lang.Exception: Error loading storm-core/project.clj
 at leiningen.core.project$read$fn__4553.invoke (project.clj:827)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke
(protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke
(protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$partial_task$fn__4230.doInvoke (main.clj:234)
clojure.lang.RestFn.applyTo (RestFn.java:139)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$apply_task.invoke (main.clj:281)
leiningen.core.main$resolve_and_apply.invoke (main.clj:287)
leiningen.core.main$_main$fn__4295.invoke (main.clj:357)
leiningen.core.main$_main.doInvoke (main.clj:344)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:624)
clojure.main$main_opt.invoke (main.clj:315)
clojure.main$main.doInvoke (main.clj:420)
clojure.lang.RestFn.invoke (RestFn.java:457)
clojure.lang.Var.invoke (Var.java:394)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.Var.applyTo (Var.java:700)
clojure.main.main (main.java:37)
Caused by: clojure.lang.Compiler$CompilerException:
java.lang.IllegalArgumentException: Duplicate keys: :javac-options,
compiling:(/workspace/tools/storm/storm-core/project.clj:17:62)
 at clojure.lang.Compiler.load (Compiler.java:7142)
clojure.lang.Compiler.loadFile (Compiler.java:7086)
clojure.lang.RT$3.invoke (RT.java:318)
leiningen.core.project$read$fn__4553.invoke (project.clj:825)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke
(protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke
(protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.a

Re: error building storm on mac

2014-06-18 Thread Harsha
Alec,

  That link talks about an older version of storm. You can get the
latest code from here: [1]github.com/apache/incubator-storm. Storm
switched to Maven for building; you can run "mvn clean package" under
the latest storm dir to build.

-Harsha.





On Wed, Jun 18, 2014, at 03:13 PM, Sa Li wrote:

Dear all



I tried to install storm on mac by following this link:

[2]http://ptgoetz.github.io/blog/2013/11/26/building-storm-on-osx-maver
icks/



but got this error:

lein sub install
Reading project from storm-console-logging
Created
/workspace/tools/storm/storm-console-logging/target/storm-console-loggi
ng-0.9.1-incubating-SNAPSHOT.jar
Wrote /workspace/tools/storm/storm-console-logging/pom.xml
Installed jar and pom into local repo.
Reading project from storm-core
java.lang.Exception: Error loading storm-core/project.clj
 at leiningen.core.project$read$fn__4553.invoke (project.clj:827)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke
(protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke
(protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$partial_task$fn__4230.doInvoke (main.clj:234)
clojure.lang.RestFn.applyTo (RestFn.java:139)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$apply_task.invoke (main.clj:281)
leiningen.core.main$resolve_and_apply.invoke (main.clj:287)
leiningen.core.main$_main$fn__4295.invoke (main.clj:357)
leiningen.core.main$_main.doInvoke (main.clj:344)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:624)
clojure.main$main_opt.invoke (main.clj:315)
clojure.main$main.doInvoke (main.clj:420)
clojure.lang.RestFn.invoke (RestFn.java:457)
clojure.lang.Var.invoke (Var.java:394)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.Var.applyTo (Var.java:700)
clojure.main.main (main.java:37)
Caused by: clojure.lang.Compiler$CompilerException:
java.lang.IllegalArgumentException: Duplicate keys: :javac-options,
compiling:(/workspace/tools/storm/storm-core/project.clj:17:62)
 at clojure.lang.Compiler.load (Compiler.java:7142)
clojure.lang.Compiler.loadFile (Compiler.java:7086)
clojure.lang.RT$3.invoke (RT.java:318)
leiningen.core.project$read$fn__4553.invoke (project.clj:825)
leiningen.core.project$read.invoke (project.clj:824)
leiningen.core.project$read.invoke (project.clj:834)
leiningen.sub$apply_task_to_subproject.invoke (sub.clj:9)
leiningen.sub$run_subproject.invoke (sub.clj:15)
clojure.lang.AFn.applyToHelper (AFn.java:165)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:628)
clojure.core$partial$fn__4230.doInvoke (core.clj:2470)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.ArrayChunk.reduce (ArrayChunk.java:63)
clojure.core.protocols/fn (protocols.clj:98)
clojure.core.protocols$fn__6057$G__6052__6066.invoke
(protocols.clj:19)
clojure.core.protocols$seq_reduce.invoke (protocols.clj:31)
clojure.core.protocols/fn (protocols.clj:60)
clojure.core.protocols$fn__6031$G__6026__6044.invoke
(protocols.clj:13)
clojure.core$reduce.invoke (core.clj:6289)
leiningen.sub$sub.doInvoke (sub.clj:25)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.core$apply.invoke (core.clj:626)
leiningen.core.main$partial_task$fn__4230.doInvoke (main.clj:234)
clojure.lang.RestFn.applyTo (RestFn.java:139)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invoke (core.clj:626)
leiningen

Re: HI,what is stormcode.ser?

2014-06-16 Thread Harsha
Hi Jie,

   stormcode.ser contains a serialized form of the uploaded topology.
It contains all the components (spouts, bolts), the per-component
config, and the component parallelism.

-Harsha





On Mon, Jun 16, 2014, at 04:40 AM, jie liu wrote:

thanks


Re: FileNotFound: heartbeats (too many open files)

2014-06-10 Thread Harsha
It could be related to the ulimit on your machines. A good number to
start with is around 65000 for the open-files ulimit.
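A quick way to check and raise the limit Harsha mentions; the 65000 figure is his suggested starting point, and the "storm" user in the limits.conf lines is a placeholder for whichever account runs the workers:

```shell
# Show the current per-process open-file limit for this shell.
ulimit -n

# For a persistent raise on Linux, add lines like these to
# /etc/security/limits.conf, then log in again:
#   storm  soft  nofile  65000
#   storm  hard  nofile  65000
```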





On Tue, Jun 10, 2014, at 10:40 AM, Sean Allen wrote:

On a 0.9.0.1 cluster.

Everything was fine until last week. No changes were made and we now
regularly have nodes dying where we end up with the following
exception. Note, number of open files is really low, we aren't out of
file handles. Has anyone else encountered this?

2014-06-10 13:34:04 b.s.d.worker [ERROR] Error when processing event
java.io.FileNotFoundException:
/opt/storm/var/storm/workers/b9ec5518-9430-4275-9844-e2f6e203e3ce/heart
beats/1402421644201 (Too many open files)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_17]
at java.io.FileOutputStream.(FileOutputStream.java:212)
~[na:1.7.0_17]
at java.io.FileOutputStream.(FileOutputStream.java:165)
~[na:1.7.0_17]
at org.apache.commons.io.FileUtils.openOutputStream(FileUtils.java:179)
~[commons-io-1.4.jar:1.4]
at
org.apache.commons.io.FileUtils.writeByteArrayToFile(FileUtils.java:128
2) ~[commons-io-1.4.jar:1.4]
at backtype.storm.utils.LocalState.persist(LocalState.java:69)
~[storm-core-0.9.0.1.jar:na]
at backtype.storm.utils.LocalState.put(LocalState.java:49)
~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.worker$do_heartbeat.invoke(worker.clj:51)
~[storm-core-0.9.0.1.jar:na]
at
backtype.storm.daemon.worker$fn__5882$exec_fn__1229__auto5883$heart
beat_fn__5884.invoke(worker.clj:339) ~[storm-core-0.9.0.1.jar:na]
at
backtype.storm.timer$schedule_recurring$this__3019.invoke(timer.clj:77)
~[storm-core-0.9.0.1.jar:na]
at backtype.storm.timer$mk_timer$fn__3002$fn__3003.invoke(timer.clj:33)
~[storm-core-0.9.0.1.jar:na]
at backtype.storm.timer$mk_timer$fn__3002.invoke(timer.clj:26)
[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:722) [na:1.7.0_17]

--

Ce n'est pas une signature


Re: [VOTE] Storm Logo Contest - Final Round

2014-06-09 Thread Harsha
#9 - 5 points.



On Mon, Jun 9, 2014, at 11:38 AM, P. Taylor Goetz wrote:

This is a call to vote on selecting the winning Storm logo from the 3
finalists.



The three candidates are:



 * [No. 6 - Alec
Bartos]([1]http://storm.incubator.apache.org/2014/04/23/logo-abartos.ht
ml)
 * [No. 9 - Jennifer
Lee]([2]http://storm.incubator.apache.org/2014/04/29/logo-jlee1.html)
 * [No. 10 - Jennifer
Lee]([3]http://storm.incubator.apache.org/2014/04/29/logo-jlee2.html)

VOTING

Each person can cast a single vote. A vote consists of 5 points that
can be divided among multiple entries. To vote, list the entry number,
followed by the number of points assigned. For example:

#1 - 2 pts.
#2 - 1 pt.
#3 - 2 pts.

Votes cast by PPMC members are considered binding, but voting is open
to anyone. In the event of a tie vote from the PPMC, votes from the
community will be used to break the tie.

This vote will be open until Monday, June 16 11:59 PM UTC.

- Taylor

  Email had 1 attachment:
  * signature.asc
  *   1k (application/pgp-signature)

References

1. http://storm.incubator.apache.org/2014/04/23/logo-abartos.html
2. http://storm.incubator.apache.org/2014/04/29/logo-jlee1.html
3. http://storm.incubator.apache.org/2014/04/29/logo-jlee2.html


Re: Overriding execute method in ShellBolt

2014-06-07 Thread Harsha
   I am not sure a ShellBolt is the right way to go here. A ShellBolt
lets you write your processing logic in Python or Ruby: ShellBolt
implements the execute method, which puts the incoming tuples into a
processing queue; your Python or Ruby script takes them, does some
processing, and emits the tuple. Overriding ShellBolt's execute is
not a good idea. Either push your processing logic into the Python
script and use ShellBolt, or implement an IRichBolt and, from its
execute method, call your Python script and capture its output. You
can probably reuse ShellProcess:
[1]http://nathanmarz.github.io/storm/doc/backtype/storm/utils/ShellProcess.html.

-Harsha
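The second option Harsha describes (an IRichBolt whose execute calls the script and captures its output) hinges on running a process from Java and reading its stdout. A minimal, Storm-free sketch of just that core; `echo` stands in for the hypothetical `python some.py` invocation, and all names here are illustrative, not a Storm API:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExternalProcessSketch {
    // Runs a command and returns the first line of its output.
    // In a real bolt this would be called from execute(), and the
    // returned string post-processed before the single emit.
    static String runAndCapture(String... command) throws Exception {
        Process p = new ProcessBuilder(command)
                .redirectErrorStream(true)  // merge stderr into stdout
                .start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String firstLine = r.readLine();
            p.waitFor();
            return firstLine;
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for "python some.py <tuple data>".
        String out = runAndCapture("echo", "hashtag:storm");
        // Post-process in Java before emitting downstream.
        String processed = out.toUpperCase();
        System.out.println(processed); // prints "HASHTAG:STORM"
    }
}
```

This keeps exactly one emit in Java, which is what the original question asks for; the external script only produces raw output.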





On Sat, Jun 7, 2014, at 09:31 PM, adiya n wrote:

I tried out ShellBolt examples and it works like a charm.  I went
through the multi-lang protocol doc as well and understand it at a high
level.

Now what I dont understand is the following:
-  With a shell bolt, how can you get the output of the external
process (say a python process), do something with it, and
then emit the tuple from the Java code?
-  This should be possible, but somehow I have to make sure there is
only one emit happening from my shellbolt.

1. My Shellbolt gets the tuple
2. I then pass the data to the external python process
3. get the result/tuple from python process
4. Do something else with it in my Java code and then emit the tuple
to the downstream bolt.
How would I be able to do this? Any examples/pointers would really
help. So the flow would be:


public static class SomeBolt extends ShellBolt implements IRichBolt {
    public SomeBolt() {
        super("python", "some.py");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("someData"));
    }
}

thanks
Aditya

References

1. http://nathanmarz.github.io/storm/doc/backtype/storm/utils/ShellProcess.html


Re: Topology acked/emitted count reset

2014-06-02 Thread Harsha
Hi Andrew,

  From what I read in the code, executor.clj (worker) is
responsible for updating the stats for bolts and spouts. If a worker
is restarted, or if a topology is rebalanced, there is
a chance of losing the stats. Topology stats are derived from spouts
and bolts; no stats are kept for the topology itself. So if a
worker/supervisor died and restarted on another node, stats for that
supervisor's workers are lost.

Thanks,

Harsha





On Mon, Jun 2, 2014, at 07:07 PM, Andrew Montalenti wrote:

Attached you'll find two screenshots from the Storm UI, one taken this
morning, and one taken just recently. The Storm topology in question --
"cass" -- was not restarted in between.

You can see the uptime is 13h (storm_ui_healthy.png) and 26h
(storm_ui_num_reset.png), respectively. Yet, notice that in the later
screenshot, the "acked" counter for the "all-time" window has dropped
from 27.2 million to 3.9 million. All the other counts have also
dropped.

What explains this? Shouldn't all-time emit/ack counts for a topology
that's been running 26h non-stop always be greater than the same
topology's counts 13h earlier?

This is with Storm 0.9.1-incubating.

---
Andrew Montalenti
Co-Founder & CTO
[1]http://parse.ly

  Email had 2 attachments:
  * storm_ui_num_reset.png
  *   566k (image/png)
  * storm_ui_healthy.png
  *   541k (image/png)

References

1. http://parse.ly/


Re: Explicitly Fail Tuple for Replay?

2014-05-31 Thread Harsha


Phil,

You can do collector.fail(tuple)

[1]http://storm.incubator.apache.org/apidocs/backtype/storm/task/Output
Collector.html#fail%28backtype.storm.tuple.Tuple%29

-Harsha





On Sat, May 31, 2014, at 04:57 AM, Phil Burress wrote:

Is there a way to explicitly fail a tuple for replay later? Or do I
have to just let it time out for storm to replay it? Does throwing a
backtype.storm.topology.FailedException allow storm to replay a tuple?

Thanks!

-Phil

References

1. 
http://storm.incubator.apache.org/apidocs/backtype/storm/task/OutputCollector.html#fail%28backtype.storm.tuple.Tuple%29


Re: Fwd: Running word count in Local cluster using Apache Storm

2014-05-30 Thread Harsha
I'm not sure about the Eclipse error, but I would recommend importing
it as a Maven project in Eclipse.





On Fri, May 30, 2014, at 08:19 AM, Neil Shah wrote:

Hi,

Yes, you are correct. I tried using Storm 0.9.0 but still received
the same error. Thanks for your help.
I will try on Ubuntu and see if it works.

Can you please tell me about the initial error that I got?
What I did was create a separate Maven project in Eclipse. I copied
the corresponding files (spouts, bolts, pom and main file) from the
project downloaded from the above link to the corresponding
locations and ran the mvn commands.
It should have worked; I'm not sure why it is throwing errors.

[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3:java (def
ault-cli) on project stormArtifact: The parameters 'mainClass' for goal org.code
haus.mojo:exec-maven-plugin:1.3:java are missing or invalid -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e swit
ch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please rea
d the following articles:
[ERROR] [Help 1] [1]http://cwiki.apache.org/confluence/display/MAVEN/PluginParam
ete
rException



Thanks,
Neil Shah



On Fri, May 30, 2014 at 10:32 PM, Harsha <[2]st...@harsha.io> wrote:

From the logs, it seems the issue is ZooKeeper not releasing its
lock on log files while Storm tries to clean them up. It's a known
issue for ZooKeeper on Windows. You can try upgrading to 0.9.1, but
I don't think that will fix it.
[3]https://issues.apache.org/jira/browse/STORM-280?filter=-2.
-Harsha.


On Fri, May 30, 2014, at 07:12 AM, Neil Shah wrote:

Hi,

Thanks for the input. I did run the command as suggested, and I get
the exception below. I am running the command as administrator on
Windows 7.

A separate question: does Storm 0.7.1, the version written in the
original POM file of the download, support Windows?


The exception I got was:

[ERROR] Failed to execute goal
org.codehaus.mojo:exec-maven-plugin:1.3:java (def
ault-cli) on project Getting-Started: An exception occured while
executing the J
ava class. null: InvocationTargetException: Unable to delete file:
C:\Users\user12
~1\AppData\Local\Temp\3deb39d5-e76a-492b-b7ac-22ce57fdba3c\version-2\lo
g.1 ->
[Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to
execute goal o
rg.codehaus.mojo:exec-maven-plugin:1.3:java (default-cli) on project
Getting-Sta
rted: An exception occured while executing the Java class. null
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:216)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:153)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:145)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
ct(LifecycleModuleBuilder.java:108)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
ct(LifecycleModuleBuilder.java:76)
at
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThre
adedBuilder.build(SingleThreadedBuilder.java:51)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(Lifecycl
eStarter.java:116)
at
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Laun
cher.java:289)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.jav
a:229)
at
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(La
uncher.java:415)
at
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:
356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An exception
occured
while executing the Java class. null
at
org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:345)
at
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(Default
BuildPluginManager.java:133)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:208)
... 19 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcc

Re: Fwd: Running word count in Local cluster using Apache Storm

2014-05-30 Thread Harsha
From the logs, it seems the issue is ZooKeeper not releasing its
lock on log files while Storm tries to clean them up. It's a known
issue for ZooKeeper on Windows. You can try upgrading to 0.9.1, but
I don't think that will fix it.

[1]https://issues.apache.org/jira/browse/STORM-280?filter=-2.

-Harsha.





On Fri, May 30, 2014, at 07:12 AM, Neil Shah wrote:

Hi,

Thanks for the input. I did run the command as suggested, and I get
the exception below. I am running the command as administrator on
Windows 7.

A separate question: does Storm 0.7.1, the version written in the
original POM file of the download, support Windows?


The exception I got was:

[ERROR] Failed to execute goal
org.codehaus.mojo:exec-maven-plugin:1.3:java (def
ault-cli) on project Getting-Started: An exception occured while
executing the J
ava class. null: InvocationTargetException: Unable to delete file:
C:\Users\user12
~1\AppData\Local\Temp\3deb39d5-e76a-492b-b7ac-22ce57fdba3c\version-2\lo
g.1 ->
[Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to
execute goal o
rg.codehaus.mojo:exec-maven-plugin:1.3:java (default-cli) on project
Getting-Sta
rted: An exception occured while executing the Java class. null
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:216)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:153)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
.java:145)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
ct(LifecycleModuleBuilder.java:108)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
ct(LifecycleModuleBuilder.java:76)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An exception occured while executing the Java class. null
at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:345)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Unable to delete file: C:\Users\RAJESH~1\AppData\Local\Temp\3deb39d5-e76a-492b-b7ac-22ce57fdba3c\version-2\log.1
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1390)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1044)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:977)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1381)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1044)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:977)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1381)
at backtype.storm.util$rmr.invoke(util.clj:307)
at backtype.storm.testing$kill_local_storm_cluster.invoke(testing.clj:164)
at backtype.storm.LocalCluster$_shutdown.invoke(LocalCluster.clj:21)
at backtype.storm.LocalCluster.shutdown(Unknown Source)
at TopologyMain.main(TopologyMain.java:30)
... 6 more



On Fri, May 30, 2014 at 9:04 PM, Ha

Re: Fwd: Running word count in Local cluster using Apache Storm

2014-05-30 Thread Harsha
Hi Neil,

I did the following


~/Downloads/storm-book-examples-ch02-getting_started-8e42636 ⮀

» mvn clean package

» mvn exec:java -Dexec.mainClass="TopologyMain"
-Dexec.args="src/main/resources/words.txt"

-- Word Counter [word-counter-2] --

really: 1

but: 1

application: 1

is: 2

great: 2

are: 1

test: 1

simple: 1

an: 1

powerfull: 1

storm: 3

very: 1

I was able to run the TopologyMain.
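The per-word tally the topology prints above amounts to a simple frequency count. As a sketch, the word list below is a hypothetical reconstruction consistent with the counts printed above; the actual layout of src/main/resources/words.txt may differ:

```python
from collections import Counter

# Hypothetical word list reconstructed from the counts the topology printed;
# the real words.txt may order and line-break these differently.
words = ("storm is great storm is an application "
         "but storm test are really very simple powerfull great").split()

# The word-counter bolt maintains an equivalent word -> count map.
counts = Counter(words)
print(counts["storm"], counts["is"], counts["great"])  # 3 2 2
```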

This is with

mvn --version

Apache Maven 3.2.1 (ea8b2b07643dbb1b84b6d16e1f08391b666bc1e9;
2014-02-14T09:37:52-08:00)

Maven home: /usr/local/Cellar/maven/3.2.1/libexec

Java version: 1.6.0_65, vendor: Apple Inc.

Java home:
/System/Library/Java/JavaVirtualMachines/[1]1.6.0.jdk/Contents/Home

Default locale: en_US, platform encoding: MacRoman

OS name: "mac os x", version: "10.9.2", arch: "x86_64", family: "mac"





Can you run the following

mvn -X exec:java -Dexec.mainClass="TopologyMain"
-Dexec.args="src/main/resources/words.txt"



If you see java.lang.ClassNotFoundException: TopologyMain,

build the package with mvn clean package.

-Harsha





On Fri, May 30, 2014, at 05:20 AM, Neil Shah wrote:


Hi,

I am following book " Getting started with Storm"
[2]http://my.safaribooksonline.com/9781449324025?iid=2013-12-blog-storm
-book-9781449324025-SBOBlog

They have specified Spouts and Bolts at following link
[3]https://github.com/storm-book/examples-ch02-getting_started/zipball/
master

When i try to run the topology using maven command
mvn exec:java -Dexec.mainClass="TopologyMain" -Dexec.args="src/main/resources/words.txt"

where TopologyMain is the main class name
It is throwing me following error

[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3:java (default-cli) on project stormArtifact: The parameters 'mainClass' for goal org.codehaus.mojo:exec-maven-plugin:1.3:java are missing or invalid -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] [4]http://cwiki.apache.org/confluence/display/MAVEN/PluginParameterException


My pom.xml is as below

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>stormGroup</groupId>
  <artifactId>stormArtifact</artifactId>
  <version>0.0.1-SNAPSHOT</version>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <compilerVersion>1.7</compilerVersion>
          <source>1.7</source>
          <target>1.7</target>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <repositories>
    <repository>
      <id>clojars.org</id>
      <url>http://clojars.org/repo</url>
    </repository>
  </repositories>

  <dependencies>
    <dependency>
      <groupId>storm</groupId>
      <artifactId>storm</artifactId>
      <version>0.9.0</version>
    </dependency>
  </dependencies>
</project>

Can anybody help me with the issue? Let me know if you need any more
details

--
Thanks & Regards
Neil Shah

References

1. http://1.6.0.jdk/Contents/Home
2. 
http://my.safaribooksonline.com/9781449324025?iid=2013-12-blog-storm-book-9781449324025-SBOBlog
3. https://github.com/storm-book/examples-ch02-getting_started/zipball/master
4. http://cwiki.apache.org/confluence/display/MAVEN/PluginParamete
5. http://maven.apache.org/POM/4.0.0
6. http://www.w3.org/2001/XMLSchema-instance
7. http://maven.apache.org/POM/4.0.0
8. http://maven.apache.org/xsd/maven-4.0.0.xsd
9. http://clojars.org/
  10. http://clojars.org/repo


Re: Building Storm

2014-05-28 Thread Harsha
I assume you are using oracle jdk. I tested on ubuntu 12.04  with

maven 3.2.1, git 1.7.9 , java 1.7.0_55, python 2.7.3, ruby 1.8.7.



On Wed, May 28, 2014, at 08:01 AM, Justin Workman wrote:

On Ubuntu 12.04 I have tried with Maven 3.0.4 and now the latest
3.2.1.



On Tue, May 27, 2014 at 5:35 PM, P. Taylor Goetz <[1]ptgo...@gmail.com>
wrote:

I'll do a couple tests, but for the most part it should just work on
OSX, etc. (Storm releases are built on OSX).



What version of maven are you using? Have you tried with the latest
version?



-Taylor




> On May 27, 2014, at 5:54 PM, Przemek Grzędzielski
<[2]przemo.grzedziel...@gmail.com> wrote:
>
> Hi guys,
>
> got exactly the same results trying to build storm (exactly the
commands as mentioned).
> Tried on: Xubuntu 12.04.4 and OS X Mavericks 10.9.2.
> Would be great to know what's the cause of this issue :-/

References

1. mailto:ptgo...@gmail.com
2. mailto:przemo.grzedziel...@gmail.com


Re: Position in Kafka Stream

2014-05-27 Thread Harsha
Hi Tyson,
 Yes, kafka trident has an offset metric, as well as kafkaFetchAvg and kafkaFetchMax:
https://github.com/apache/incubator-storm/blob/master/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaEmitter.java#L64
-Harsha

On Tue, May 27, 2014, at 06:55 PM, Tyson Norris wrote:
> Do Trident variants of kafka spouts do something similar?
> Thanks
> Tyson
> 
> > On May 27, 2014, at 3:19 PM, "Harsha"  wrote:
> > 
> > Raphael,
> >kafka spout sends metrics for kafkaOffset and kafkaPartition you can 
> > look at those by using LoggingMetrics or setting up a ganglia. Kafka uses 
> > its own zookeeper to store state info per topic & group.id you can look at 
> > kafka offsets using 
> > kafka/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
> > -Harsha
> >  
> >  
> >> On Tue, May 27, 2014, at 03:01 PM, Raphael Hsieh wrote:
> >> Is there a way to tell where in the kafka stream my topology is starting 
> >> from?
> >> From my understanding Storm will use zookeeper in order to tell its place 
> >> in the Kafka stream. Where can I find metrics on this ?
> >> How can I see how large the stream is? What how much data is sitting in 
> >> the stream and what the most recent/oldest position is?
> >>  
> >> Thanks
> >>  
> >> -- 
> >> Raphael Hsieh


Re: Position in Kafka Stream

2014-05-27 Thread Harsha
Raphael,

   kafka spout sends metrics for kafkaOffset and kafkaPartition you
can look at those by using LoggingMetrics or setting up a ganglia.
Kafka uses its own zookeeper to store state info per topic & group.id
you can look at kafka offsets using

kafka/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker

-Harsha





On Tue, May 27, 2014, at 03:01 PM, Raphael Hsieh wrote:

Is there a way to tell where in the kafka stream my topology is
starting from?
From my understanding, Storm will use zookeeper in order to tell its place in the Kafka stream. Where can I find metrics on this?
How can I see how large the stream is, how much data is sitting in the stream, and what the most recent/oldest position is?

Thanks

--
Raphael Hsieh
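The "how much data is sitting in the stream" question is answered by consumer lag: the broker's log-end offset minus the group's committed offset, per partition, which is what the ConsumerOffsetChecker tool reports. A minimal sketch with made-up offset values:

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Messages produced but not yet consumed, per partition."""
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

# Hypothetical values for a two-partition topic.
log_end = {0: 1500, 1: 980}     # latest offset on each partition's log
committed = {0: 1200, 1: 980}   # offsets the consumer group has committed
lag = consumer_lag(log_end, committed)
print(lag)  # {0: 300, 1: 0}
```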


Re: Accessing taskid of a bolt in python

2014-05-26 Thread Harsha


Looks like it's a bug:

[1]https://issues.apache.org/jira/browse/STORM-66

There is a patch available.

Thanks,

Harsha



On Mon, May 26, 2014, at 03:41 PM, Dilpreet Singh wrote:

Hi,

I've initialized the bolt like this:

def initialize(self, stormconf, context):
self.stormconf = stormconf
self.context = context

However, contrary to what [2]https://github.com/nathanmarz/storm/wiki/Multilang-protocol says, 'context' does not contain the task id of the bolt.

The context object looks like this:

{"task->component":{"13":"spout","11":"idfvectorizer","12":"idfvectorizer","3":"idfvectorizer","2":"clusterer","10":"idfvectorizer","1":"__acker","7":"idfvectorizer","6":"idfvectorizer","5":"idfvectorizer","4":"idfvectorizer","9":"idfvectorizer","8":"idfvectorizer"}}

But does not contain the taskid parameter.

Please help.

Regards,

Dilpreet

References

1. https://issues.apache.org/jira/browse/STORM-66
2. https://github.com/nathanmarz/storm/wiki/Multilang-protocol
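Until the STORM-66 patch lands, a multilang bolt cannot rely on "taskid" being present in the handshake context. A defensive sketch (field names follow the multilang protocol JSON shown above; the fallback behaviour is an assumption, not part of Storm itself):

```python
import json

def extract_task_info(context_line):
    """Parse the multilang handshake context, tolerating a missing taskid."""
    context = json.loads(context_line)
    task_id = context.get("taskid")  # absent on unpatched Storm (STORM-66)
    components = context.get("task->component", {})
    # task->component keys are string task ids, so look up by str(task_id).
    component = components.get(str(task_id)) if task_id is not None else None
    return task_id, component

# Patched Storm includes "taskid"; unpatched versions omit it.
patched = '{"taskid": 3, "task->component": {"3": "idfvectorizer", "2": "clusterer"}}'
unpatched = '{"task->component": {"3": "idfvectorizer", "2": "clusterer"}}'
print(extract_task_info(patched))    # (3, 'idfvectorizer')
print(extract_task_info(unpatched))  # (None, None)
```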


Re: Nimbus UI fields

2014-05-20 Thread Harsha
Executed refers to the number of incoming tuples processed.

Capacity is determined by (executed * latency) / window (the window's time duration).

The UI gives a description of these stats if you hover over the table headers.
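That formula can be checked with a quick calculation; the numbers below are illustrative, not taken from a real UI:

```python
def capacity(executed, execute_latency_ms, window_ms):
    """Fraction of the window the executor spent actually executing tuples."""
    return (executed * execute_latency_ms) / window_ms

# Illustrative: 12,000 tuples at 5 ms each over a 10-minute window.
window_ms = 10 * 60 * 1000
print(round(capacity(12_000, 5.0, window_ms), 3))  # 0.1 -> busy 10% of the time
```

A capacity approaching 1.0 means the executor is saturated and the component likely needs more parallelism.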







On Tue, May 20, 2014, at 03:36 PM, Raphael Hsieh wrote:

I reattached the previous image in case it was too difficult to read
before


On Tue, May 20, 2014 at 3:31 PM, Raphael Hsieh
<[1]raffihs...@gmail.com> wrote:

Hi I'm confused as to what each field in the StormUI represents and how
to use the information.
Inline image 1

The bolts I have above are formed from trident. This is what operations
I believe each bolt represents
b-0 : .each(function) -> .each(filter)
b-1 : .aggregate
--split--
b-2 : .persistentAggregate
b-3 : .persistentAggregate

What does it mean for the first two bolts to emit and transfer 0?
What is the Capacity field? What does it represent?
Does Execute refer to the tuples acked and successfully processed?

Thanks
--
Raphael Hsieh






--
Raphael Hsieh



  Email had 2 attachments:
  * image.png
  *   41k (image/png)
  * NimbusUI.PNG
  *   22k (image/png)

References

1. mailto:raffihs...@gmail.com


Re: unable to install/test incubator-storm/examples missing dependencies

2014-05-14 Thread Harsha
In case you haven't done so already, can you run mvn clean install under storm-starter.

-Harsha






On Mon, May 12, 2014, at 11:52 AM, Thomas Puthiaparambil wrote:

I get the following error
[root@localhost storm-starter]# mvn compile exec:java -Dstorm.topology=storm.starter.WordCountTopology
[INFO] Scanning for projects...
[INFO]
[INFO] Using the builder org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder with a thread count of 1
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building storm-starter 0.9.2-incubating-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.apache.storm:storm-core:jar:0.9.2-incubating-SNAPSHOT is missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.084 s
[INFO] Finished at: 2014-05-12T09:45:25-08:00
[INFO] Final Memory: 12M/91M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project storm-starter: Could not resolve dependencies for project org.apache.storm:storm-starter:jar:0.9.2-incubating-SNAPSHOT: Failure to find org.apache.storm:storm-core:jar:0.9.2-incubating-SNAPSHOT in [1]https://clojars.org/repo/ was cached in the local repository, resolution will not be reattempted until the update interval of clojars has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] [2]http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

References

1. https://clojars.org/repo/
2. 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException


Re: Logging levels

2014-04-21 Thread Harsha
I don't think you can change logging levels per topology at this point.
Take a look at $STORM_HOME/logback/cluster.xml; it gets passed to each
worker as logback.configurationFile by the supervisor.
-Harsha.
On Mon, Apr 21, 2014, at 09:42 AM, Software Dev wrote:
> Is there any way to adjust this per topology or project as opposed to
> system wide?
> 
> On Sun, Apr 20, 2014 at 11:23 PM, 朱春来  wrote:
> > Try to modify the property file of log4j which is in the $STORM_HOME/log4j
> >
> >
> > 2014-04-19 6:59 GMT+08:00 Software Dev :
> >
> >> How can one change the log levels.. the output is insane!
> >
> >
> >
> >
> > --
> > Thanks,
> >
> > Chunlai


Re: getting no class def fpund error when trying to run test storm locally

2014-04-15 Thread Harsha
David,

   Looks like that article is old. Follow these instructions for
running storm on Windows:

[1]http://ptgoetz.github.io/blog/2013/12/18/running-apache-storm-on-win
dows/

-Harsha



On Tue, Apr 15, 2014, at 06:57 AM, David Novogrodsky wrote:

First, thanks for your help.  This list is great!!

I am working on a Windows 7 system.  I have been able to compile my
test Storm project.  I am having some problem running the project
locally, i.e. not on a cluster.

When I try to run it locally, I get this error:

C:\Users\david.j.novogrodsky\Documents\GitHub\storm-simple\target>java
-jar storm-simple-1.0-SNAPSHOT.jar
java.lang.NoClassDefFoundError: backtype/storm/topology/IRichSpout
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Unknown Source)
at java.lang.Class.getMethod0(Unknown Source)
at java.lang.Class.getMethod(Unknown Source)
at sun.launcher.LauncherHelper.getMainMethod(Unknown Source)
at sun.launcher.LauncherHelper.checkAndLoadMain(Unknown Source)
Caused by: java.lang.ClassNotFoundException:
backtype.storm.topology.IRichSpout
at java.net.URLClassLoader$1.run(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 6 more
Exception in thread "main"

I got these run instructions from
here: [2]http://www.javaworld.com/article/2078672/open-source-tools/ope
n-source-java-projects-storm.html?page=2

David Novogrodsky
[3]david.novogrod...@gmail.com
[4]http://www.linkedin.com/in/davidnovogrodsky

References

1. http://ptgoetz.github.io/blog/2013/12/18/running-apache-storm-on-windows/
2. 
http://www.javaworld.com/article/2078672/open-source-tools/open-source-java-projects-storm.html?page=2
3. mailto:david.novogrod...@gmail.com
4. http://www.linkedin.com/in/davidnovogrodsky


Re: compiling storm code with maven and using storm client

2014-04-15 Thread Harsha


Xing,

 Do you have Ruby 1.9.3 and Python installed? I was able to build using Java 1.6.0_37 on CentOS 6.5,

but you need Ruby and Python for the build, since the multilang tests use
them. If you are still seeing issues with the build, please attach the "mvn -X clean
package" output.

-Harsha



On Mon, Apr 14, 2014, at 06:29 PM, Xing Yong wrote:

Thanks Harsha for your reply. My platform details:

linux: CentOS release 6.3 (Final)

mvn --version :
Apache Maven 3.0.4 (r1232337; 2012-01-17 16:44:56+0800)
Maven home: /opt/soft/apache-maven-3.0.4
Java version: 1.6.0_37, vendor: Sun Microsystems Inc.
Java home: /opt/soft/jdk1.6.0_37/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-279.23.1.mi3.el6.x86_64", arch:
"amd64", family: "unix"

java -version:
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)

 I enabled maven's debug output, but found no useful info.



2014-04-14 21:53 GMT+08:00 Harsha <[1]st...@harsha.io>:

Xing,
Can you share your platform details? Are you compiling on Windows
or Linux? Which maven and java versions, etc.?
-Harsha


On Mon, Apr 14, 2014, at 05:32 AM, David Crossland wrote:

I'm not sure about 1

But for 2, you can just copy any topology-dependent jars to storm's lib
directory

D

From: [2]Xing Yong
Sent: Monday, 14 April 2014 12:25
To: [3]user@storm.incubator.apache.org

1. When I compile the storm-0.9.1 release source code with maven, I
always get this error; how do I fix it? Thank you.


Compiling backtype.storm.ui.core to /home/yongxing/infra-git/storm-0.9.1-incubating/storm-core/target/classes
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Storm ............................................. SUCCESS [3:53.213s]
[INFO] maven-shade-clojure-transformer ................... SUCCESS [4.043s]
[INFO] Storm Core ........................................ FAILURE [1:57.586s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:58.106s
[INFO] Finished at: Mon Apr 14 12:28:08 CST 2014
[INFO] Final Memory: 27M/2030M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.theoryinpractise:clojure-maven-plugin:1.3.18:compile (compile-clojure) on project storm-core: Clojure failed. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.theoryinpractise:clojure-maven-plugin:1.3.18:compile (compile-clojure) on project storm-core: Clojure failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: Clojure failed.
at com.theoryinpractise.clojure.AbstractClojureCompilerMojo.callClojureWith(AbstractClojureCompilerMojo.java:451)
at com.theoryinpractise.clojure.AbstractClojureCompilerMojo.callClojureWith(AbstractClojureCompilerMojo.java:367)
at com.theoryinpractise.clojure.AbstractClojureCompilerMojo.callClojureWith(AbstractClojureCompilerMojo.java:344)
at com.theoryinpractise.clojure.ClojureCompilerMojo.execute(ClojureCompilerMojo.java:47)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginMan

Re: compiling storm code with maven and using storm client

2014-04-14 Thread Harsha
Xing,

Can you share your platform details? Are you compiling on Windows
or Linux? Which maven and java versions, etc.?

-Harsha





On Mon, Apr 14, 2014, at 05:32 AM, David Crossland wrote:

I'm not sure about 1

But for 2, you can just copy any topology-dependent jars to storm's lib
directory

D

From: [1]Xing Yong
Sent: Monday, 14 April 2014 12:25
To: [2]user@storm.incubator.apache.org

1. When I compile the storm-0.9.1 release source code with maven, I
always get this error; how do I fix it? Thank you.


Compiling backtype.storm.ui.core to /home/yongxing/infra-git/storm-0.9.1-incubating/storm-core/target/classes
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Storm ............................................. SUCCESS [3:53.213s]
[INFO] maven-shade-clojure-transformer ................... SUCCESS [4.043s]
[INFO] Storm Core ........................................ FAILURE [1:57.586s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:58.106s
[INFO] Finished at: Mon Apr 14 12:28:08 CST 2014
[INFO] Final Memory: 27M/2030M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.theoryinpractise:clojure-maven-plugin:1.3.18:compile (compile-clojure) on project storm-core: Clojure failed. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.theoryinpractise:clojure-maven-plugin:1.3.18:compile (compile-clojure) on project storm-core: Clojure failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: Clojure failed.
at com.theoryinpractise.clojure.AbstractClojureCompilerMojo.callClojureWith(AbstractClojureCompilerMojo.java:451)
at com.theoryinpractise.clojure.AbstractClojureCompilerMojo.callClojureWith(AbstractClojureCompilerMojo.java:367)
at com.theoryinpractise.clojure.AbstractClojureCompilerMojo.callClojureWith(AbstractClojureCompilerMojo.java:344)
at com.theoryinpractise.clojure.ClojureCompilerMojo.execute(ClojureCompilerMojo.java:47)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
[ERROR]
[ERROR]

2. Using the storm client, how do I submit additional files to the server?
Does the storm client support this operation? For example, I want to
transfer some files from the client to the server that my topology will
use. Does anyone have an idea to share?

thank you

References

1. mailto:xyong...@gmail.com
2. mailto:user@storm.incubator.apache.org