Any link which gives extensive details of Storm grouping?

2014-08-25 Thread M.Tarkeshwar Rao
Hello friends,

I need some documentation on Storm grouping and how it works internally.
Can you please help me?
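
(For reference, stream groupings are declared where bolts are wired to their inputs on a TopologyBuilder. Below is a minimal sketch, assuming the 0.9.x backtype.storm API; the spout and bolt implementations are left abstract since the grouping choice does not depend on them.)

import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.IRichSpout;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;

public class GroupingSketch {
    // Wires one spout and two bolts purely to show where groupings are declared.
    public static TopologyBuilder wire(IRichSpout wordSpout, IRichBolt normalizeBolt, IRichBolt countBolt) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", wordSpout, 2);

        // shuffleGrouping: tuples are spread randomly and evenly across the bolt's tasks.
        builder.setBolt("normalize", normalizeBolt, 4)
               .shuffleGrouping("words");

        // fieldsGrouping: tuples with the same value in the "word" field always go to
        // the same task, which is what makes per-key state (e.g. word counts) work.
        builder.setBolt("count", countBolt, 4)
               .fieldsGrouping("normalize", new Fields("word"));

        return builder;
    }
}

(The other built-in groupings, such as allGrouping, globalGrouping, noneGrouping and directGrouping, follow the same pattern on the bolt declarer.)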

Regards
tarkeshwar


Storm not processing topology without logs

2014-08-25 Thread Vikas Agarwal
Hi,

I have started exploring Storm for distributed processing for a use
case that we were earlier handling with a JMS-based MQ system. The topology
worked after some effort. It has one spout (KafkaSpout from the storm-kafka
project) and 3 bolts. The first bolt sets context for the other two bolts, which
in turn do some processing on the tuples and persist the analyzed results in
a DB (Mongo, Solr, HBase, etc.).
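
(For readers following along, a topology shaped like the one described above would be wired roughly as below. This is only a sketch: it assumes the storm-kafka 0.9.2-incubating SpoutConfig/ZkHosts/KafkaSpout API mentioned later in the message, and the ZooKeeper address, topic name and bolt implementations are placeholders.)

import backtype.storm.generated.StormTopology;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class AleadsTopologySketch {
    // One KafkaSpout feeding a context-setting bolt, which feeds two processing/persisting bolts.
    public static StormTopology build(IRichBolt contextBolt, IRichBolt processBoltA, IRichBolt processBoltB) {
        SpoutConfig spoutConf = new SpoutConfig(
                new ZkHosts("zk-host:2181"),   // placeholder ZooKeeper connect string
                "aleads-topic",                // placeholder Kafka topic
                "/kafka-spout",                // ZK root where the spout stores offsets
                "aleads-consumer");            // consumer id

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConf), 1);
        builder.setBolt("context", contextBolt, 1).shuffleGrouping("kafka-spout");
        builder.setBolt("process-a", processBoltA, 2).shuffleGrouping("context");
        builder.setBolt("process-b", processBoltB, 2).shuffleGrouping("context");
        return builder.createTopology();
    }
}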

Recently the topology stopped working. I am able to submit the topology
without any error; however, the nimbus.log and worker-6701.log files show no
progress, and the topology does not consume any messages. I don't suspect
KafkaSpout, because if it were the culprit, at least some initialization logs
from the spout and bolts should have appeared in nimbus.log or worker-.log.
Isn't that so?

Here is a snippet of nimbus.log after uploading the jar to the cluster:

Uploading file from client to
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Finished uploading file from
client:
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Received topology submission for
aleads with conf {topology.max.task.parallelism nil,
topology.acker.executors nil, topology.kryo.register nil,
topology.kryo.decorators (), topology.name aleads, storm.id
aleads-3-1408964869, modelId ut, topology.workers 1,
topology.debug true}
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Activating aleads:
aleads-3-1408964869
2014-08-25 07:07:50 b.s.s.EvenScheduler [INFO] Available slots:
([e56c2cc7-d35a-4355-9906-506618ff70c5 6701]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6700])
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Setting new assignment for topology
id aleads-3-1408964869:
#backtype.storm.daemon.common.Assignment{:master-code-dir
/hadoop/storm/nimbus/stormdist/aleads-3-1408964869, :node-host
{e56c2cc7-d35a-4355-9906-506618ff70c5 hdp.ambari}, :executor-node+port
{[2 2] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [3 3]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [4 4]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [5 5]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [6 6]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [7 7]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [8 8]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [9 9]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [1 1]
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701]}, :executor-start-time-secs
{[1 1] 1408964870, [9 9] 1408964870, [8 8] 1408964870, [7 7] 1408964870, [6
6] 1408964870, [5 5] 1408964870, [4 4] 1408964870, [3 3] 1408964870, [2 2]
1408964870}}

Can anyone guess what I have done wrong, and why Storm is not writing an
error log anywhere?

Storm version is 0.9.1.2.1.3.0-563 (installed via Hortonworks)
Kafka version is 2.10-0.8.1.1
Storm-Kafka version 0.9.2-incubating

-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Re: Storm not processing topology without logs

2014-08-25 Thread Vikas Agarwal
Found the fix. I had been stuck on this problem for the last 3-4 days, and it
was just waiting for me to join the Storm mailing list to be resolved. :)

This time I found something in supervisor.log: it was logging that the jar
(jar_UUID) still hadn't started, along with the actual worker java command,
which was failing. However, it was not showing any error. So I copied the
command from the logs and ran it directly on the console, and that showed me
the root cause. Somehow, localhost was getting appended to hdp.ambari (which
was my host name), and because of that it was not able to find the server to
run the command on. :(


On Mon, Aug 25, 2014 at 5:25 PM, Vikas Agarwal vi...@infoobjects.com
wrote:

 Hi,

 I have started to explore the Storm for distributed processing for our use
 case which we were earlier fulfilling by JMS based MQ system. Topology
 worked after some efforts. It has one spout (KafkaSpout from kafka-storm
 project) and 3 bolts. First bolt sets context for other two bolts which in
 turn do some processing on the tuples and persist the analyzed results in
 some DB (Mongo, Solr, HBase etc).

 Recently the topology stopped working. I am able to submit the topology
 and it does not throw any error in submitting the topology, however,
 nimbus.log or worker-6701.log files are not showing any progress and
 eventually topology does not consume any message. I don't have doubt on
 KafkaSpout because if it was the culprit, at least some initialization logs
 of spout and bolts should have been there in nimbus.log or worker-.log.
 Isn't it?

 Here is the snippet of nimbus.log after uploading the jar to cluster

 Uploading file from client to
 /hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
 2014-08-25 07:07:49 b.s.d.nimbus [INFO] Finished uploading file from
 client:
 /hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
 2014-08-25 07:07:49 b.s.d.nimbus [INFO] Received topology submission for
 aleads with conf {topology.max.task.parallelism nil,
 topology.acker.executors nil, topology.kryo.register nil,
 topology.kryo.decorators (), topology.name aleads, storm.id
 aleads-3-1408964869, modelId ut, topology.workers 1,
 topology.debug true}
 2014-08-25 07:07:50 b.s.d.nimbus [INFO] Activating aleads:
 aleads-3-1408964869
 2014-08-25 07:07:50 b.s.s.EvenScheduler [INFO] Available slots:
 ([e56c2cc7-d35a-4355-9906-506618ff70c5 6701]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6700])
 2014-08-25 07:07:50 b.s.d.nimbus [INFO] Setting new assignment for
 topology id aleads-3-1408964869:
 #backtype.storm.daemon.common.Assignment{:master-code-dir
 /hadoop/storm/nimbus/stormdist/aleads-3-1408964869, :node-host
 {e56c2cc7-d35a-4355-9906-506618ff70c5 hdp.ambari}, :executor-node+port
 {[2 2] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [3 3]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [4 4]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [5 5]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [6 6]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [7 7]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [8 8]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [9 9]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [1 1]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701]}, :executor-start-time-secs
 {[1 1] 1408964870, [9 9] 1408964870, [8 8] 1408964870, [7 7] 1408964870, [6
 6] 1408964870, [5 5] 1408964870, [4 4] 1408964870, [3 3] 1408964870, [2 2]
 1408964870}}

 Can anyone guess what I have done wrong and why Storm is not giving any
 error log anywhere.

 Storm version is 0.9.1.2.1.3.0-563 (Installed via HortonWorks)
 Kafka version is 2.10-0.8.1.1
 Storm-Kafka version 0.9.2-incubating

 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax




-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Re: Location of last error details seen in storm UI

2014-08-25 Thread Vincent Russell
Click on the link of the bolt/spout that is all the way on the left side.


On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania jason.ka...@ymail.com wrote:

 Hello,

 I am trying to get more detail on an error that is being displayed in the
 Storm UI under the Last Error column but unfortunately, I am not seeing it
 captured anywhere else. Does anyone know where this text could be seen? The
 problem is that the error text is insufficient to diagnose the problem.

 Thanks,

 Jason



Re: Location of last error details seen in storm UI

2014-08-25 Thread Vikas Agarwal
It would be better to view the log files under /var/log/storm. Any issue with
a worker would be logged in /var/log/storm/worker-6700.log
and /var/log/storm/worker-6701.log.


On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell vincent.russ...@gmail.com
wrote:

 Click on the link of the bolt/spout that is all the way on the left side.


 On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania jason.ka...@ymail.com
 wrote:

 Hello,

 I am trying to get more detail on an error that is being displayed in the
 Storm UI under the Last Error column but unfortunately, I am not seeing it
 captured anywhere else. Does anyone know where this text could be seen? The
 problem is that the error text is insufficient to diagnose the problem.

 Thanks,

 Jason





-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Re: Location of last error details seen in storm UI

2014-08-25 Thread Jason Kania
Thanks for the help.

Unfortunately, the content shown when the spout/bolt is clicked does not show 
any information about any errors.




 From: Vincent Russell vincent.russ...@gmail.com
To: user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com 
Sent: Monday, August 25, 2014 10:30:32 AM
Subject: Re: Location of last error details seen in storm UI
 


Click on the link of the bolt/spout that is all the way on the left side.





On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania jason.ka...@ymail.com wrote:

Hello,


I am trying to get more detail on an error that is being displayed in the 
Storm UI under the Last Error column but unfortunately, I am not seeing it 
captured anywhere else. Does anyone know where this text could be seen? The 
problem is that the error text is insufficient to diagnose the problem.


Thanks,


Jason

Re: Location of last error details seen in storm UI

2014-08-25 Thread Jason Kania
Thanks for the response.


Unfortunately, I have no /var/log/storm on my system. Where is the path to
these logs specified? I am guessing it points somewhere else by default.

Thanks,

Jason




 From: Vikas Agarwal vi...@infoobjects.com
To: user@storm.incubator.apache.org 
Cc: Jason Kania jason.ka...@ymail.com 
Sent: Monday, August 25, 2014 10:34:00 AM
Subject: Re: Location of last error details seen in storm UI
 


Better would be to view log files under /var/log/storm. Any issue with worker 
would be logged into /var/log/storm/worker-6700.log and 
/var/log/storm/worker-6701.log.





On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell vincent.russ...@gmail.com 
wrote:

Click on the link of the bolt/spout that is all the way on the left side.



On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania jason.ka...@ymail.com wrote:

Hello,


I am trying to get more detail on an error that is being displayed in the 
Storm UI under the Last Error column but unfortunately, I am not seeing it 
captured anywhere else. Does anyone know where this text could be seen? The 
problem is that the error text is insufficient to diagnose the problem.


Thanks,


Jason



-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc. 
Execution Matters
http://www.infoobjects.com 
2041 Mission College Boulevard, #280 
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax 

Re: Location of last error details seen in storm UI

2014-08-25 Thread Harsha
Jason,

   The default is under your Storm installation; check for the logs
dir.

-Harsha





On Mon, Aug 25, 2014, at 07:54 AM, Jason Kania wrote:

Thanks for the response.

Unfortunately, I have no /var/log/storm on my system. Where is
the path to these logs specified. I am guessing it is pointing
somewhere else by default.

Thanks,

Jason
  __

From: Vikas Agarwal vi...@infoobjects.com
To: user@storm.incubator.apache.org
Cc: Jason Kania jason.ka...@ymail.com
Sent: Monday, August 25, 2014 10:34:00 AM
Subject: Re: Location of last error details seen in storm UI

Better would be to view log files under /var/log/storm. Any
issue with worker would be logged into
/var/log/storm/worker-6700.log
and /var/log/storm/worker-6701.log.




On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell
[1]vincent.russ...@gmail.com wrote:

Click on the link of the bolt/spout that is all the way on the
left side.



On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania
[2]jason.ka...@ymail.com wrote:

Hello,

I am trying to get more detail on an error that is being
displayed in the Storm UI under the Last Error column but
unfortunately, I am not seeing it captured anywhere else. Does
anyone know where this text could be seen? The problem is that
the error text is insufficient to diagnose the problem.

Thanks,

Jason





--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[3]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. mailto:vincent.russ...@gmail.com
2. mailto:jason.ka...@ymail.com
3. http://www.infoobjects.com/


RE: Storm not processing topology without logs

2014-08-25 Thread Georgy Abraham
Are you able to see the topology in the Storm UI or with the storm list command? And
does the worker mentioned in the UI not have any log?

-Original Message-
From: Vikas Agarwal
Sent: 25-08-2014 PM 05:25
To: user@storm.incubator.apache.org
Subject: Storm not processing topology without logs

Hi,



I have started to explore the Storm for distributed processing for our use case 
which we were earlier fulfilling by JMS based MQ system. Topology worked after 
some efforts. It has one spout (KafkaSpout from kafka-storm project) and 3 
bolts. First bolt sets context for other two bolts which in turn do some 
processing on the tuples and persist the analyzed results in some DB (Mongo, 
Solr, HBase etc).
 



Recently the topology stopped working. I am able to submit the topology and it 
does not throw any error in submitting the topology, however, nimbus.log or 
worker-6701.log files are not showing any progress and eventually topology does 
not consume any message. I don't have doubt on KafkaSpout because if it was the 
culprit, at least some initialization logs of spout and bolts should have been 
there in nimbus.log or worker-.log. Isn't it?
 



Here is the snippet of nimbus.log after uploading the jar to cluster

 



Uploading file from client to 
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar

2014-08-25 07:07:49 b.s.d.nimbus [INFO] Finished uploading file from client: 
/hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
 
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Received topology submission for aleads 
with conf {topology.max.task.parallelism nil, topology.acker.executors nil, 
topology.kryo.register nil, topology.kryo.decorators (), topology.name 
aleads, storm.id aleads-3-1408964869, modelId ut, topology.workers 
1, topology.debug true}
 
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Activating aleads: aleads-3-1408964869

2014-08-25 07:07:50 b.s.s.EvenScheduler [INFO] Available slots: 
([e56c2cc7-d35a-4355-9906-506618ff70c5 6701] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6700])
 
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Setting new assignment for topology id 
aleads-3-1408964869: #backtype.storm.daemon.common.Assignment{:master-code-dir 
/hadoop/storm/nimbus/stormdist/aleads-3-1408964869, :node-host 
{e56c2cc7-d35a-4355-9906-506618ff70c5 hdp.ambari}, :executor-node+port {[2 
2] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [3 3] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [4 4] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [5 5] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [6 6] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [7 7] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [8 8] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [9 9] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [1 1] 
[e56c2cc7-d35a-4355-9906-506618ff70c5 6701]}, :executor-start-time-secs {[1 
1] 1408964870, [9 9] 1408964870, [8 8] 1408964870, [7 7] 1408964870, [6 6] 
1408964870, [5 5] 1408964870, [4 4] 1408964870, [3 3] 1408964870, [2 2] 
1408964870}}
 



Can anyone guess what I have done wrong and why Storm is not giving any error 
log anywhere.




Storm version is 0.9.1.2.1.3.0-563 (Installed via HortonWorks)

Kafka version is 2.10-0.8.1.1
 
Storm-Kafka version 0.9.2-incubating



-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc. 
Execution Matters
http://www.infoobjects.com 
 2041 Mission College Boulevard, #280 
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax
 

RE: Location of last error details seen in storm UI

2014-08-25 Thread Georgy Abraham
The log would be in the corresponding worker node for that bolt or spout . Ssh 
onto the node , inside storm installation directory there are logs like 
worker-6700.log and so on . You can get all the logs from there.

-Original Message-
From: Jason Kania
Sent: 25-08-2014 AM 08:52
To: user@storm.incubator.apache.org
Subject: Location of last error details seen in storm UI


Hello,




I am trying to get more detail on an error that is being displayed in the Storm 
UI under the Last Error column but unfortunately, I am not seeing it captured 
anywhere else. Does anyone know where this text could be seen? The problem is 
that the error text is insufficient to diagnose the problem.




Thanks,




Jason

Re: Storm not processing topology without logs

2014-08-25 Thread Vikas Agarwal
Yes, I was able to see the topology in the Storm UI, and nothing was logged in
the worker logs. However, as I mentioned, I was able to resolve it by finding a
hint in the supervisor.log file this time.


On Mon, Aug 25, 2014 at 8:58 PM, Georgy Abraham itsmegeo...@gmail.com
wrote:

 Are you able to see the topology in storm UI or with storm list command ??
 And worker mentioned in the UI doesn't have any log ??
 --
 From: Vikas Agarwal
 Sent: 25-08-2014 PM 05:25
 To: user@storm.incubator.apache.org
 Subject: Storm not processing topology without logs


 Hi,

 I have started to explore the Storm for distributed processing for our use
 case which we were earlier fulfilling by JMS based MQ system. Topology
 worked after some efforts. It has one spout (KafkaSpout from kafka-storm
 project) and 3 bolts. First bolt sets context for other two bolts which in
 turn do some processing on the tuples and persist the analyzed results in
 some DB (Mongo, Solr, HBase etc).

 Recently the topology stopped working. I am able to submit the topology
 and it does not throw any error in submitting the topology, however,
 nimbus.log or worker-6701.log files are not showing any progress and
 eventually topology does not consume any message. I don't have doubt on
 KafkaSpout because if it was the culprit, at least some initialization logs
 of spout and bolts should have been there in nimbus.log or worker-.log.
 Isn't it?

 Here is the snippet of nimbus.log after uploading the jar to cluster

 Uploading file from client to
 /hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
 2014-08-25 07:07:49 b.s.d.nimbus [INFO] Finished uploading file from
 client:
 /hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
 2014-08-25 07:07:49 b.s.d.nimbus [INFO] Received topology submission for
 aleads with conf {topology.max.task.parallelism nil,
 topology.acker.executors nil, topology.kryo.register nil,
 topology.kryo.decorators (), topology.name aleads, storm.id
 aleads-3-1408964869, modelId ut, topology.workers 1,
 topology.debug true}
 2014-08-25 07:07:50 b.s.d.nimbus [INFO] Activating aleads:
 aleads-3-1408964869
 2014-08-25 07:07:50 b.s.s.EvenScheduler [INFO] Available slots:
 ([e56c2cc7-d35a-4355-9906-506618ff70c5 6701]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6700])
 2014-08-25 07:07:50 b.s.d.nimbus [INFO] Setting new assignment for
 topology id aleads-3-1408964869:
 #backtype.storm.daemon.common.Assignment{:master-code-dir
 /hadoop/storm/nimbus/stormdist/aleads-3-1408964869, :node-host
 {e56c2cc7-d35a-4355-9906-506618ff70c5 hdp.ambari}, :executor-node+port
 {[2 2] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [3 3]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [4 4]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [5 5]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [6 6]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [7 7]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [8 8]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [9 9]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [1 1]
 [e56c2cc7-d35a-4355-9906-506618ff70c5 6701]}, :executor-start-time-secs
 {[1 1] 1408964870, [9 9] 1408964870, [8 8] 1408964870, [7 7] 1408964870, [6
 6] 1408964870, [5 5] 1408964870, [4 4] 1408964870, [3 3] 1408964870, [2 2]
 1408964870}}

 Can anyone guess what I have done wrong and why Storm is not giving any
 error log anywhere.

 Storm version is 0.9.1.2.1.3.0-563 (Installed via HortonWorks)
 Kafka version is 2.10-0.8.1.1
 Storm-Kafka version 0.9.2-incubating

 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax




-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Re: Location of last error details seen in storm UI

2014-08-25 Thread Harsha
The current version of Storm doesn't have a way to configure the storm log
dir. One way to do it is to edit logback/cluster.xml under the Storm
installation. An upcoming release will have a config option,
storm.log.dir, to redirect the logs away from the default dir.

-Harsha





On Mon, Aug 25, 2014, at 08:16 AM, Jason Kania wrote:

Thanks for that. I looked to find which property or
configuration parameter sets it but could not find it. Is there
such a parameter?

Thanks,

Jason
  __

From: Harsha st...@harsha.io
To: user@storm.incubator.apache.org
Sent: Monday, August 25, 2014 11:10:17 AM
Subject: Re: Location of last error details seen in storm UI

Jason,
   Default is under your storm installation check for logs
dir.
-Harsha




On Mon, Aug 25, 2014, at 07:54 AM, Jason Kania wrote:

Thanks for the response.

Unfortunately, I have no /var/log/storm on my system. Where is
the path to these logs specified. I am guessing it is pointing
somewhere else by default.

Thanks,

Jason
  __

From: Vikas Agarwal vi...@infoobjects.com
To: user@storm.incubator.apache.org
Cc: Jason Kania jason.ka...@ymail.com
Sent: Monday, August 25, 2014 10:34:00 AM
Subject: Re: Location of last error details seen in storm UI

Better would be to view log files under /var/log/storm. Any
issue with worker would be logged into
/var/log/storm/worker-6700.log
and /var/log/storm/worker-6701.log.




On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell
[1]vincent.russ...@gmail.com wrote:

Click on the link of the bolt/spout that is all the way on the
left side.



On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania
[2]jason.ka...@ymail.com wrote:

Hello,

I am trying to get more detail on an error that is being
displayed in the Storm UI under the Last Error column but
unfortunately, I am not seeing it captured anywhere else. Does
anyone know where this text could be seen? The problem is that
the error text is insufficient to diagnose the problem.

Thanks,

Jason





--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
[3]http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

References

1. mailto:vincent.russ...@gmail.com
2. mailto:jason.ka...@ymail.com
3. http://www.infoobjects.com/


Re: Location of last error details seen in storm UI

2014-08-25 Thread Vikas Agarwal
I have used the Hortonworks distribution for Storm and the rest of the Hadoop
ecosystem, and it has a property for setting storm.log.dir. It is using Storm 0.9.1.


On Mon, Aug 25, 2014 at 9:02 PM, Harsha st...@harsha.io wrote:

  current version of storm doesn't have a way to define storm log dir. One
 way to do is to edit logback/cluster.xml under storm installation.
  Upcoming release will have a config option storm.log.dir to redirect the
 logs from default dir.
 -Harsha


 On Mon, Aug 25, 2014, at 08:16 AM, Jason Kania wrote:

 Thanks for that. I looked to find which property or configuration
 parameter sets it but could not find it. Is there such a parameter?

 Thanks,

 Jason

 --
 *From:* Harsha st...@harsha.io
  *To:* user@storm.incubator.apache.org
  *Sent:* Monday, August 25, 2014 11:10:17 AM
  *Subject:* Re: Location of last error details seen in storm UI

 Jason,
Default is under your storm installation check for logs dir.
 -Harsha




 On Mon, Aug 25, 2014, at 07:54 AM, Jason Kania wrote:

 Thanks for the response.

 Unfortunately, I have no /var/log/storm on my system. Where is the path to
 these logs specified. I am guessing it is pointing somewhere else by
 default.

 Thanks,

 Jason

 --
 *From:* Vikas Agarwal vi...@infoobjects.com
 *To:* user@storm.incubator.apache.org
 *Cc:* Jason Kania jason.ka...@ymail.com
 *Sent:* Monday, August 25, 2014 10:34:00 AM
 *Subject:* Re: Location of last error details seen in storm UI

 Better would be to view log files under /var/log/storm. Any issue with
 worker would be logged into /var/log/storm/worker-6700.log
 and /var/log/storm/worker-6701.log.




 On Mon, Aug 25, 2014 at 8:00 PM, Vincent Russell 
 vincent.russ...@gmail.com wrote:

 Click on the link of the bolt/spout that is all the way on the left side.


 On Sun, Aug 24, 2014 at 11:19 PM, Jason Kania jason.ka...@ymail.com
 wrote:

 Hello,

 I am trying to get more detail on an error that is being displayed in the
 Storm UI under the Last Error column but unfortunately, I am not seeing it
 captured anywhere else. Does anyone know where this text could be seen? The
 problem is that the error text is insufficient to diagnose the problem.

 Thanks,

 Jason






 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax












-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Preventing storm from caching classes

2014-08-25 Thread Jason Kania
Hello,

I would like to know if anybody knows how to prevent Storm from caching a 
previous version of the classes making up a topology. I am hitting problems 
where Storm is running old versions of my classes instead of the ones in the 
currently supplied jar file. I am having to include version numbers in the 
names of my classes to get the desired code to run.

Thanks,

Jason


Re: Preventing storm from caching classes

2014-08-25 Thread Vikas Agarwal
Are you killing the topology before uploading the new one?


On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com wrote:

 Hello,

 I would like to know if anybody knows how to prevent Storm from caching a
 previous version of the classes making up a topology. I am hitting problems
 where Storm is running old versions of my classes instead of the ones in
 the currently supplied jar file. I am having to include version numbers in
 the names of my classes to get the desired code to run.

 Thanks,

 Jason




-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Re: Preventing storm from caching classes

2014-08-25 Thread Tom Brown
That sounds like a very odd issue, as each storm worker runs within an
independent JVM. Are you putting all your classes in a single jar file
(that you upload to storm)? Is there any chance your build system is not
including the newest versions? Can you open the jar file and verify that
the correct versions are there?
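
(A quick way to do that check, as a plain-Java sketch you can point at the jar you submit; the class name and output format here are arbitrary.)

import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarInspect {
    // Lists every .class entry in the given jar with its timestamp, so you can confirm
    // that the jar you upload really contains the classes (and builds) you expect.
    public static void main(String[] args) throws IOException {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        try (JarFile jar = new JarFile(args[0])) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.getName().endsWith(".class")) {
                    System.out.println(fmt.format(new Date(entry.getTime())) + "  " + entry.getName());
                }
            }
        }
    }
}

Run it with the path of your topology jar as the only argument.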

--Tom


On Mon, Aug 25, 2014 at 10:02 AM, Vikas Agarwal vi...@infoobjects.com
wrote:

 Are you killing the topology before uploading new one?


 On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com
 wrote:

 Hello,

 I would like to know if anybody knows how to prevent Storm from caching a
 previous version of the classes making up a topology. I am hitting problems
 where Storm is running old versions of my classes instead of the ones in
 the currently supplied jar file. I am having to include version numbers in
 the names of my classes to get the desired code to run.

 Thanks,

 Jason




 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax




Re: Preventing storm from caching classes

2014-08-25 Thread Jason Kania
Tom,

All the classes are in a single jar, and I know the build system is including 
only the newest versions. I can see this because I have had to keep renaming 
the classes each time I upload, and the only things I see in the jar are my 
latest classes. Unfortunately, it is the old classes, which are not in the jar, 
that Storm attempts to run without the renaming. I am running with the 1.7 
update 67 JDK.

Thanks,


Jason




 From: Tom Brown tombrow...@gmail.com
To: user@storm.incubator.apache.org user@storm.incubator.apache.org 
Sent: Monday, August 25, 2014 12:27:03 PM
Subject: Re: Preventing storm from caching classes
 


That sounds like a very odd issue, as each storm worker runs within an 
independent JVM. Are you putting all your classes in a single jar file (that 
you upload to storm)? Is there any chance your build system is not including 
the newest versions? Can you open the jar file and verify that the correct 
versions are there?

--Tom





On Mon, Aug 25, 2014 at 10:02 AM, Vikas Agarwal vi...@infoobjects.com wrote:

Are you killing the topology before uploading new one?



On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com wrote:

Hello,


I would like to know if anybody knows how to prevent Storm from caching a 
previous version of the classes making up a topology. I am hitting problems 
where Storm is running old versions of my classes instead of the ones in the 
currently supplied jar file. I am having to include version numbers in the 
names of my classes to get the desired code to run.


Thanks,


Jason



-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc. 
Execution Matters
http://www.infoobjects.com 
2041 Mission College Boulevard, #280 
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax

Re: Preventing storm from caching classes

2014-08-25 Thread Jason Kania
I am killing the topology. I can even kill Storm, and the result remains the 
same once Storm is relaunched.


Jason




 From: Vikas Agarwal vi...@infoobjects.com
To: user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com 
Sent: Monday, August 25, 2014 12:02:25 PM
Subject: Re: Preventing storm from caching classes
 


Are you killing the topology before uploading new one?





On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com wrote:

Hello,


I would like to know if anybody knows how to prevent Storm from caching a 
previous version of the classes making up a topology. I am hitting problems 
where Storm is running old versions of my classes instead of the ones in the 
currently supplied jar file. I am having to include version numbers in the 
names of my classes to get the desired code to run.


Thanks,


Jason


-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc. 
Execution Matters
http://www.infoobjects.com 
2041 Mission College Boulevard, #280 
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax 

Determining Topology Ready State for Continuous Deployment/Integration

2014-08-25 Thread Aaron Levin
Hey,

I'm curious if there's a way to determine when a newly deployed topology is
ready.

I'm trying to tighten up our continuous deployment pipeline during
integration tests. I've got a simple end-to-end test to ensure a certain
message is delivered through the cluster, and it would be nice to know when
a topology is 'ready' so I can start queuing up messages.

By ready I mean something along the lines of:

- topology code has been deployed throughout the cluster
- all jvm processes have started
- (potentially): storm's internal acker has received one ack from each
component.

I've looked at the Nimbus API and it's not clear how much info I can glean
from this.
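
(One hedged sketch of what can be gleaned: the thrift client exposed through NimbusClient can report a topology's status and the uptime of each executor, which covers the first two bullets above but not the acking one. The snake_case getters below are the thrift-generated accessors as of Storm 0.9.x; treat this as an outline of the polling idea rather than a drop-in check.)

import java.util.Map;

import backtype.storm.generated.ClusterSummary;
import backtype.storm.generated.ExecutorSummary;
import backtype.storm.generated.Nimbus;
import backtype.storm.generated.TopologyInfo;
import backtype.storm.generated.TopologySummary;
import backtype.storm.utils.NimbusClient;
import backtype.storm.utils.Utils;

public class TopologyReadyCheck {
    // True once the named topology is ACTIVE and every executor reports a non-zero
    // uptime, i.e. its worker JVM has started and is heartbeating.
    public static boolean isReady(String topologyName) throws Exception {
        Map conf = Utils.readStormConfig();
        Nimbus.Client nimbus = NimbusClient.getConfiguredClient(conf).getClient();

        ClusterSummary cluster = nimbus.getClusterInfo();
        for (TopologySummary t : cluster.get_topologies()) {
            if (!t.get_name().equals(topologyName)) {
                continue;
            }
            if (!"ACTIVE".equals(t.get_status())) {
                return false;
            }
            TopologyInfo info = nimbus.getTopologyInfo(t.get_id());
            if (info.get_executors().isEmpty()) {
                return false;
            }
            for (ExecutorSummary e : info.get_executors()) {
                if (e.get_uptime_secs() <= 0) {
                    return false;   // executor assigned but not reporting yet
                }
            }
            return true;
        }
        return false;   // Nimbus does not know about the topology yet
    }
}

A test harness could poll isReady() with a timeout before queuing messages; verifying that the acker has already seen an ack from each component would still need an end-to-end probe message.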

Any help is appreciated. Thanks!

Best,

Aaron Levin
-- 
Aaron Levin
Data Scientist
demeure.com


Question on failing ack

2014-08-25 Thread Kushan Maskey
I have set up a topology to load a very large volume of data. Recently I
loaded about 60K records and found that there are some failed acks on a
few spouts but none on the bolts. Storm completed the run and seems
stable. Initially I started with a smaller amount of data, about 500
records, successfully, and then increased to 60K, where I saw the failed
acks.

Questions:
1. Does that mean that the spout was not able to read some messages from
Kafka? Since there are no failed acks on the bolts as per the UI, whatever
messages were received have been successfully processed by the bolts.
2. How do I interpret failed-ack numbers like acked: 315500
and failed: 2980?
Does this mean that 2980 records failed to be processed? If that is the
case, how do I avoid it from happening, because I will be losing
2980 records.
3. I also see that a few of the records failed to be inserted into the Cassandra
database. What is the best way to reprocess the data, as it is quite
difficult to do through the batch process that I am currently running?

LMK, thanks.

--
Kushan Maskey
817.403.7500


Re: Question on failing ack

2014-08-25 Thread Srinath C
I would suspect that at some point the rate at which the spouts emitted
exceeded the rate at which the bolts could process. Maybe you could look at
configuring the buffers (if you haven't yet done that). Do your records get
processed at a constant rate?
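
(If it helps, the usual first knob for this is topology.max.spout.pending, which caps how many tuples each spout task keeps in flight; the executor buffer sizes can be tuned alongside it. A sketch of setting both at submission time, with arbitrary placeholder values and assuming the 0.9.x Config constants named below:)

import backtype.storm.Config;

public class BackpressureConfSketch {
    // Builds a Config that limits un-acked tuples per spout task and resizes the
    // internal executor buffers; the numbers are only starting points to tune from.
    public static Config build() {
        Config conf = new Config();
        // At most 1000 un-acked tuples per spout task; the spout stops emitting when
        // the limit is reached, which keeps it from outrunning the bolts.
        conf.setMaxSpoutPending(1000);

        // Internal queue sizes (commonly powers of two).
        conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 1024);
        conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 1024);
        conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE, 32);
        return conf;
    }
}

Note that topology.max.spout.pending only takes effect when the spout emits tuples with message ids, i.e. when the topology runs reliably.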


On Tue, Aug 26, 2014 at 4:12 AM, Kushan Maskey 
kushan.mas...@mmillerassociates.com wrote:

 I have set up topology to load a very large volume of data. Recently I
 just loaded about 60K records and found out that there are some failed acks
 on few spouts but non on the bolts. Storm completed running and seem to
 look stable. Initially i started with a lesser amount of data like about
 500 records  successfully and then increased up to 60K where i saw the
 failed acks.

 Questions:
 1. Does that mean that the spout was not able to read some messages from
 Kafka? Since there are no failed ack on the bolts as per UI, what ever the
 message received has been successfully processed by the bolts.
 2. how do i interpret the numbers of failed acks like this acked:315500
  and failed: 2980.
 Does this mean that 2980 records failed to be processed? Is this is the
 case then, how do I avoid this from happening because I will be loosing
 2980 records.
 3. I also see that few of the records failed to be inserted into Cassandra
 database. What is the best way to reprocess the data again as it is quite
 difficult to do it through the batch process that I am currently running.

 LMK, thanks.

 --
 Kushan Maskey
 817.403.7500



Re: Question on failing ack

2014-08-25 Thread Michael Rose
Hi Kushan,

Depending on the Kafka spout you're using, it could be doing different
things when a tuple fails. However, if it's running reliably, the Cassandra
insertion failures would have forced a replay from the spout until they
completed.
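
(For reference, the replay behaviour described here relies on the bolts acking or failing each input tuple; a minimal sketch of the persisting bolt, with the Cassandra call left as a placeholder:)

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class CassandraWriterBoltSketch extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            writeToCassandra(tuple);    // placeholder for the real insert
            collector.ack(tuple);       // counted as "acked" in the UI
        } catch (Exception e) {
            collector.fail(tuple);      // counted as "failed"; the spout can then replay it
        }
    }

    private void writeToCassandra(Tuple tuple) {
        // placeholder: the real implementation would call the Cassandra client here
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt, no output streams
    }
}

Whether a failed tuple is actually replayed then depends on the spout's fail() implementation; the storm-kafka spout generally re-emits failed offsets.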

Michael Rose (@Xorlev https://twitter.com/xorlev)
Senior Platform Engineer, FullContact http://www.fullcontact.com/
mich...@fullcontact.com


On Mon, Aug 25, 2014 at 4:42 PM, Kushan Maskey 
kushan.mas...@mmillerassociates.com wrote:

 I have set up topology to load a very large volume of data. Recently I
 just loaded about 60K records and found out that there are some failed acks
 on few spouts but non on the bolts. Storm completed running and seem to
 look stable. Initially i started with a lesser amount of data like about
 500 records  successfully and then increased up to 60K where i saw the
 failed acks.

 Questions:
 1. Does that mean that the spout was not able to read some messages from
 Kafka? Since there are no failed ack on the bolts as per UI, what ever the
 message received has been successfully processed by the bolts.
 2. how do i interpret the numbers of failed acks like this acked:315500
  and failed: 2980.
 Does this mean that 2980 records failed to be processed? Is this is the
 case then, how do I avoid this from happening because I will be loosing
 2980 records.
 3. I also see that few of the records failed to be inserted into Cassandra
 database. What is the best way to reprocess the data again as it is quite
 difficult to do it through the batch process that I am currently running.

 LMK, thanks.

 --
 Kushan Maskey
 817.403.7500



Re: Preventing storm from caching classes

2014-08-25 Thread Vikas Agarwal
You can check whether the kill operation is dumping any error in nimbus.log. It
might be that it is unable to delete the jar somehow. Further, are you
always seeing the immediately previous version of your classes on
deployment of the new one, or a much older version of them? In the latter
case, it might be due to a previous jar being present in Storm's lib directory
or somewhere else from which it is getting onto the classpath.


On Mon, Aug 25, 2014 at 11:45 PM, Jason Kania jason.ka...@ymail.com wrote:

 I am killing the topology. I can even kill storm and the result remains
 the same once storm is relaunched.

 Jason

   --
  *From:* Vikas Agarwal vi...@infoobjects.com
 *To:* user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com

 *Sent:* Monday, August 25, 2014 12:02:25 PM

 *Subject:* Re: Preventing storm from caching classes

 Are you killing the topology before uploading new one?




 On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com
 wrote:

 Hello,

 I would like to know if anybody knows how to prevent Storm from caching a
 previous version of the classes making up a topology. I am hitting problems
 where Storm is running old versions of my classes instead of the ones in
 the currently supplied jar file. I am having to include version numbers in
 the names of my classes to get the desired code to run.

 Thanks,

 Jason




 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax





-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax


Re: Preventing storm from caching classes

2014-08-25 Thread Jason Kania
Thanks for the response.


I checked and didn't see any errors in the nimbus log. It is hard to tell 
exactly, but it seems that the classes being used vary over time and are one or 
more versions behind the current ones. To me it makes sense that it is 
something sneaking into the classpath, but as of yet, I do not know how.




 From: Vikas Agarwal vi...@infoobjects.com
To: user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com 
Sent: Monday, August 25, 2014 10:57:19 PM
Subject: Re: Preventing storm from caching classes
 


You can check if kill operation is dumping any error in nimbus.log. It might be 
the case that it is unable to delete the jar somehow. Further, are you always 
seeing the immediate previous version of your classes on deployment of new one 
OR it is quite old version of the same? In later case, it might be due to some 
previous jar being present in storm lib or somewhere else from where it is 
coming into the classpath.





On Mon, Aug 25, 2014 at 11:45 PM, Jason Kania jason.ka...@ymail.com wrote:

I am killing the topology. I can even kill storm and the result remains the 
same once storm is relaunched.



Jason





 From: Vikas Agarwal vi...@infoobjects.com
To: user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com 
Sent: Monday, August 25, 2014 12:02:25 PM

Subject: Re: Preventing storm from caching classes



Are you killing the topology before uploading new one?






On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com wrote:

Hello,


I would like to know if anybody knows how to prevent Storm from caching a 
previous version of the classes making up a topology. I am hitting problems 
where Storm is running old versions of my classes instead of the ones in the 
currently supplied jar file. I am having to include version numbers in the 
names of my classes to get the desired code to run.


Thanks,


Jason



-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc. 
Execution Matters
http://www.infoobjects.com 
2041 Mission College Boulevard, #280 
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax 




-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc. 
Execution Matters
http://www.infoobjects.com 
2041 Mission College Boulevard, #280 
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax 

Re: Preventing storm from caching classes

2014-08-25 Thread Vikas Agarwal
If you are running the topology with debug set to true, I guess the command
used to launch the workers would be in supervisor.log. Check it; it may
contain a clue to the wrong classpath. You can try running that
command directly from the console to further debug the issue.
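
(For completeness, topology.debug is just a flag on the Config passed at submission time; a small sketch, with the topology name and StormTopology coming from whatever builder you already use:)

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.StormTopology;

public class DebugSubmitSketch {
    // Submits a topology with topology.debug enabled so that emitted tuples are
    // logged in the worker logs.
    public static void submit(String name, StormTopology topology) throws Exception {
        Config conf = new Config();
        conf.setDebug(true);      // equivalent to topology.debug: true
        conf.setNumWorkers(1);
        StormSubmitter.submitTopology(name, conf, topology);
    }
}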


On Tue, Aug 26, 2014 at 9:00 AM, Jason Kania jason.ka...@ymail.com wrote:

 Thanks for the response.

 I checked and didn't see any errors in the nimbus log. It is hard to tell
 exactly, but it seems that the classes being used vary over time and are
 one or more versions behind the current ones. To me it makes sense that it
 is something sneaking into the classpath, but as of yet, I do not know how.

   --
  *From:* Vikas Agarwal vi...@infoobjects.com
 *To:* user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com

 *Sent:* Monday, August 25, 2014 10:57:19 PM

 *Subject:* Re: Preventing storm from caching classes

 You can check if kill operation is dumping any error in nimbus.log. It
 might be the case that it is unable to delete the jar somehow. Further, are
 you always seeing the immediate previous version of your classes on
 deployment of new one OR it is quite old version of the same? In later
 case, it might be due to some previous jar being present in storm lib or
 somewhere else from where it is coming into the classpath.




 On Mon, Aug 25, 2014 at 11:45 PM, Jason Kania jason.ka...@ymail.com
 wrote:

 I am killing the topology. I can even kill storm and the result remains
 the same once storm is relaunched.

 Jason

   --
  *From:* Vikas Agarwal vi...@infoobjects.com
 *To:* user@storm.incubator.apache.org; Jason Kania jason.ka...@ymail.com

 *Sent:* Monday, August 25, 2014 12:02:25 PM

 *Subject:* Re: Preventing storm from caching classes

 Are you killing the topology before uploading new one?




 On Mon, Aug 25, 2014 at 9:26 PM, Jason Kania jason.ka...@ymail.com
 wrote:

 Hello,

 I would like to know if anybody knows how to prevent Storm from caching a
 previous version of the classes making up a topology. I am hitting problems
 where Storm is running old versions of my classes instead of the ones in
 the currently supplied jar file. I am having to include version numbers in
 the names of my classes to get the desired code to run.

 Thanks,

 Jason




 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax





 --
 Regards,
 Vikas Agarwal
 91 – 9928301411

 InfoObjects, Inc.
 Execution Matters
 http://www.infoobjects.com
 2041 Mission College Boulevard, #280
 Santa Clara, CA 95054
 +1 (408) 988-2000 Work
 +1 (408) 716-2726 Fax





-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax