Re: Storm not processing topology without logs
I am getting the following error when trying to run the worker command directly on the console:

Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread main-SendThread(hdp.ambari:2181)
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-2
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-12-bolt1
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-10-bolt2
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-8-bolt3
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-14-spout
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-14-feed-stream-SendThread(localhost:2181)
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-14-feed-stream-SendThread(localhost:2181)
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread Thread-14-feed-stream-SendThread(hdp.ambari:2181)

As one of the possible causes, I looked for multiple netty jars as suggested in another mail thread; it didn't work. Can anyone tell me where I should look next to resolve the issue?

On Tue, Aug 26, 2014 at 2:20 PM, Vikas Agarwal vi...@infoobjects.com wrote:

However, now my topology is failing to start the worker process again. :( This time it is not showing me any good clue to resolve it. Running the command manually on the console causes an "Address already in use" error for the supervisor ports (6700, 6701). So it is not letting me move forward to see what the actual error is while running the worker.

On Mon, Aug 25, 2014 at 9:00 PM, Vikas Agarwal vi...@infoobjects.com wrote:

Yes, I was able to see the topology in the Storm UI, and nothing was logged into the worker logs.
However, as I mentioned, I was able to resolve it by finding a hint in the supervisor.log file that time.

On Mon, Aug 25, 2014 at 8:58 PM, Georgy Abraham itsmegeo...@gmail.com wrote:

Are you able to see the topology in the Storm UI or with the storm list command? And the worker mentioned in the UI doesn't have any log?
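A note on the duplicate-jar check mentioned in this thread (looking for multiple netty jars in the Storm lib directory): one way to sketch it is to group jar file names by artifact name and flag names that occur more than once. The class name `DupJars` and the version-stripping regex below are illustrative, not anything from Storm; the regex is a crude heuristic, not a real Maven versioning rule.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DupJars {
    // Group jar file names by artifact name (file name minus a trailing
    // "-<version>.jar" suffix) and keep only names that occur twice or more.
    // The version-stripping regex is a rough heuristic, not a Maven rule.
    static Map<String, List<String>> duplicates(List<String> jars) {
        Map<String, List<String>> byArtifact = new TreeMap<>();
        for (String jar : jars) {
            String artifact = jar.replaceAll("-\\d[\\w.\\-]*\\.jar$", "");
            byArtifact.computeIfAbsent(artifact, k -> new ArrayList<>()).add(jar);
        }
        byArtifact.values().removeIf(v -> v.size() < 2);
        return byArtifact;
    }

    public static void main(String[] args) {
        // hypothetical lib-directory listing for illustration
        List<String> libDir = List.of(
                "netty-3.2.2.Final.jar",
                "netty-3.9.0.Final.jar",
                "servlet-api-2.5.jar");
        System.out.println(duplicates(libDir));
    }
}
```

In practice one would feed this the actual file listing of the Storm lib directory rather than a hard-coded list.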
Re: Storm not processing topology without logs
Vikas, are you able to get past this error: "Running the command manually on console causes Address already in use error for supervisor ports (6700, 6701)"? Did you check whether there are any processes running on those ports?

-Harsha
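One quick way to answer this question from Java itself is to try binding the supervisor's worker ports: if the bind fails, some process is still holding the port. This is a minimal sketch (the class name `PortCheck` is made up for illustration):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if nothing is currently bound to the given TCP port
    // on this host.
    static boolean isFree(int port) {
        try (ServerSocket s = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false; // "Address already in use" lands here
        }
    }

    public static void main(String[] args) {
        // the supervisor worker ports from this thread
        for (int port : new int[] {6700, 6701}) {
            System.out.println(port + ": " + (isFree(port) ? "free" : "in use"));
        }
    }
}
```

On the console, something like `netstat -tlnp | grep 6700` would additionally show which process owns the port.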
Re: Storm not processing topology without logs
Yes, I am through it. I killed the processes created by the main supervisor process for ports 6700 and 6701 and then started the process for one of those ports. After that I faced issues due to multiple versions of the same library in the Storm lib directory, e.g. netty and servlet-api. After that I faced this stack overflow issue. Now I have even been able to fix it: multiple slf4j-log4j implementations were the issue behind the stack overflow.

Now I am back to the same state where the process just doesn't start. Running the worker command manually is not showing any log except this:

JMXetricAgent instrumented JVM, see https://github.com/ganglia/jmxetric
Aug 28, 2014 10:28:39 AM info.ganglia.gmetric4j.GMonitor start
INFO: Setting up 1 samplers

And then the process gets killed.
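The "multiple slf4j-log4j implementations" situation can be confirmed from code: listing every copy of slf4j's `StaticLoggerBinder` class visible on the classpath reveals conflicting bindings when more than one location turns up. A small sketch, assuming the hypothetical class name `Slf4jBindings`:

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class Slf4jBindings {
    // Collect every classpath location that provides the slf4j binding
    // class. Two or more entries means conflicting slf4j implementations
    // are on the classpath at once.
    static List<URL> bindings() throws IOException {
        return Collections.list(
                Slf4jBindings.class.getClassLoader()
                        .getResources("org/slf4j/impl/StaticLoggerBinder.class"));
    }

    public static void main(String[] args) throws IOException {
        List<URL> hits = bindings();
        System.out.println(hits.size() + " slf4j binding(s) found:");
        for (URL u : hits) {
            System.out.println(u);
        }
    }
}
```

Run against the worker's actual classpath, each jar containing a binding would be printed, pointing directly at the duplicates to remove.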
Re: Storm not processing topology without logs
If possible, can you post some logs from supervisor.log? I am interested in the log from when your supervisor starts.

-Harsha
Re: Storm not processing topology without logs
JMXetricAgent instrumented JVM, see https://github.com/ganglia/jmxetric
Aug 28, 2014 10:28:39 AM info.ganglia.gmetric4j.GMonitor start
INFO: Setting up 1 samplers

This is the only log now when I start it manually, and the supervisor log is still saying the same "still hasn't started", nothing more than that. It seems to me that this is some inconsistent state left over from the past errors, so I am restarting the machine itself to check whether it works after that.
Re: Storm not processing topology without logs
However, now my topology is failing to start the worker process again. :( This time it is not showing me any good clue to resolve it. Running the command manually on the console causes an "Address already in use" error for the supervisor ports (6700, 6701). So it is not letting me move forward to see what the actual error is while running the worker.
--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax
Storm not processing topology without logs
Hi,

I have started to explore Storm for distributed processing for a use case we were earlier fulfilling with a JMS-based MQ system. The topology worked after some effort. It has one spout (KafkaSpout from the kafka-storm project) and 3 bolts. The first bolt sets context for the other two bolts, which in turn do some processing on the tuples and persist the analyzed results in some DB (Mongo, Solr, HBase, etc.).

Recently the topology stopped working. I am able to submit the topology and it does not throw any error on submission; however, the nimbus.log and worker-6701.log files are not showing any progress, and the topology eventually does not consume any message. I don't suspect KafkaSpout, because if it were the culprit, at least some initialization logs of the spout and bolts should have been there in nimbus.log or the worker logs. Shouldn't they?

Here is the snippet of nimbus.log after uploading the jar to the cluster:

Uploading file from client to /hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Finished uploading file from client: /hadoop/storm/nimbus/inbox/stormjar-31fe068b-337b-428f-8ae2-fe13c706b2ab.jar
2014-08-25 07:07:49 b.s.d.nimbus [INFO] Received topology submission for aleads with conf {topology.max.task.parallelism nil, topology.acker.executors nil, topology.kryo.register nil, topology.kryo.decorators (), topology.name aleads, storm.id aleads-3-1408964869, modelId ut, topology.workers 1, topology.debug true}
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Activating aleads: aleads-3-1408964869
2014-08-25 07:07:50 b.s.s.EvenScheduler [INFO] Available slots: ([e56c2cc7-d35a-4355-9906-506618ff70c5 6701] [e56c2cc7-d35a-4355-9906-506618ff70c5 6700])
2014-08-25 07:07:50 b.s.d.nimbus [INFO] Setting new assignment for topology id aleads-3-1408964869: #backtype.storm.daemon.common.Assignment{:master-code-dir /hadoop/storm/nimbus/stormdist/aleads-3-1408964869, :node-host {e56c2cc7-d35a-4355-9906-506618ff70c5 hdp.ambari}, :executor-node+port {[2 2] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [3 3] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [4 4] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [5 5] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [6 6] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [7 7] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [8 8] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [9 9] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701], [1 1] [e56c2cc7-d35a-4355-9906-506618ff70c5 6701]}, :executor-start-time-secs {[1 1] 1408964870, [9 9] 1408964870, [8 8] 1408964870, [7 7] 1408964870, [6 6] 1408964870, [5 5] 1408964870, [4 4] 1408964870, [3 3] 1408964870, [2 2] 1408964870}}

Can anyone guess what I have done wrong and why Storm is not giving any error log anywhere?

Storm version is 0.9.1.2.1.3.0-563 (installed via HortonWorks)
Kafka version is 2.10-0.8.1.1
Storm-Kafka version is 0.9.2-incubating

--
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax
Re: Storm not processing topology without logs
Found the fix. I was stuck on this problem for the last 3-4 days, and it was apparently just waiting for me to join the Storm mailing list to be resolved. :) This time I found something in supervisor.log: it was dumping the jar_UUID but still hadn't started the actual worker java command, which was what was failing, and no error was logged for it. So I copied the command from the logs and ran it directly on the console, and that showed me the root cause: somehow "localhost" was getting appended to hdp.ambari (my host name), and because of that it was not able to resolve the server to run the command on. :(
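For anyone hitting the same wall, the trick of re-running the worker launch command by hand can be sketched like this. The sample log file and its exact line format are assumptions (modeled on the "Launching worker with command" INFO line the 0.9.x supervisor prints), not output from a real cluster:

```shell
# Hypothetical supervisor.log launch line (format assumed from Storm 0.9.x).
cat > /tmp/supervisor_sample.log <<'EOF'
2014-08-25 07:08:01 b.s.d.supervisor [INFO] Launching worker with command: java -server -Xmx768m backtype.storm.daemon.worker aleads-3-1408964869 e56c2cc7-d35a-4355-9906-506618ff70c5 6701 some-worker-id
EOF

# Extract the java command so it can be pasted into a console, where any
# startup failure (such as an unresolvable host name) is printed directly
# instead of being swallowed by the daemon.
grep -o 'Launching worker with command: .*' /tmp/supervisor_sample.log \
  | sed 's/^Launching worker with command: //'
```

Running the extracted command in a foreground shell makes failures like the bad "localhost"-plus-hostname value visible on stderr immediately.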
RE: Storm not processing topology without logs
Are you able to see the topology in the Storm UI or with the storm list command? And the worker mentioned in the UI doesn't have any log?

-----Original Message-----
From: Vikas Agarwal
Sent: 25-08-2014 05:25 PM
To: user@storm.incubator.apache.org
Subject: Storm not processing topology without logs
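Both checks can be done from the shell. The snippet below assumes the storm CLI is on $PATH and that worker logs land in the default HDP location, and it degrades to a diagnostic message when either assumption doesn't hold:

```shell
# List running topologies if the storm CLI is available.
if command -v storm >/dev/null 2>&1; then
  storm list
else
  echo "storm CLI not found on PATH"
fi

# Check whether the worker assigned in the UI ever wrote a log file
# (default HDP location assumed; adjust to your storm.yaml log dir).
ls -l /var/log/storm/worker-6701.log 2>/dev/null || echo "no worker-6701.log yet"
```

If storm list shows the topology as ACTIVE but the worker log file never appears, the worker process is dying before its logging starts, so the supervisor log is the next place to check.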
Re: Storm not processing topology without logs
Yes, I was able to see the topology in the Storm UI, and nothing was logged in the worker logs. However, as I mentioned, I was able to resolve it by finding a hint in the supervisor.log file this time.

On Mon, Aug 25, 2014 at 8:58 PM, Georgy Abraham itsmegeo...@gmail.com wrote:
> Are you able to see the topology in the Storm UI or with the storm list command? And the worker mentioned in the UI doesn't have any log?