java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/SOIMDB

2014-09-26 Thread Pavan Jakati G
Hi Team,

 

We are facing a weird issue when trying to run a topology in remote mode.
For some reason Apache Storm is unable to find the driver class. We have
placed the mysql-connector jar file under the /lib directory of Apache
Storm, and it does show up when we run storm classpath.

The code runs fine in local mode but fails in remote mode. Please help
us fix it. Thanks.

 

Code :

 

   Class.forName("com.mysql.jdbc.Driver");
   System.out.println("SUCCESS");

   String connectionUrl = "jdbc:mysql://192.168.10.7:3306/MYDB";
   String connectionUser = "root";
   String connectionPassword = "";

   SOIMConnection = DriverManager.getConnection(connectionUrl, connectionUser, connectionPassword);
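A minimal sketch of the same connection logic (not the original topology code) that also registers the MySQL driver instance explicitly; this can work around cases where Class.forName succeeds but DriverManager, sitting behind a different classloader, still does not see the driver. Packaging mysql-connector inside the topology's shaded jar, rather than relying on Storm's lib directory, is the other common fix:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MySqlConnector {
    // Sketch only: registering the driver instance means DriverManager does not
    // have to discover the driver through Class.forName alone.
    public static Connection open(String url, String user, String password) throws SQLException {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            DriverManager.registerDriver(new com.mysql.jdbc.Driver());
        } catch (ClassNotFoundException e) {
            throw new SQLException("MySQL driver is not on the worker classpath", e);
        }
        return DriverManager.getConnection(url, user, password);
    }
}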

 

Error seen in the worker log file:

 

2014-09-26 12:41:13 STDIO [INFO] ERRORRR HERE java.sql.SQLException: No suitable driver found for jdbc:mysql://192.168.10.7:3306/MYDB
2014-09-26 12:41:13 STDIO [INFO] SUCCESS

[the two lines above repeat identically for every tuple in the original log]

2014-09-26 12:41:13 STDIO [INFO] EMITTING FROM SRIDHAR3283529
2014-09-26 12:41:13 STDIO [INFO] SUCCESS

 

Regards,

PaVan...

 



RE: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/SOIMDB

2014-09-26 Thread Pavan Jakati G

Hi Team,

Restarting Apache Storm helped us here. Thanks.

-Original Message-
From: Pavan Jakati G
Sent: Fri 9/26/2014 12:45 PM
To: user@storm.incubator.apache.org
Subject: java.sql.SQLException: No suitable driver found for 
jdbc:mysql://localhost:3306/SOIMDB
 



What's the best way to guarantee external delivery of messages with Storm

2014-09-26 Thread Peter Neumark
Hi all,

We want to replace a legacy custom app with storm, but -being storm
newbies- we're not sure what's the best way to solve the following problem:

An HTTP endpoint returns the list of events which occurred between two
timestamps. The task is to continuously poll this event source for new
events, optionally perform some transformation and aggregation operations
on them, and finally make an HTTP request to an endpoint with some events.

We thought of a simple topology:
1. A clock-spout determines which time interval to process.
2. A bolt takes the time interval as input, and fetches the event list for
that interval from the event source, emitting the events as individual tuples.
3. After some processing of the tuples, we aggregate them into fixed size
groups, which we send in HTTP requests to an event sink.

The big question is how to make sure that all events are successfully
delivered to the event sink. I know Storm guarantees the delivery of tuples
within the topology, but how can I guarantee that the HTTP requests to
the event sink are also successful (and retried if necessary)?

All help, suggestions and pointers welcome!
Peter

-- 

*Peter Neumark*
DevOps guy @Prezi http://prezi.com


Re: What's the best way to guarantee external delivery of messages with Storm

2014-09-26 Thread Supun Kamburugamuva


I think this is not really a question about Storm, but rather about how
to deliver a message reliably to some sink. In my experience it is a bit
hard to achieve something like this with HTTP. This functionality is built
in to message brokers like RabbitMQ, ActiveMQ, Kafka, etc., and if you use a
broker to send your events to the sink you get a delivery guarantee.

Thanks,
Supun..







-- 
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: supu...@gmail.com;  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com


Re: What's the best way to guarantee external delivery of messages with Storm

2014-09-26 Thread Peter Neumark
Thanks for the quick response!
Unfortunately, we're forced to use HTTP.
Any ideas?





-- 

*Peter Neumark*
DevOps guy @Prezi http://prezi.com


Re: What's the best way to guarantee external delivery of messages with Storm

2014-09-26 Thread Derek Dagit
Will the HTTP event sink respond with some acknowledgement that it 
received whatever was sent?


If so, could this be as simple as telling your bolt not to ack the tuple 
until this response is received from the HTTP service?


--
Derek








Re: What's the best way to guarantee external delivery of messages with Storm

2014-09-26 Thread Supun Kamburugamuva
If we don't care about how many times the message is delivered (i.e.,
at-least-once semantics), then some error handling in HTTP is enough to get a
guarantee: retry the delivery until we get an HTTP 200/202 back. To get
exactly-once guarantees we may need to go through some more complicated protocol.
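Putting this together with Derek's suggestion above (ack the tuple only once the sink has answered), a minimal bolt sketch could look like the following; the endpoint URL and the "payload" field name are assumptions for the example, not anything from this thread:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class HttpDeliveryBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL("http://event-sink.example.com/events").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(tuple.getStringByField("payload").getBytes("UTF-8"));
            }
            int status = conn.getResponseCode();
            if (status == 200 || status == 202) {
                collector.ack(tuple);      // the sink confirmed the delivery
            } else {
                collector.fail(tuple);     // let the spout replay this tuple
            }
        } catch (Exception e) {
            collector.fail(tuple);         // network error: replay as well
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt, nothing to declare
    }
}

Failing the tuple on a non-2xx status or a network error hands redelivery back to the spout's replay mechanism, which is what gives the at-least-once behaviour.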





-- 
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: supu...@gmail.com;  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com


Re: LoggingMetricsConsumer

2014-09-26 Thread Raphael Hsieh
Thanks for your response John,
Could you explain to me how this would work when I am using Trident? Since
Trident abstracts bolts away from the user, how might I configure
my own MetricsConsumerBolt, or debug it to figure out why it isn't calling
handleDataPoints()? My metrics consumer's prepare() and cleanup()
methods are called, but never the handleDataPoints() function.

Thanks

On Thu, Sep 25, 2014 at 1:33 PM, John Reilly j...@inconspicuous.org wrote:

 It is called by the MetricsConsumerBolt which is created by storm when a
 worker is starting up.  When you define a metrics consumer, you should see
 metrics output every 60 seconds.  Also, I think the metrics code was only
 introduced in 0.9.0 so you would need to be running at least that version.

 One other issue I ran into when registering a metrics consumer was that
 the config args I was passing initially failed because of serialization issues.
 When I used a Map instead of a serializable class that I created, it worked
 fine.  For the packaged LoggingMetricsConsumer there is no config though.
 I think I did run into an issue when trying to configure
 both LoggingMetricsConsumer and my own metrics consumer.  IIRC, if
 initialization of my consumer failed, the LoggingMetricsConsumer would also
 fail. It may have depended on the order that I was registering them in,
 but I don't remember exactly.

 Cheers,
 John

 On Thu, Sep 25, 2014 at 10:07 AM, Raphael Hsieh raffihs...@gmail.com
 wrote:

  Hi, I've been trying to figure out why registering a
  LoggingMetricsConsumer isn't working for me.

 I've been able to figure out that it is indeed running, however the
 handleDataPoints() function is never called. Can someone explain to me
 how this class is used by Storm in order to log metrics?
 When is the handleDataPoints function called?

 Thanks

 --
 Raphael Hsieh







-- 
Raphael Hsieh


Re: What's the best way to guarantee external delivery of messages with Storm

2014-09-26 Thread Tyson Norris
Related to redelivery, it depends on the granularity of what you are 
comfortable replaying.

If the whole process can easily be replayed, you can allow the http 
timeout/failure to fail the tuple, and track that tuple in your spout for 
replaying later.

If you don’t want the whole process to replay, you really need to split out the 
http event sink handling to a separate spout+stream, e.g. buffered by a message 
broker. (or a separate topology completely). This is what we do, since we don’t 
care to have a tuple failure during an http update at the end cause a replay 
through the whole process. So we have 2 spouts, 2 streams - first spout does 
the internal processing and a few critical updates, then adds a message to a 
kafka topic. Second spout consumes the kafka topic just for sending the http 
requests, where failures will replay the tuples. We still need to do some 
tuning of this, for example, to build in some delay in the replay process.

Tyson





possible memory leak in supervisor? storm 0.9.0

2014-09-26 Thread Babar Ismail
We are running Storm 0.9.0 and we see a gradual increase in memory consumption by the 
supervisor process. It grows by approximately 4 KB every minute or so. Our machines show 
a gradual decrease in available memory over 7 days until the memory spikes back up and 
we see the supervisors restart.

It doesn't matter how much data the topologies are processing. Anywhere from 
200 rps down to 2 rps shows the same rate of memory leak.

This has been a recurring issue for us, and we were wondering whether it is a known issue. 
If not, any ideas on where to start looking?

Thanks,
Babar


Re: LoggingMetricsConsumer

2014-09-26 Thread John Reilly
I have not used Trident, but I think metrics should be handled the same way
for Trident bolts as for normal rich bolts.  You can confirm that the metrics
bolt is there by looking in the Storm UI: click on the topology, go down to the
bottom of the page, and click "Show System Stats"; it will show you the metrics
consumers under the list of bolts.  Executed, Acked, Process Latency and Execute
Latency are non-zero values for my 3 metrics consumers.

Hopefully that helps a little.  Even if you did not register any metrics of your
own, you should see the built-in metrics for each bolt and spout, so I'm not sure
why you don't see any (assuming you are using the default 60s interval for
the built-in metrics).
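For reference, registering the packaged consumer in 0.9.x is only a couple of lines; a minimal sketch (the parallelism hint of 1 is just an example):

import backtype.storm.Config;
import backtype.storm.metric.LoggingMetricsConsumer;

Config conf = new Config();
// one consumer task; data points are written to the metrics log
// on the default 60-second interval mentioned above
conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
// ...then submit the topology with this conf as usual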








nette reconnects

2014-09-26 Thread Tyson Norris
Hi - 
We are seeing workers dying and restarting quite a bit, apparently from netty 
connection issues.

For example, the log below shows:
* Reconnect for worker at 121:6700
* connection established to 121:6700
* closing connection to 121:6700
* Reconnect started to 121:6700

all within 1 second.

We have netty config updated to:
storm.messaging.netty.max_retries: 30
storm.messaging.netty.max_wait_ms: 1
storm.messaging.netty.min_wait_ms: 1000

And the workers die pretty quickly because often 30 retries does not end up 
with a connection. 

Any suggestions for how to prevent netty from closing a connection 
immediately? I could not see any obvious reason in the code that this would 
happen.

Thanks
Tyson

2014-09-26 09:32:03 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.13.121:6700... [5]
2014-09-26 09:32:04 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.13.121:6701... [6]
2014-09-26 09:32:11 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.10.180:6701... [6]
2014-09-26 09:32:12 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.10.180:6702... [6]
2014-09-26 09:32:13 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.13.121:6700... [6]
2014-09-26 09:32:14 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.13.121:6701... [7]
2014-09-26 09:32:18 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.13.121:6700... [7]
2014-09-26 09:32:18 b.s.m.n.Client [INFO] connection established to a remote 
host Netty-Client-/10.27.13.121:6700, [id: 0xb8b33bef, /10.27.10.180:33880 = 
/10.27.13.121:6700]
2014-09-26 09:32:18 b.s.m.n.Client [INFO] Closing Netty Client 
Netty-Client-/10.27.13.121:6700
2014-09-26 09:32:18 b.s.m.n.Client [INFO] Waiting for pending batchs to be sent 
with Netty-Client-/10.27.13.121:6700..., timeout: 60ms, pendings: 0
2014-09-26 09:32:19 b.s.m.n.Client [INFO] New Netty Client, connect to 
10.27.13.121, 6700, config: , buffer_size: 5242880
2014-09-26 09:32:19 b.s.m.n.Client [INFO] Reconnect started for 
Netty-Client-/10.27.13.121:6700... [0]
2014-09-26 09:32:19 b.s.m.n.Client [INFO] connection established to a remote 
host Netty-Client-/10.27.13.121:6700, [id: 0x9dc224e6, /10.27.10.180:33881 = 
/10.27.13.121:6700]



Re: nette reconnects

2014-09-26 Thread Varun Vijayaraghavan
Hey,

I've been facing the same issues in my topologies. It seems like a crash in
a single worker would trigger a reconnect from other workers for x amount
of time (30 x 10s = ~300 seconds in your case) before crashing themselves -
thus leading to a catastrophic failure in the topology.

There is a patch in 0.9.3 related to exponential backoff for netty
connections - which may address the issue - but until then I did two things
- a) increase the max_wait_ms to 15000 and b) decrease
supervisor.worker.start.timeout.secs to 30 - so that workers restart
earlier.
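In storm.yaml terms, the workaround above is just the two settings named (values as stated):

storm.messaging.netty.max_wait_ms: 15000
supervisor.worker.start.timeout.secs: 30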





-- 
- varun :)


Re: nette reconnects

2014-09-26 Thread Varun Vijayaraghavan
I first tried increasing the max_retries to a much higher number (300) but
that did not make a difference.





-- 
- varun :)


Re: nette reconnects

2014-09-26 Thread Derek Dagit

This could be https://issues.apache.org/jira/browse/STORM-510

The send thread is blocked on a connection attempt, and so no messages 
get sent out until the connection is re-established or it times out.


--
Derek



Re: nette reconnects

2014-09-26 Thread Tyson Norris
@varun - I still see workers waiting, reconnecting, closing connections, and 
dying, when using a longer max_wait_ms and shorter worker.start timeout

@derek - based on that bug, I will try to see if using a single worker per node 
(currently 4 workers per node) makes a difference.

Thanks
Tyson





secure storm UI

2014-09-26 Thread Kushan Maskey
Is there a way to secure the Storm UI page, for example by requiring a login so
that only authorized people can access it?

--
Kushan Maskey
817.403.7500
M. Miller  Associates http://mmillerassociates.com/
kushan.mas...@mmillerassociates.com


Re: secure storm UI

2014-09-26 Thread Derek Dagit
This is available in the security branch.  See 
https://github.com/apache/storm/blob/security/SECURITY.md


You do not need to enable all of the security features to get UI auth.

For authentication, look at ui.filter and ui.filter.params.

For authorization, nimbus.admins, ui.users, logs.users, and topology.users
--
Derek
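As a rough illustration only (every value below is a placeholder, not taken from the security docs), the storm.yaml shape for the settings Derek lists is roughly:

ui.filter: "org.example.SomeAuthenticationFilter"
ui.filter.params:
  "some.filter.param": "some value"
nimbus.admins:
  - "admin_user"
ui.users:
  - "ui_user"
logs.users:
  - "ui_user"
topology.users:
  - "topo_user"

See SECURITY.md on the security branch for the authoritative examples.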



RE: nette reconnects

2014-09-26 Thread Gunderson, Richard-CW
We see exactly the same thing in our worker logs. I don't know if this is the 
correct behavior, but I can confirm that we see it too.

Richard Gunderson
Mobile: (612) 860-1676

-Original Message-
From: Tyson Norris [mailto:tnor...@adobe.com] 
Sent: Friday, September 26, 2014 1:06 PM
To: user@storm.incubator.apache.org
Subject: nette reconnects




Trident Metrics Consumer

2014-09-26 Thread Raphael Hsieh
I've been following the tutorials here (
http://www.bigdata-cookbook.com/post/72320512609/storm-metrics-how-to)  to
create metrics in Storm.

However, I am using Trident, which abstracts bolts away from the user. How
can I go about creating metrics in Trident?

Thanks

-- 
Raphael Hsieh
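One approach that should carry over from the earlier LoggingMetricsConsumer thread: even though Trident hides the bolts, a function's prepare() still receives a TridentOperationContext, and (assuming the 0.9.x metrics API, which exposes registerMetric there) anything registered on it is reported to whatever metrics consumers are configured. A minimal sketch; the class and metric names are only examples:

import java.util.Map;

import backtype.storm.metric.api.CountMetric;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.operation.TridentOperationContext;
import storm.trident.tuple.TridentTuple;

public class CountingFunction extends BaseFunction {
    private transient CountMetric processed;

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
        // 60s bucket to match the default consumer interval discussed above
        processed = context.registerMetric("processed-tuples", new CountMetric(), 60);
    }

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        processed.incr();
        collector.emit(tuple.getValues());
    }
}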